WorldWideScience

Sample records for high classification accuracy

  1. Classification Accuracy Is Not Enough

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically diff...... classification accuracy obscures the aim of MGR: to select labels indistinguishable from those a person would choose....

  2. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
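
    A minimal sketch of the kind of expected-classification computation described above, assuming a normal approximation to each examinee's observed-score distribution (centred on the IRT ability estimate, with the conditional standard error as its spread); the abilities, standard errors and cut scores below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def expected_classification_table(theta, se, cuts):
    """Probability of each examinee's observed score falling in each category."""
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    probs = np.empty((len(theta), len(edges) - 1))
    for k in range(len(edges) - 1):
        # normal approximation centred on the ability estimate
        probs[:, k] = norm.cdf(edges[k + 1], theta, se) - norm.cdf(edges[k], theta, se)
    return probs

theta = np.array([-1.2, -0.3, 0.4, 1.5])   # illustrative ability estimates
se = np.array([0.35, 0.30, 0.30, 0.40])    # conditional standard errors
cuts = np.array([-0.5, 0.8])               # two cut scores -> three categories

probs = expected_classification_table(theta, se, cuts)
true_cat = np.digitize(theta, cuts)        # category implied by each ability estimate
accuracy = probs[np.arange(len(theta)), true_cat].mean()
print(probs.round(3), round(float(accuracy), 3))
```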

  3. Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.

    Directory of Open Access Journals (Sweden)

    Andre F Marquand

    Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.

  4. Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease.

    Science.gov (United States)

    Tsanas, Athanasios; Little, Max A; McSharry, Patrick E; Spielman, Jennifer; Ramig, Lorraine O

    2012-05-01

    There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
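
    A minimal scikit-learn sketch of the general pipeline this abstract describes (feature selection followed by random forest and SVM classifiers); the selection method, placeholder feature matrix, labels and subject identifiers are assumptions, not the authors' exact algorithms or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 132))          # placeholder for 132 dysphonia measures
y = rng.integers(0, 2, size=263)         # placeholder PD (1) / control (0) labels
groups = rng.integers(0, 43, size=263)   # placeholder subject identifiers

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, clf in classifiers.items():
    # keep only ten features, then classify; split by subject to avoid leakage
    model = make_pipeline(SelectKBest(mutual_info_classif, k=10), clf)
    scores = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5))
    print(name, round(scores.mean(), 3))
```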

  5. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    D. Blommesteijn (Dennis)

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and feature extraction to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the

  6. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    Blommesteijn, D.

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and feature extraction to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the study performed

  7. Research on the classification result and accuracy of building windows in high resolution satellite images: take the typical rural buildings in Guangxi, China, as an example

    Science.gov (United States)

    Li, Baishou; Gao, Yujiu

    2015-12-01

    The information extracted from high spatial resolution remote sensing images has become one of the important data sources for updating large-scale GIS spatial databases. Because of the large volume of regional high spatial resolution satellite image data, monitoring building information with high-resolution remote sensing, extracting small-scale building information, and analyzing its quality have become important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high resolution satellite images of typical rural buildings is proposed, based on the traditional K-means clustering algorithm. Separability and building density factors were used to describe the classification characteristics of clustering windows, and the sensitivity of these factors to the clustering result was studied from the perspective of the spectral separability between the target and the background in the image. This study showed that the number of samples is an important factor influencing clustering accuracy and performance, that the pixel ratio of the objects in the image and the separation factor can be used to determine the specific impact of cluster-window subsets on clustering accuracy, and that the count of window target pixels (Nw) does not alone affect clustering accuracy. The results provide an effective reference for the quality assessment of segmentation and classification of high spatial resolution remote sensing images.
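
    The sketch below illustrates the general idea of clustering a window's pixels with K-means and scoring target/background separability; the separability ratio used here is illustrative and not the paper's exact factor definition.

```python
import numpy as np
from sklearn.cluster import KMeans

def window_separability(window, k=2):
    """Cluster window pixels into k groups and score centre separation vs. spread."""
    pixels = window.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    centres = np.sort(km.cluster_centers_.ravel())
    within = np.sqrt(km.inertia_ / len(pixels))          # RMS within-cluster spread
    return (centres[-1] - centres[0]) / (within + 1e-9)  # larger = more separable

window = np.random.default_rng(1).integers(0, 255, size=(64, 64))  # placeholder image window
print(round(window_separability(window), 2))
```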

  8. Enhancing Accuracy of Plant Leaf Classification Techniques

    Directory of Open Access Journals (Sweden)

    C. S. Sumathi

    2014-03-01

    Plants have become an important source of energy and are a fundamental piece in the puzzle to solve the problem of global warming. Living beings also depend on plants for their food, hence it is of great importance to know about the plants growing around us and to preserve them. Automatic plant leaf classification is widely researched. This paper investigates the efficiency of MLP learning algorithms for plant leaf classification. Incremental back propagation, Levenberg–Marquardt and batch propagation learning algorithms are investigated. Plant leaf images are examined using three different Multi-Layer Perceptron (MLP) modelling techniques. Back propagation done in batch manner increases the accuracy of plant leaf classification. Results reveal that batch training is faster and more accurate than MLP with incremental training and Levenberg–Marquardt based learning for plant leaf classification. Various levels of semi-batch training, used on 9 species with 15 samples each (135 instances in total), show a roughly linear increase in classification accuracy.

  9. Classification accuracy analyses using Shannon’s Entropy

    Directory of Open Access Journals (Sweden)

    Shashi Poonam Indwar

    2014-11-01

    There are many methods for determining classification accuracy. In this paper, the significance of the entropy of training signatures in classification is shown. The entropy of the training signatures of a raw digital image represents the heterogeneity of the brightness values of the pixels in different bands. This implies that an image comprising a homogeneous lu/lc category will be associated with nearly the same reflectance values, resulting in a very low entropy value. On the other hand, an image characterized by the occurrence of diverse lu/lc categories will consist of largely differing reflectance values, due to which the entropy of such an image would be relatively high. This concept leads to analyses of classification accuracy. Although entropy has been used many times in RS and GIS, its use in the determination of classification accuracy is a new approach.
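
    A short sketch of the entropy computation the abstract reasons about: the Shannon entropy of a training signature's brightness-value histogram is low for a homogeneous land-use/land-cover class and higher for a heterogeneous one. The simulated homogeneous and heterogeneous signatures are placeholders.

```python
import numpy as np

def signature_entropy(band_values, bins=256):
    """Shannon entropy (bits) of a training signature's brightness histogram."""
    hist, _ = np.histogram(band_values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
homogeneous = rng.normal(120, 2, 5000).clip(0, 255)    # near-uniform brightness -> low entropy
heterogeneous = rng.uniform(0, 255, 5000)              # mixed brightness -> high entropy
print(round(signature_entropy(homogeneous), 2), round(signature_entropy(heterogeneous), 2))
```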

  10. Classification of textures in satellite image with Gabor filters and a multi layer perceptron with back propagation algorithm obtaining high accuracy

    Directory of Open Access Journals (Sweden)

    Adriano Beluco, Paulo M. Engel, Alexandre Beluco

    2015-01-01

    The classification of images, in many cases, is applied to identify an alphanumeric string, a facial expression or any other characteristic. In the case of satellite images, it is necessary to classify all the pixels of the image. This article describes a supervised classification method for remote sensing images that integrates the importance of attributes in selecting features with the efficiency of artificial neural networks in the classification process, resulting in high accuracy for real images. The method consists of a texture segmentation based on Gabor filtering followed by the image classification itself, applying a multilayer artificial neural network with a backpropagation algorithm. The method was first applied to a synthetic image, as training, and then applied to a satellite image. Some results of experiments are presented in detail and discussed. The application of the method to the synthetic image resulted in the identification of 89.05% of the pixels of the image, while applying it to the satellite image resulted in the identification of 85.15% of the pixels. The result for the satellite image can be considered a result of high accuracy.
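
    A minimal sketch of the two-stage idea (Gabor texture features feeding a multilayer perceptron trained with backpropagation), using scikit-image and scikit-learn as assumed tooling; the single-band image and per-pixel labels are placeholders, not the article's data.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPClassifier

def gabor_features(image, frequencies=(0.1, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    """Per-pixel texture features: magnitude responses of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2))
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
image = rng.random((64, 64))               # placeholder single-band image
labels = rng.integers(0, 3, size=64 * 64)  # placeholder per-pixel classes

X = gabor_features(image)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X, labels)
print(round((mlp.predict(X) == labels).mean(), 3))   # training accuracy only
```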

  11. Expected Classification Accuracy using the Latent Distribution

    Directory of Open Access Journals (Sweden)

    Fanmin Guo

    2006-10-01

    Rudner (2001, 2005) proposed a method for evaluating classification accuracy in tests based on item response theory (IRT). In this paper, a latent distribution method is developed. For comparison, both methods are applied to a set of real data from a state test. While the latent distribution method relaxes several of the assumptions needed to apply Rudner's method, both approaches yield closely comparable results. A simplified approach for applying Rudner's method and a short SPSS routine are presented.

  12. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    Science.gov (United States)

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  13. Consistency of accuracy assessment indices for soft classification: Simulation analysis

    Science.gov (United States)

    Chen, Jin; Zhu, Xiaolin; Imura, Hidefumi; Chen, Xuehong

    Accuracy assessment plays a crucial role in the implementation of soft classification. Even though many indices of accuracy assessment for soft classification have been proposed, the consistencies among these indices are not clear, and the impact of sample size on these consistencies has not been investigated. This paper examines two kinds of indices: map-level indices, including root mean square error (rmse), kappa, and overall accuracy (oa) from the sub-pixel confusion matrix (SCM); and category-level indices, including crmse, user accuracy (ua) and producer accuracy (pa). A careful simulation was conducted to investigate the consistency of these indices and the effect of sample size. The major findings were as follows: (1) The map-level indices are highly consistent with each other, whereas the category-level indices are not. (2) The consistency among map-level and category-level indices becomes weaker when the sample size decreases. (3) The rmse is more affected by error distribution among classes than are kappa and oa. Based on these results, we recommend that rmse can be used for map-level accuracy due to its simplicity, although kappa and oa may be better alternatives when the sample size is limited because the two indices are affected less by the error distribution among classes. We also suggest that crmse should be provided when map users are not concerned about the error source, whereas ua and pa are more useful when the complete information about different errors is required. The results of this study will be of benefit to the development and application of soft classifiers.
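
    For reference, the sketch below computes the standard hard-classification forms of the map-level and category-level indices compared in this record (oa, kappa, ua, pa) from an illustrative confusion matrix; the paper's sub-pixel confusion matrix variants are not reproduced.

```python
import numpy as np

# Illustrative confusion matrix: cm[i, j] = reference class i mapped to class j.
cm = np.array([[50,  5,  2],
               [ 4, 60,  6],
               [ 1,  3, 70]], dtype=float)

n = cm.sum()
oa = np.trace(cm) / n                                 # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (oa - pe) / (1 - pe)
ua = np.diag(cm) / cm.sum(axis=0)                     # user's accuracy, per map class
pa = np.diag(cm) / cm.sum(axis=1)                     # producer's accuracy, per reference class
print(round(oa, 3), round(kappa, 3), ua.round(3), pa.round(3))
```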

  14. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.

  15. Assessing Uncertainty in LULC Classification Accuracy by Using Bootstrap Resampling

    Directory of Open Access Journals (Sweden)

    Lin-Hsuan Hsiao

    2016-08-01

    Supervised land-use/land-cover (LULC) classifications are typically conducted using class assignment rules derived from a set of multiclass training samples. Consequently, classification accuracy varies with the training data set and is thus associated with uncertainty. In this study, we propose a bootstrap resampling and reclassification approach that can be applied for assessing not only the uncertainty in classification results of the bootstrap-training data sets, but also the classification uncertainty of individual pixels in the study area. Two measures of pixel-specific classification uncertainty, namely the maximum class probability and Shannon entropy, were derived from the class probability vector of individual pixels and used for the identification of unclassified pixels. Unclassified pixels that are identified using the traditional chi-square threshold technique represent outliers of individual LULC classes, but they are not necessarily associated with higher classification uncertainty. By contrast, unclassified pixels identified using the equal-likelihood technique are associated with higher classification uncertainty and they mostly occur on or near the borders of different land-cover.
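
    A hedged sketch of the bootstrap-reclassification idea: retrain a classifier on bootstrap resamples of the training pixels, accumulate per-pixel class votes into a class probability vector, and derive the two uncertainty measures named above. The classifier choice and all arrays are placeholders, not the study's data or model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 4))       # placeholder training spectra
y_train = rng.integers(0, 3, size=300)    # placeholder LULC labels
X_pixels = rng.normal(size=(1000, 4))     # placeholder image pixels

n_boot, n_classes = 50, 3
votes = np.zeros((len(X_pixels), n_classes))
for b in range(n_boot):
    idx = rng.integers(0, len(X_train), len(X_train))   # bootstrap resample of training data
    clf = DecisionTreeClassifier(random_state=b).fit(X_train[idx], y_train[idx])
    votes[np.arange(len(X_pixels)), clf.predict(X_pixels)] += 1

p = votes / n_boot                                           # per-pixel class probability vector
max_prob = p.max(axis=1)                                     # maximum class probability
entropy = -(p * np.log2(np.where(p > 0, p, 1))).sum(axis=1)  # Shannon entropy per pixel
print(round(max_prob.mean(), 3), round(entropy.mean(), 3))
```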

  16. A Visual mining based framework for classification accuracy estimation

    Science.gov (United States)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools, used in integration, can provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  17. Classification Accuracy of Neural Networks with PCA in Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Novakovic Jasmina

    2011-04-01

    This paper presents the classification accuracy of a neural network with principal component analysis (PCA) for feature selection in emotion recognition using facial expressions. Dimensionality reduction of a feature set is a common preprocessing step used for pattern recognition and classification applications. PCA is one of the popular methods used, and can be shown to be optimal using different optimality criteria. Experimental results, in which we achieved a recognition rate of approximately 85% when testing six emotions on a benchmark image data set, show that neural networks with PCA are effective in emotion recognition using facial expressions.
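
    A minimal sketch of a PCA-plus-neural-network pipeline of the kind described above, using scikit-learn as assumed tooling; the feature matrix, emotion labels and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 1024))   # placeholder facial-expression feature vectors
y = rng.integers(0, 6, size=360)   # six emotion classes

model = make_pipeline(PCA(n_components=40),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
print(round(cross_val_score(model, X, y, cv=5).mean(), 3))
```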

  18. Impact of spatial resolution on correlation between segmentation evaluation metrics and forest classification accuracy

    Science.gov (United States)

    Švab Lenarčič, Andreja; Ritlop, Klemen; Đurić, Nataša; Čotar, Klemen; Oštir, Krištof

    2015-10-01

    Slovenia is one of the most forested countries in Europe. Its forest management authorities need information about the forest extent and state, as their responsibility lies in forest observation and preservation. Together with appropriate geographic information system mapping methods, remotely sensed data represent an essential tool for effective and sustainable forest management. Despite the large data availability, suitable mapping methods still present a big challenge in terms of their speed, which is often affected by the huge amount of data. The speed of the classification method could be maximised if each of the steps in object-based classification were automated. However, automation is hard to achieve, since segmentation requires choosing optimum parameter values for optimal classification results. This paper focuses on the analysis of segmentation and classification performance and their correlation in a range of segmentation parameter values applied in the segmentation step. In order to find out which spatial resolution is still suitable for forest classification, forest classification accuracies obtained by using four images with different spatial resolutions were compared. Results of this study indicate that all high or very high spatial resolutions are suitable for optimal forest segmentation and classification, as long as appropriate scale and merge parameter combinations are used in the object-based classification. If the computation interval includes all segmentation parameter combinations, all segmentation-classification correlations are spatial resolution independent and are generally high. If the computation interval includes over- or optimal-segmentation parameter combinations, most segmentation-classification correlations are spatial resolution dependent.

  19. USE SATELLITE IMAGES AND IMPROVE THE ACCURACY OF HYPERSPECTRAL IMAGE WITH THE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Javadi

    2015-12-01

    The best technique to extract information from remotely sensed images is classification. The problem with traditional classification methods is that each pixel is assigned to a single class, presuming that all pixels within the image are pure. Mixed-pixel classification, or spectral unmixing, is a process that extracts the proportions of the pure components of each mixed pixel. Hyperspectral images have higher spectral resolution than multispectral images. In this paper, pixel-based classification methods such as the spectral angle mapper and maximum likelihood classification, and a subpixel classification method (linear spectral unmixing), were implemented on AVIRIS hyperspectral images. Then, pixel-based and subpixel-based classification algorithms were compared. Also, the capabilities and advantages of the linear spectral unmixing method were investigated. The spectral unmixing method implemented here is an effective technique for classifying a hyperspectral image, giving a classification accuracy of about 89%. The results of classification when applied to the original images are not good, because some of the hyperspectral image bands are subject to absorption and contain only little signal. So it is necessary to prepare the data at the beginning of the process. The bands can be sorted according to their variance; in bands with a high variance, we can distinguish the features from each other better, in order to increase the accuracy of classification. Also, applying the MNF transformation to the hyperspectral images increases the individual class accuracies of the pixel-based classification methods and the unmixing method by about 20 percent and 9 percent, respectively.
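
    The sketch below illustrates two of the per-pixel operations compared in this record: the spectral angle between a pixel and a reference spectrum, and a simple non-negative least-squares linear unmixing of a pixel into endmember fractions. The endmember and pixel spectra are synthetic, and the sum-to-one step is a simplification rather than the paper's exact constrained unmixing.

```python
import numpy as np
from scipy.optimize import nnls

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cosang = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

endmembers = np.array([[0.1, 0.4, 0.6, 0.8],
                       [0.5, 0.5, 0.3, 0.2],
                       [0.9, 0.7, 0.2, 0.1]]).T          # shape: bands x endmembers
pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 2]  # synthetic mixed pixel

fractions, _ = nnls(endmembers, pixel)    # non-negative abundance estimates
fractions /= fractions.sum()              # crude sum-to-one normalisation
print(round(spectral_angle(pixel, endmembers[:, 0]), 3), fractions.round(3))
```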

  20. High accuracy flexural hinge development

    Science.gov (United States)

    Santos, I.; Ortiz de Zárate, I.; Migliorero, G.

    2005-07-01

    This document provides a synthesis of the technical results obtained in the frame of the HAFHA (High Accuracy Flexural Hinge Assembly) development performed by SENER (in charge of design, development, manufacturing and testing at component and mechanism levels) with EADS Astrium as subcontractor (in charge of making an inventory of candidate applications among existing and emerging projects, establishing the requirements and performing system level testing) under ESA contract. The purpose of this project has been to develop a competitive technology for a flexural pivot, usable in highly accurate and dynamic pointing/scanning mechanisms. Compared with other solutions (e.g. magnetic or ball bearing technologies), flexural hinges are the appropriate technology for accurately guiding a mobile payload over a limited angular range around one rotation axis.

  1. The effect of atmospheric and topographic correction methods on land cover classification accuracy

    Science.gov (United States)

    Vanonckelen, Steven; Lhermitte, Stefaan; Van Rompaey, Anton

    2013-10-01

    of different land cover types in the landscape. Finally, coupled correction methods proved most efficient on weakly illuminated slopes. After correction, accuracies in the low illumination zone (cos β ≤ 0.65) were improved more than in the moderate and high illumination zones. Considering all results, the best overall classification results were achieved after combination of the transmittance function correction with pixel-based Minnaert or pixel-based C-topographic correction. Furthermore, results of this bi-temporal study indicated that the topographic component had a higher influence on classification accuracy than the atmospheric component and that it is worthwhile to invest in both atmospheric and topographic corrections in a multi-temporal study.

  2. Classification of features selected through Optimum Index Factor (OIF) for improving classification accuracy

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kaushal

    2011-01-01

    The present investigation was performed to determine if the features selected through the Optimum Index Factor (OIF) could provide improved classification accuracy for the various categories on the satellite images of the individual years, as well as on stacked images of two different years, as compared to all the features considered together. Further, in order to determine whether the classification accuracy of the different categories increases with the OIF values of the features extracted from both the individual years' and stacked images, we performed linear regression between the producer's accuracy (PA) of the various categories and the OIF values of the different combinations of the features. The investigation demonstrated a significant improvement in the PA of two impervious categories, viz. moderate built-up and low density built-up, determined from the classification of the bands and principal components associated with the highest OIF value, as compared to all the bands and principal components, for both the individual years' and stacked images respectively. Regression analyses exhibited positive trends between the regression coefficients and OIF values for the various categories determined for the individual years' and stacked images respectively, signifying a direct relationship between the increase in information content and the corresponding increase in OIF values. The research proved that features extracted through OIF from both the individual years' and stacked images are capable of providing significantly improved PA as compared to all the features pooled together.
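
    For reference, a short sketch of the Optimum Index Factor as it is commonly defined (sum of band standard deviations divided by the sum of absolute pairwise correlations for a band triplet); the six-band array is a placeholder and the ranking loop is illustrative rather than the paper's implementation.

```python
import numpy as np
from itertools import combinations

def oif_ranking(bands):
    """bands: (n_bands, n_pixels) array. Return band triplets sorted by OIF, best first."""
    std = bands.std(axis=1)
    corr = np.corrcoef(bands)
    scores = []
    for i, j, k in combinations(range(bands.shape[0]), 3):
        num = std[i] + std[j] + std[k]
        den = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
        scores.append(((i, j, k), num / den))
    return sorted(scores, key=lambda t: t[1], reverse=True)

bands = np.random.default_rng(0).normal(size=(6, 10000))   # placeholder 6-band image
print(oif_ranking(bands)[:3])                               # three highest-OIF triplets
```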

  3. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    Science.gov (United States)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and actual wafer prints.

  4. Classification Accuracy of Sequentially Administered WAIS-IV Short Forms.

    Science.gov (United States)

    Ryan, Joseph J; Kreiner, David S; Gontkovsky, Samuel T; Glass Umfleet, Laura

    2015-01-01

    A Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) short form (SF) may be effective for ruling out subnormal intelligence. To create a useful SF, subtest administration should follow the order prescribed in the manual and, depending upon individual performance, be terminated after completion of 2, 3, 4, or 5 subtests. One hundred and twenty-two patients completed the WAIS-IV. In two analyses, Full-Scale IQs (FSIQs) ≤69 and ≤79 were classified as impairment. Classification accuracy statistics indicated that all SFs using both cutoff scores exceeded the base rate (i.e., 14% and 34%) of subnormal intelligence, with hit rates ranging from 84% to 95%. The FSIQ cutoff of ≤69 had poor sensitivity for detecting impaired intellectual functioning with the 2-, 3-, 4-, and 5-subtest SFs; specificity, positive predictive value (PPV), and negative predictive value (NPV) were excellent for each SF. With the FSIQ cutoff of ≤79, sensitivity was strong to excellent for the 3-, 4-, and 5-subtest SFs as were specificity, PPV, and NPV.
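
    The classification-accuracy statistics reported above can be computed from a 2x2 decision table as in the sketch below; the counts are hypothetical and are not the study's data.

```python
# Hypothetical counts: short form flags impairment (positive) vs. the FSIQ criterion.
tp, fn, fp, tn = 12, 5, 3, 102

sensitivity = tp / (tp + fn)               # impaired cases the short form detects
specificity = tn / (tn + fp)               # non-impaired cases correctly passed
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
hit_rate = (tp + tn) / (tp + fn + fp + tn)
print(round(sensitivity, 2), round(specificity, 2), round(ppv, 2), round(npv, 2), round(hit_rate, 2))
```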

  5. Evaluation criteria for software classification inventories, accuracies, and maps

    Science.gov (United States)

    Jayroe, R. R., Jr.

    1976-01-01

    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, on agreement of the classification maps with ground truth maps, and reduces back to the original information normally found in a contingency table.

  6. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    Science.gov (United States)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. The removal of noise from the image by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps, namely image denoising, resolution enhancement, and classification, for improving accuracy in a noisy image. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. The quantitative performance measures, such as peak signal to noise ratio and classification accuracy, show the significance of the proposed techniques.

  7. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    Science.gov (United States)

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
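
    A minimal 1-D convolutional network sketch for three-class classification of multichannel fNIRS epochs, written in PyTorch as an assumed framework; the channel count, epoch length and architecture are placeholders rather than the authors' network.

```python
import torch
import torch.nn as nn

class FNIRSNet(nn.Module):
    def __init__(self, channels=20, classes=3):
        super().__init__()
        # two convolution blocks act as the automatic feature extractor
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, classes)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

x = torch.randn(8, 20, 100)                # 8 epochs, 20 channels, 100 time samples
print(FNIRSNet()(x).shape)                 # -> torch.Size([8, 3])
```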

  8. Optimal region growing segmentation and its effect on classification accuracy

    NARCIS (Netherlands)

    Gao, Y.; Mas, J.F.; Kerle, N.; Navarrete Pacheco, J.A.

    2011-01-01

    Image segmentation is a preliminary and critical step in object-based image classification. Its proper evaluation ensures that the best segmentation is used in image classification. In this article, image segmentations with nine different parameter settings were carried out with a multi-spectral Lan

  9. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    accuracy for single 2D images being higher than 90% for the first time. In particular, the classification accuracy for the easily confused endomembrane compartments (endoplasmic reticulum, Golgi, endosomes, lysosomes) was improved by 5–15%. We achieved further improvements when classification was conducted on image sets rather than on individual cell images. Conclusions: The availability of accurate, fast, automated classification systems for protein location patterns in conjunction with high throughput fluorescence microscope imaging techniques enables a new subfield of proteomics, location proteomics. The accuracy and sensitivity of this approach represents an important alternative to low-resolution assignments by curation or sequence-based prediction.

  10. Does Maximizing Information at the Cut Score Always Maximize Classification Accuracy and Consistency?

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2016-01-01

    A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…

  11. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by cons

  12. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  13. Assessing the Accuracy of Prediction Algorithms for Classification

    DEFF Research Database (Denmark)

    Baldi, P.; Brunak, Søren; Chauvin, Y.

    2000-01-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, quadratic error measures and other distances, and correlation coefficients, to information theoretic measures such as relative entropy and mutual...

  14. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept. of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept. of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Universidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks; the largest possible number of radiographic patterns was used, together with statistical inference techniques based on random selection of samples with and without replacement (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)
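
    A short sketch of the bootstrap idea mentioned above: resampling an evaluation set with replacement to put an interval around the estimated classification accuracy. The predicted and true labels are simulated, not the weld-defect data.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=400)                       # six defect classes (simulated)
y_pred = np.where(rng.random(400) < 0.8, y_true, rng.integers(0, 6, size=400))

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))         # resample with replacement
    boot.append((y_true[idx] == y_pred[idx]).mean())
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(float(np.mean(boot)), 3), (round(float(lo), 3), round(float(hi), 3)))
```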

  15. Effects of atmospheric correction and pansharpening on LULC classification accuracy using WorldView-2 imagery

    Directory of Open Access Journals (Sweden)

    Chinsu Lin

    2015-05-01

    Changes of Land Use and Land Cover (LULC) affect the atmospheric, climatic, and biological spheres of the earth. An accurate LULC map offers detailed information for resources management and intergovernmental cooperation to debate global warming and biodiversity reduction. This paper examined the effects of pansharpening and atmospheric correction on LULC classification. Object-Based Support Vector Machine (OB-SVM) and Pixel-Based Maximum Likelihood Classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification if it is conducted on the original multispectral image. Nevertheless, pansharpening plays a much more important role in the classification accuracy than the atmospheric correction. It can help to increase classification accuracy by 12% on average compared to classification without pansharpening. PB-MLC and OB-SVM achieved similar classification rates. This study indicated that the LULC classification accuracy using PB-MLC and OB-SVM is 82% and 89% respectively. A combination of atmospheric correction, pansharpening, and OB-SVM could offer promising LULC maps from WorldView-2 multispectral and panchromatic images.

  16. High Accuracy Imaging Polarimetry with NICMOS

    CERN Document Server

    Batcheldor, D; Hines, D C; Schmidt, G D; Axon, D J; Robinson, A; Sparks, W; Tadhunter, C

    2008-01-01

    The ability of NICMOS to perform high accuracy polarimetry is currently hampered by an uncalibrated residual instrumental polarization at a level of 1.2-1.5%. To better quantify and characterize this residual we obtained observations of three polarimetric standard stars at three separate space-craft roll angles. Combined with archival data, these observations were used to characterize the residual instrumental polarization to enable NICMOS to reach its full polarimetric potential. Using these data, we calculate values of the parallel transmission coefficients that reproduce the ground-based results for the polarimetric standards. The uncertainties associated with the parallel transmission coefficients, a result of the photometric repeatability of the observations, dominate the accuracy of p and theta. However, the new coefficients now enable imaging polarimetry of targets with p~1.0% at an accuracy of +/-0.6% and +/-15 degrees.

  17. High accuracy FIONA-AFM hybrid imaging.

    Science.gov (United States)

    Fronczek, D N; Quammen, C; Wang, H; Kisker, C; Superfine, R; Taylor, R; Erie, D A; Tessmer, I

    2011-04-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes.

  18. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part to part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  19. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing with range accuracy below 1 m at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera, and a Nd:YAG passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...

  20. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    Science.gov (United States)

    Kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policies and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite images in crop type identification and area estimation has been common for two decades due to their capability of monitoring large areas, rapid data acquisition and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at a large scale. With increasing spatial resolution of satellite images, the image classification methods used to derive information from them have become important due to the increase of spectral heterogeneity within land objects. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images dated 2012, in order to investigate the accuracy of these methods in the determination of cotton- and corn-planted lands and their area estimation. The study area was selected in Sanliurfa Province, located in Southeastern Turkey, which contributes to Turkey's agricultural production in a major way. Classification results were compared in terms of crop type identification using

  1. The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy

    Science.gov (United States)

    Wyse, Adam E.

    2011-01-01

    In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…

  2. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    Science.gov (United States)

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  3. Klatskin tumors and the accuracy of the Bismuth-Corlette classification.

    Science.gov (United States)

    Paul, Andreas; Kaiser, Gernot M; Molmenti, Ernesto P; Schroeder, Tobias; Vernadakis, Spiridon; Oezcelik, Arzu; Baba, Hideo A; Cicinnati, Vito R; Sotiropoulos, Georgios C

    2011-12-01

    The Bismuth-Corlette (BC) classification is the current preoperative standard to assess hilar cholangiocarcinomas (HC). The aim of this study is to evaluate the accuracy, sensitivity, and prognostic value of the BC classification. Data of patients undergoing resection for HC were analyzed. Endoscopic retrograde cholangiography and standard computed tomography were undertaken in all cases. Additional 3D-CT-reconstructions, magnetic resonance imaging, and percutaneous transhepatic cholangiography were obtained in selected patients. A systematic review and meta-analysis of the literature was performed. Ninety patients underwent resection of the hilar bile duct confluence, with right or left hemihepatectomy in 68 instances. The overall accuracy of the BC classification was 48 per cent. Rates of BC under- and over-estimation were 29 per cent and 23 per cent, respectively. The addition of MRI, 3D-CT-reconstructions, or percutaneous transhepatic cholangiography improved the accuracy to 49 per cent (P = 1.0), 53 per cent (P = 0.074), and 64 per cent (P < 0.001), respectively. Lowest sensitivity rates were for BC Type IIIA/IIIB tumors. Meta-analysis of published BC data corresponding to 540 patients did not reach significance. The BC classification has low accuracy and no prognostic value in cases of HC undergoing resection.

  4. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  5. Associations between psychologists' thinking styles and accuracy on a diagnostic classification task

    NARCIS (Netherlands)

    Aarts, A.A.; Witteman, C.L.M.; Souren, P.M.; Egger, J.I.M.

    2012-01-01

    The present study investigated whether individual differences between psychologists in thinking styles are associated with accuracy in diagnostic classification. We asked novice and experienced clinicians to classify two clinical cases of clients with two co-occurring psychological disorders. No sig

  6. Classification Accuracy of Nonword Repetition when Used with Preschool-Age Spanish-Speaking Children

    Science.gov (United States)

    Guiberson, Mark; Rodriguez, Barbara L.

    2013-01-01

    Purpose: The purpose of the present study was to (a) describe and compare the nonword repetition (NWR) performance of preschool-age Spanish-speaking children (3- to 5-year-olds) with and without language impairment (LI) across 2 scoring approaches and (b) to contrast the classification accuracy of a Spanish NWR task when item-level and percentage…

  7. Examining the Classification Accuracy of a Vocabulary Screening Measure with Preschool Children

    Science.gov (United States)

    Marcotte, Amanda M.; Clemens, Nathan H.; Parker, Christopher; Whitcomb, Sara A.

    2016-01-01

    This study investigated the classification accuracy of the "Dynamic Indicators of Vocabulary Skills" (DIVS) as a preschool vocabulary screening measure. With a sample of 240 preschoolers, fall and winter DIVS scores were used to predict year-end vocabulary risk using the 25th percentile on the "Peabody Picture Vocabulary Test--Third…

  8. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    Science.gov (United States)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High resolution hydro-acoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automated classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), made it possible to map 100 km2 of surficial sediment with a resolution and coverage never acquired before in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed for modelling the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine grained matrix, percentages of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  9. Verbal fluency indicators of malingering in traumatic brain injury: classification accuracy in known groups.

    Science.gov (United States)

    Curtis, Kelly L; Thompson, Laura K; Greve, Kevin W; Bianchini, Kevin J

    2008-09-01

    A known-groups design was used to determine the classification accuracy of verbal fluency variables in detecting Malingered Neurocognitive Dysfunction (MND) in traumatic brain injury (TBI). Participants were 204 TBI and 488 general clinical patients. The Slick et al. (1999) criteria were used to classify the TBI patients into non-MND and MND groups. An educationally corrected FAS Total Correct word T-score proved to be the most accurate of the several verbal fluency indicators examined. Classification accuracy of this variable at specific cutoffs is presented in a cumulative frequency table. This variable accurately differentiated non-MND from MND mild TBI patients but its accuracy was unacceptable in moderate/severe TBI. The clinical application of these findings is discussed.

  10. Combining atlas based segmentation and intensity classification with nearest neighbor transform and accuracy weighted vote.

    Science.gov (United States)

    Sdika, Michaël

    2010-04-01

    In this paper, different methods to improve atlas based segmentation are presented. The first technique is a new mapping of the labels of an atlas consistent with a given intensity classification segmentation. This new mapping combines the two segmentations using the nearest neighbor transform and is especially effective for complex and folded regions like the cortex where the registration is difficult. Then, in a multi atlas context, an original weighting is introduced to combine the segmentation of several atlases using a voting procedure. This weighting is derived from statistical classification theory and is computed offline using the atlases as a training dataset. Concretely, the accuracy map of each atlas is computed and the vote is weighted by the accuracy of the atlases. Numerical experiments have been performed on publicly available in vivo datasets and show that, when used together, the two techniques provide an important improvement of the segmentation accuracy.
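
    An illustrative sketch of the accuracy-weighted vote: each registered atlas proposes a label per voxel and its vote is weighted by a per-voxel accuracy map estimated offline, as described above. The arrays are placeholders and the nearest-neighbour label remapping step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atlases, n_voxels, n_labels = 5, 1000, 4
atlas_labels = rng.integers(0, n_labels, size=(n_atlases, n_voxels))   # registered atlas segmentations
accuracy_maps = rng.uniform(0.5, 1.0, size=(n_atlases, n_voxels))      # offline per-voxel accuracy estimates

scores = np.zeros((n_labels, n_voxels))
for a in range(n_atlases):
    # each atlas votes for its label, weighted by its local accuracy
    scores[atlas_labels[a], np.arange(n_voxels)] += accuracy_maps[a]

fused = scores.argmax(axis=0)              # label with the highest weighted support
print(fused[:10])
```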

  11. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    Science.gov (United States)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the ability to analyse remote sensing imagery, and the classification and extraction of remote sensing images is the primary information source for GIS in LUCC applications. How to increase classification accuracy is therefore an important topic in remote sensing research, and adding features and developing new classification methods are the main ways to improve it. The ant colony algorithm, an agent-based method from the field of nature-inspired computation, represents a uniform intelligent computation mode, and its application to remote sensing image classification is a new approach within swarm intelligence. Studying the applicability of the ant colony algorithm with additional features, and exploring its advantages and performance, is therefore of considerable significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database was built integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, slope, aspect) and textural information (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation). Classification rules based on the different characteristics were discovered from the samples through the ant colony algorithm, and a classification test was performed based on these rules. At the same time, the accuracies were compared against the traditional maximum likelihood method, the C4.5 algorithm and rough set classification. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, near-term land use and cover changes in Fuzhou were studied and mapped using remote sensing technology based on the ant colony algorithm.

  12. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...

  13. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    Science.gov (United States)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes, adapted from the CORINE nomenclature, were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were applied. Both Sentinel-2 and Landsat-8 images were resampled to a 30 m pixel size after geometric correction, and similar spectral bands were selected for both satellites to create a comparable base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to identify the eight land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After classification, accuracy results were compared to find the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.

  14. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure such as overall classification accuracy, which is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but it may also be formed via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively, both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was lower, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
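
    The effect described above can be reproduced in a few lines. The sketch below, which assumes scikit-learn and is only a schematic illustration of the sampling issue (not the study's experiment), trains a classifier on an imbalanced synthetic dataset and then estimates its accuracy from a validation set drawn either by simple random sampling or by class-balanced stratified sampling; the stratified estimate typically diverges from the accuracy on the full hold-out set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Imbalanced two-class problem (90% / 10%), mimicking unequal class abundance.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# (a) Validation set formed by simple random sampling: reflects class abundance.
idx_rand = rng.choice(len(y_rest), size=300, replace=False)
acc_random = accuracy_score(y_rest[idx_rand], clf.predict(X_rest[idx_rand]))

# (b) Validation set formed by stratified sampling with balanced classes.
idx0 = rng.choice(np.where(y_rest == 0)[0], size=150, replace=False)
idx1 = rng.choice(np.where(y_rest == 1)[0], size=150, replace=False)
idx_strat = np.concatenate([idx0, idx1])
acc_balanced = accuracy_score(y_rest[idx_strat], clf.predict(X_rest[idx_strat]))

# Reference: accuracy on the full hold-out set (the population of interest).
acc_full = accuracy_score(y_rest, clf.predict(X_rest))
print(acc_random, acc_balanced, acc_full)
```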

  15. Fast and High Accuracy Wire Scanner

    CERN Document Server

    Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A

    2009-01-01

    Scanning of a high intensity particle beam imposes challenging requirements on a Wire Scanner system. It is expected to reach a scanning speed of 20 m.s-1 with a position accuracy of the order of 1 μm. In addition a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor, mounted on the rotary shaft inside the vacuum chamber, has to withstand a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...

  16. EVALUATION OF DECISION TREE CLASSIFICATION ACCURACY TO MAP LAND COVER IN CAPIXABA, ACRE

    Directory of Open Access Journals (Sweden)

    Symone Maria de Melo Figueiredo

    2006-03-01

    Full Text Available This study evaluated the accuracy of mapping land cover in Capixaba, state of Acre, Brazil, using decision trees. Eleven attributes were used to build the decision trees: TM Landsat data from bands 1, 2, 3, 4, 5, and 7; fraction images derived from linear spectral unmixing; and the normalized difference vegetation index (NDVI). The Kappa values were greater than 0.83, producing excellent classification results and demonstrating that the technique is promising for mapping land cover in the study area.

  17. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  18. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
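
    A minimal sketch of the comparison discussed above, assuming scikit-learn and SciPy and using synthetic pure-noise data rather than the authors' recordings: the permutation test is obtained with permutation_test_score, while the naive binomial test treats the pooled cross-validated predictions as independent Bernoulli trials.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Pure-noise data: features carry no information about the labels.
X = rng.randn(40, 50)
y = np.array([0, 1] * 20)

cv = StratifiedKFold(n_splits=5)
score, perm_scores, p_perm = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=cv, n_permutations=1000, random_state=0)

# Naive binomial test on the same cross-validated accuracy
# (approximates the number of correct pooled predictions from the mean score).
n_correct = int(round(score * len(y)))
p_binom = binomtest(n_correct, n=len(y), p=0.5, alternative="greater").pvalue

print(f"CV accuracy: {score:.2f}  permutation p: {p_perm:.3f}  binomial p: {p_binom:.3f}")
```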

  19. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing with range accuracy below 1 m at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera, and a Nd:Yag passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera shutter is controlled in steps of 100 ps. Camera delay is controlled in steps of 100 ps. Each laser pulse triggers the camera delay and shutter. A 3-D image is constructed from a sequence of 50-100 2-D reflectivity images, where each frame integrates about 700 laser pulses on the CCD. In 50 Hz video mode...

  20. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of Kirschner wire (K-wire) placements with the postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    Science.gov (United States)

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study is to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent instrumentation with 176 robot-assisted pedicle screws at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). The postoperative accuracy of pedicle screw placements was then evaluated with the CT-based classification systems, and no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed intraoperative grading system and the various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw placements.

  1. Improving accuracy for cancer classification with a new algorithm for genes selection

    Directory of Open Access Journals (Sweden)

    Zhang Hongyan

    2012-11-01

    Full Text Available Abstract Background Even though the classification of cancer tissue samples based on gene expression data has advanced considerably in recent years, it still faces great challenges in improving accuracy. One of the challenges is to establish an effective method that can select a parsimonious set of relevant genes. So far, most methods for gene selection in the literature focus on screening individual or pairs of genes without considering the possible interactions among genes. Here we introduce a new computational method named the Binary Matrix Shuffling Filter (BMSF). It not only overcomes the difficulty associated with the search schemes of traditional wrapper methods and the overfitting problem in high-dimensional search spaces, but also takes potential gene interactions into account during gene selection. This method, coupled with a Support Vector Machine (SVM) for implementation, often selects a very small number of genes, which aids model interpretability. Results We applied our method to 9 two-class gene expression datasets involving human cancers. During the gene selection process, the set of genes to be kept in the model was recursively refined and repeatedly updated according to the effect of a given gene on the contributions of other genes in reference to their usefulness in cancer classification. The small number of informative genes selected from each dataset leads to significantly improved leave-one-out cross-validation (LOOCV) classification accuracy across all 9 datasets for multiple classifiers. Our method also exhibits broad generalization in the genes selected, since multiple commonly used classifiers achieved either equivalent or much higher LOOCV accuracy than previously reported in the literature. Conclusions Evaluation of a gene's contribution to binary cancer classification is best performed after adjusting for the joint effect of a large number of other genes. A computationally efficient search scheme was provided to perform an effective search in the extensive
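
    BMSF itself is not sketched here; as a hedged stand-in, the following scikit-learn snippet shows the general pattern the abstract relies on, i.e. gene filtering coupled with an SVM and evaluated by leave-one-out cross-validation, with the selection step kept inside the cross-validation loop to avoid optimistic bias. The dataset, filter and parameter choices are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a two-class expression matrix: 60 samples x 2000 genes.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

# Gene filtering and the SVM are wrapped in one pipeline so that selection is
# re-done inside every LOOCV fold and does not leak test information.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=30),
                      SVC(kernel="linear"))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```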

  2. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    Science.gov (United States)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
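
    A simplified sketch of the two-stage framework (edge-preserving smoothing followed by pixel-wise SVM classification), assuming scikit-image and scikit-learn and using a synthetic cube with placeholder labels instead of the AVIRIS/ROSIS benchmarks; it is meant only to show the order of operations, not to reproduce the reported accuracies.

```python
import numpy as np
from skimage.restoration import denoise_bilateral
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a hyperspectral cube: rows x cols x bands, values in [0, 1].
cube = rng.random((50, 50, 20))

# Stage 1: edge-preserving smoothing, applied band by band.
smoothed = np.stack(
    [denoise_bilateral(cube[:, :, b], sigma_color=0.1, sigma_spatial=2)
     for b in range(cube.shape[2])], axis=-1)

# Stage 2: pixel-wise SVM classification on the smoothed spectra.
pixels = smoothed.reshape(-1, cube.shape[2])
labels = rng.integers(0, 3, size=pixels.shape[0])      # placeholder class labels
train = rng.choice(pixels.shape[0], size=500, replace=False)

clf = SVC(kernel="rbf", gamma="scale").fit(pixels[train], labels[train])
class_map = clf.predict(pixels).reshape(cube.shape[:2])
print(class_map.shape)
```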

  3. The Accuracy of Body Mass Index and Gallagher’s Classification in Detecting Obesity among Iranians

    Directory of Open Access Journals (Sweden)

    Alireza Shahab Jahanlou

    2016-07-01

    Full Text Available Background: The study was conducted to examine the comparability of the BMI and Gallagher's classification in diagnosing obesity based on the cutoff points of the gold standards, and to estimate suitable cutoff points for detecting obesity among Iranians. Methods: The cross-sectional study was comparative in nature. The sample consisted of 20,163 adults. Bioelectrical impedance analysis (BIA) was used to measure the variables of interest. Sensitivity, specificity, positive predictive power (PPV), and negative predictive power (NPV) were used to evaluate the comparability of the two classification methods in detecting obesity. Results: The BMI wrongly classified 29% of the obese persons as overweight. In both classifications, as age increased, the accuracy of detecting obesity decreased. Gallagher's classification is better than the BMI in detecting obesity in men, with the exception of those older than 59 years. In females, the BMI was better in terms of sensitivity. In both classifications, for both females and males, an increase in age was associated with a decrease in sensitivity and NPV, with the exception of the BMI for 18 year olds. Gallagher's classification can correctly classify males and females who are less than 40 and 19 years old, respectively. Conclusion: Gallagher's classification is recommended for the non-obese of both sexes and for obese males younger than 40 years old. The BMI is recommended for obese females. The suitable cutoff points for the BMI to detect obesity are 27.70 kg/m2 for females and 27.30 kg/m2 for males.
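
    The four agreement indices used in the study can be computed directly from a 2x2 table of classification versus gold standard. The helper below is a minimal sketch with hypothetical counts, not data from the paper.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Agreement of a classification (e.g., a BMI cut-off) with a gold standard
    (e.g., BIA-measured body fat), summarised by the four usual indices."""
    sensitivity = tp / (tp + fn)          # obese correctly detected
    specificity = tn / (tn + fp)          # non-obese correctly detected
    ppv = tp / (tp + fp)                  # positive predictive power
    npv = tn / (tn + fn)                  # negative predictive power
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one sex/age stratum (illustration only).
print(diagnostic_indices(tp=620, fp=110, fn=250, tn=1020))
```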

  4. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    Science.gov (United States)

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain-based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracies of machine-learned MDD-vs-control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional face-matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We used each of these six datasets with linear support vector machine (SVM) binary classifiers to identify individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  5. Adjusting for covariate effects on classification accuracy using the covariate-adjusted receiver operating characteristic curve.

    Science.gov (United States)

    Janes, Holly; Pepe, Margaret S

    2009-06-01

    Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
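
    As a hedged illustration of the covariate-adjusted ROC idea (not the authors' semiparametric estimators), the sketch below computes a simple nonparametric version based on placement values within discrete covariate strata; the data and strata are synthetic.

```python
import numpy as np

def covariate_adjusted_roc(y_case, z_case, y_control, z_control, fpr_grid):
    """Nonparametric covariate-adjusted ROC via placement values.

    For each case, the placement value is the fraction of controls in the same
    covariate stratum whose marker exceeds the case's marker; the adjusted ROC
    at false-positive rate t is P(placement value <= t). y_* are marker values,
    z_* are discrete covariate strata (e.g., age bands)."""
    placements = []
    for y, z in zip(y_case, z_case):
        ref = y_control[z_control == z]
        placements.append(np.mean(ref > y))
    placements = np.asarray(placements)
    return np.array([np.mean(placements <= t) for t in fpr_grid])

# Toy example: the marker rises with age in controls; cases are shifted upwards.
rng = np.random.default_rng(1)
z_ctrl = rng.integers(0, 3, 400); y_ctrl = rng.normal(z_ctrl, 1.0)
z_case = rng.integers(0, 3, 200); y_case = rng.normal(z_case + 1.5, 1.0)
grid = np.linspace(0, 1, 11)
print(covariate_adjusted_roc(y_case, z_case, y_ctrl, z_ctrl, grid).round(2))
```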

  6. [Procedures for performing meta-analyses of the accuracy of tools for binary classification].

    Science.gov (United States)

    Botella, Juan; Huang, Huiling

    2012-02-01

    The assessment of accuracy in binary classification tools must take into account two non-independent rates: true positives and false positives. A variety of indices have been proposed, and they have been estimated for tests employed for early detection or screening purposes. We summarize and review the main methods proposed for performing a meta-analysis that assesses the accuracy of this type of tool, and apply them to the results from 14 studies that report estimates of the accuracy of the AUDIT. The method of direct aggregation does not allow the use of meta-analytic procedures; the separate estimation of sensitivity and specificity does not acknowledge that they are not independent; and the SROC method treats accuracy and threshold as fixed effects and has limitations in dealing with the potential role of covariates. The Normal Bivariate (NB) model and the Hierarchical Summary ROC (HSROC) model are statistically rigorous and can deal with covariates properly. They allowed analysis of the association between the gender composition of the sample and the way the AUDIT test behaves in the example.

  7. A SINGLE STEP SCHEME WITH HIGH ACCURACY FOR PARABOLIC PROBLEM

    Institute of Scientific and Technical Information of China (English)

    陈传淼; 胡志刚

    2001-01-01

    A single-step scheme with high accuracy for solving parabolic problems is proposed. It is shown that this scheme possesses good stability and fourth-order accuracy with respect to both time and space variables, which are superconvergent.

  8. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
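
    The model comparison in the abstract can be emulated with scikit-learn's cross-validation utilities. The sketch below runs 10-fold cross-validation for a few of the listed algorithms on a synthetic multi-class dataset; it is a schematic stand-in for the MSI data and histopathology ground truth, not a reproduction of the reported accuracies.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the MSI pixel dataset: 6 tissue classes, 8 spectral bands.
X, y = make_classification(n_samples=3000, n_features=8, n_informative=6,
                           n_redundant=0, n_classes=6, n_clusters_per_class=1,
                           random_state=0)

models = {"KNN": KNeighborsClassifier(),
          "DT": DecisionTreeClassifier(random_state=0),
          "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis()}

for name, model in models.items():
    # Mean accuracy over 10 folds, mirroring the abstract's evaluation protocol.
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: {acc:.3f}")
```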

  9. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    Science.gov (United States)

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  10. High-Resolution Movement EEG Classification

    Directory of Open Access Journals (Sweden)

    Jakub Štastný

    2007-01-01

    Full Text Available The aim of this contribution is to analyze the possibilities of high-resolution movement classification using human EEG. For this purpose, a database of EEG recorded during right-thumb and little-finger fast flexion movements of the experimental subjects was created. The statistical analysis of the EEG was done on a per-subject basis instead of the commonly used grand averaging. Statistically significant differences between the EEG accompanying movements of the two fingers were found, extending the results of other works published so far. A classifier based on hidden Markov models was able to distinguish between movement and resting states (classification score of 94–100%), but it was unable to recognize the type of the movement. This is caused by the large fraction of other (non-movement related) EEG activities in the recorded signals. A classification method based on advanced EEG signal denoising is currently being developed to overcome this problem.

  11. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are not misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed in order to determine whether there is an improvement in the UA of the various categories on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images of two different years, i.e., 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results of the investigation demonstrated that there is a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of the spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in the UA as compared to the original images.
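
    Producer's and user's accuracy follow directly from the error (confusion) matrix: PA divides the diagonal by the row (reference) totals, UA by the column (classified) totals. A minimal sketch with a hypothetical 3-class matrix:

```python
import numpy as np

def producers_users_accuracy(conf):
    """conf[i, j] = number of reference pixels of class i labelled as class j."""
    conf = np.asarray(conf, dtype=float)
    producers = np.diag(conf) / conf.sum(axis=1)   # PA: per reference class (omission)
    users = np.diag(conf) / conf.sum(axis=0)       # UA: per mapped class (commission)
    return producers, users

# Hypothetical 3-class error matrix (rows: reference, columns: classified).
conf = [[50,  5,  2],
        [ 8, 40,  6],
        [ 3,  4, 45]]
pa, ua = producers_users_accuracy(conf)
print("PA:", pa.round(2), "UA:", ua.round(2))
```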

  12. Tradeoff between User Experience and BCI Classification Accuracy with Frequency Modulated Steady-State Visual Evoked Potentials.

    Science.gov (United States)

    Dreyer, Alexander M; Herrmann, Christoph S; Rieger, Jochem W

    2017-01-01

    Steady-state visual evoked potentials (SSVEPs) have been widely employed for the control of brain-computer interfaces (BCIs) because they are very robust, lead to high performance, and allow for a high number of commands. However, such flickering stimuli often also cause user discomfort and fatigue, especially when several light sources are used simultaneously. Different variations of SSVEP driving signals have been proposed to increase user comfort. Here, we investigate the suitability of frequency modulation of a high frequency carrier for SSVEP-BCIs. We compared BCI performance and user experience between frequency modulated (FM) and traditional sinusoidal (SIN) SSVEPs in an offline classification paradigm with four independently flickering light-emitting diodes which were overtly attended (fixated). While classification performance was slightly reduced with the FM stimuli, the user comfort was significantly increased. Comparing the SSVEPs for covert attention to the stimuli (without fixation) was not possible, as no reliable SSVEPs were evoked. Our results reveal that several, simultaneously flickering, light emitting diodes can be used to generate FM-SSVEPs with different frequencies and the resulting occipital electroencephalography (EEG) signals can be classified with high accuracy. While the performance we report could be further improved with adjusted stimuli and algorithms, we argue that the increased comfort is an important result and suggest the use of FM stimuli for future SSVEP-BCI applications.

  13. Methodology for high accuracy contact angle measurement.

    Science.gov (United States)

    Kalantarian, A; David, R; Neumann, A W

    2009-12-15

    A new version of axisymmetric drop shape analysis (ADSA) called ADSA-NA (ADSA-no apex) was developed for measuring interfacial properties for drop configurations without an apex. ADSA-NA facilitates contact angle measurements on drops with a capillary protruding into the drop. Thus a much simpler experimental setup, not involving formation of a complete drop from below through a hole in the test surface, may be used. The contact angles of long-chained alkanes on a commercial fluoropolymer, Teflon AF 1600, were measured using the new method. A new numerical scheme was incorporated into the image processing to improve the location of the contact points of the liquid meniscus with the solid substrate to subpixel resolution. The images acquired in the experiments were also analyzed by a different drop shape technique called theoretical image fitting analysis-axisymmetric interfaces (TIFA-AI). The results were compared with literature values obtained by means of the standard ADSA for sessile drops with the apex. Comparison of the results from ADSA-NA with those from TIFA-AI and ADSA reveals that, with different numerical strategies and experimental setups, contact angles can be measured with an accuracy of less than 0.2 degrees. Contact angles and surface tensions measured from drops with no apex, i.e., by means of ADSA-NA and TIFA-AI, were considerably less scattered than those from complete drops with apex. ADSA-NA was also used to explore sources of improvement in contact angle resolution. It was found that using an accurate value of surface tension as an input enhances the accuracy of contact angle measurements.

  14. Research On The Classification Of High Resolution Image Based On Object-oriented And Class Rule

    Science.gov (United States)

    Li, C. K.; Fang, W.; Dong, X. J.

    2015-06-01

    With the development of remote sensing technology, the spatial, spectral and temporal resolution of remote sensing data has greatly improved. How to efficiently process and interpret the massive volumes of high-resolution remote sensing imagery, which carries both spatial-geometric and textural information about ground objects, has become a focus and a difficulty in remote sensing research. An object-oriented, rule-based classification method for remote sensing data is presented in this paper. By discovering and mining the rich spectral and spatial characteristics of high-resolution remote sensing imagery, a multi-level network of image object segmentation and classification is established to achieve accurate and fast classification of ground targets together with accuracy assessment. Using WorldView-2 image data of the Zangnan area as the study object, the object-oriented rule-based classification method was verified in experiments that combined the mean-variance method, the maximum-area method and accuracy comparison; three optimal segmentation scales were selected and a multi-level image object network hierarchy was established for the classification experiments. The results show that the object-oriented rule-based classification of high-resolution images yields results similar to visual interpretation and achieves higher classification accuracy. The overall accuracy and Kappa coefficient of the object-oriented rule-based method were 97.38% and 0.9673, respectively; compared with the object-oriented SVM method these were higher by 6.23% and 0.078, and compared with the object-oriented KNN method higher by 7.96% and 0.0996. The extraction precision and user accuracy of buildings were higher than the object-oriented SVM method by 18.39% and 3.98%, respectively, and better than the object-oriented KNN method by 21

  15. High accuracy GNSS based navigation in GEO

    Science.gov (United States)

    Capuano, Vincenzo; Shehaj, Endrit; Blunt, Paul; Botteron, Cyril; Farine, Pierre-André

    2017-07-01

    Although significant improvements in efficiency and performance of communication satellites have been achieved in the past decades, it is expected that the demand for new platforms in Geostationary Orbit (GEO) and for On-Orbit Servicing (OOS) of the existing ones will continue to rise. Indeed, the GEO orbit is used for many applications including direct broadcast as well as communications. At the same time, the Global Navigation Satellite System (GNSS), originally designed for land, maritime and air applications, has been successfully used as a navigation system in Low Earth Orbit (LEO), and its further utilization for navigation of geosynchronous satellites becomes a viable alternative offering many advantages over present ground-based methods. Following our previous studies of GNSS signal characteristics in Medium Earth Orbit (MEO), GEO and beyond, in this research we specifically investigate the processing of different GNSS signals, with the goal to determine the best navigation performance they can provide in a GEO mission. Firstly, a detailed selection among different GNSS signals and different combinations of them is discussed, taking into consideration the L1 and L5 frequency bands, and the GPS and Galileo constellations. Then, the implementation of an Orbital Filter is summarized, which adaptively fuses the GNSS observations with an accurate orbital forces model. Finally, simulation tests of the navigation performance achievable by processing the selected combination of GNSS signals are carried out. The results obtained show an achievable positioning accuracy of less than one meter. In addition, hardware-in-the-loop tests are presented using a COTS receiver connected to our GNSS Spirent simulator, in order to collect real-time hardware-in-the-loop observations and process them by the proposed navigation module.

  16. Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth.

    Science.gov (United States)

    Valindria, Vanya V; Lavdas, Ioannis; Bai, Wenjia; Kamnitsas, Konstantinos; Aboagye, Eric O; Rockall, Andrea G; Rueckert, Daniel; Glocker, Ben

    2017-08-01

    When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
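
    A toy sketch of the RCA idea, assuming scikit-learn: a reverse classifier is trained on the predicted segmentation of the new image and scored (here with the Dice coefficient) on reference images with known ground truth, the best score serving as a proxy for the unknown segmentation quality. The voxel features, classifier and data below are deliberately simplistic placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(a, b, label=1):
    a, b = (a == label), (b == label)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def reverse_classification_accuracy(new_img, new_pred, ref_imgs, ref_segs):
    """RCA proxy: train a reverse classifier on the *predicted* segmentation of
    the new image, evaluate it on reference images with known ground truth, and
    report the best score as a surrogate for the unknown segmentation quality."""
    # Voxel-wise features: here simply the raw intensity (a deliberately crude choice).
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(new_img.reshape(-1, 1), new_pred.ravel())
    scores = []
    for img, seg in zip(ref_imgs, ref_segs):
        pred = clf.predict(img.reshape(-1, 1)).reshape(seg.shape)
        scores.append(dice(pred, seg))
    return max(scores)

# Toy data: a bright square is the foreground, with noisy intensities.
rng = np.random.default_rng(0)
def toy_image():
    seg = np.zeros((32, 32), int); seg[8:24, 8:24] = 1
    return seg + rng.normal(0, 0.2, seg.shape), seg

new_img, new_seg = toy_image()          # here the true mask stands in for a prediction
refs = [toy_image() for _ in range(3)]
print(reverse_classification_accuracy(new_img, new_seg, *zip(*refs)))
```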

  17. Reliability, Validity, and Classification Accuracy of the DSM-5 Diagnostic Criteria for Gambling Disorder and Comparison to DSM-IV.

    Science.gov (United States)

    Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C

    2016-09-01

    The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in the reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.

  18. Compact, High Accuracy CO2 Monitor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase II proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  19. Compact, High Accuracy CO2 Monitor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase I proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  20. Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses

    Directory of Open Access Journals (Sweden)

    Jiang Ning

    2011-05-01

    Full Text Available Abstract Background For high usability, myo-controlled devices require robust classification schemes during dynamic contractions. Therefore, this study investigates the impact of the training data set on the performance of several pattern recognition algorithms during dynamic contractions. Methods A 9-class experiment was designed involving both static and dynamic situations. The performance of various feature extraction methods and classifiers was evaluated in terms of classification accuracy. Results It is shown that, combined with a threshold to detect the onset of the contraction, current pattern recognition algorithms developed for static conditions also provide relatively high classification accuracy in dynamic situations. Moreover, the performance of the pattern recognition algorithms tested improved significantly when the choice of the training set was optimized. Finally, the results also showed that rather simple approaches for classification of time domain features provide results comparable to more complex classification methods applied to wavelet features. Conclusions Non-stationary surface EMG signals recorded during dynamic contractions can be accurately classified for the control of multi-function prostheses.

  1. Accuracy and cut-off point selection in three-class classification problems using a generalization of the Youden index.

    Science.gov (United States)

    Nakas, Christos T; Alonzo, Todd A; Yiannoutsos, Constantin T

    2010-12-10

    We study properties of the index J(3), defined as the accuracy, or the maximum correct classification, for a given three-class classification problem. Specifically, using J(3) one can assess the discrimination between the three distributions and obtain an optimal pair of cut-off points c(1) < c(2) for which the sum of the correct classification proportions is maximized. It thus serves as the generalization of the Youden index to three-class problems. Parametric and non-parametric approaches for estimation and testing are considered, and the methods are applied to data from an MRS study on human immunodeficiency virus (HIV) patients.
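
    A hedged sketch of the empirical (non-parametric) approach for ordered three-class data: J(3) is taken as the sum of the three correct-classification proportions, maximized over all candidate cut-off pairs c1 < c2. The distributions and the brute-force grid search below are illustrative only.

```python
import numpy as np

def j3(x1, x2, x3, c1, c2):
    """Sum of correct classification proportions for ordered classes
    (class 1 below c1, class 2 between c1 and c2, class 3 above c2)."""
    return (np.mean(x1 < c1) + np.mean((x2 >= c1) & (x2 < c2)) + np.mean(x3 >= c2))

def optimal_cutoffs(x1, x2, x3):
    """Non-parametric grid search for the cut-off pair (c1 < c2) maximising J3."""
    grid = np.unique(np.concatenate([x1, x2, x3]))
    best = (-np.inf, None, None)
    for i, c1 in enumerate(grid):
        for c2 in grid[i + 1:]:
            val = j3(x1, x2, x3, c1, c2)
            if val > best[0]:
                best = (val, c1, c2)
    return best

# Toy example: three increasingly separated marker distributions.
rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(0, 1, 80), rng.normal(1.5, 1, 80), rng.normal(3, 1, 80)
val, c1, c2 = optimal_cutoffs(x1, x2, x3)
print(round(val, 3), round(c1, 2), round(c2, 2))
```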

  2. High speed high dynamic range high accuracy measurement system

    Energy Technology Data Exchange (ETDEWEB)

    Deibele, Craig E.; Curry, Douglas E.; Dickson, Richard W.; Xie, Zaipeng

    2016-11-29

    A measuring system includes an input that emulates a bandpass filter with no signal reflections. A directional coupler connected to the input passes the filtered input to electrically isolated measuring circuits. Each of the measuring circuits includes an amplifier that amplifies the signal through logarithmic functions. The output of the measuring system is an accurate high dynamic range measurement.

  3. Divisional compound hierarchical classification method for regionalization of high, medium and low yield croplands of China

    Science.gov (United States)

    Yuliang, Qiao; Ying, Wang; Jinchun, Liu

    This is an introduction to a method for classifying high, medium and low yield croplands by remote sensing and GIS, which is the result of a key project of the Scientific and Industry Technology Committee of National Defence. In the study, special information related to high, medium and low yield cropland was compounded with TM data. The development of the compound hierarchical classification method greatly improved the accuracy of remote sensing classification.

  4. Bayesian Analysis of High Dimensional Classification

    Science.gov (United States)

    Mukhopadhyay, Subhadeep; Liang, Faming

    2009-12-01

    Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is a lot of interest in searching for sparse models in the high-dimensional regression/classification setup. We first discuss two common challenges for analyzing high-dimensional data. The first one is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, and by virtue of that the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimensions we propose a novel 'Hierarchical stochastic approximation Monte Carlo' algorithm (HSAMC), which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and also possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.

  5. High accuracy in silico sulfotransferase models.

    Science.gov (United States)

    Cook, Ian; Wang, Ting; Falany, Charles N; Leyh, Thomas S

    2013-11-29

    Predicting enzymatic behavior in silico is an integral part of our efforts to understand biology. Hundreds of millions of compounds lie in targeted in silico libraries waiting for their metabolic potential to be discovered. In silico "enzymes" capable of accurately determining whether compounds can inhibit or react is often the missing piece in this endeavor. This problem has now been solved for the cytosolic sulfotransferases (SULTs). SULTs regulate the bioactivities of thousands of compounds--endogenous metabolites, drugs and other xenobiotics--by transferring the sulfuryl moiety (SO3) from 3'-phosphoadenosine 5'-phosphosulfate to the hydroxyls and primary amines of these acceptors. SULT1A1 and 2A1 catalyze the majority of sulfation that occurs during human Phase II metabolism. Here, recent insights into the structure and dynamics of SULT binding and reactivity are incorporated into in silico models of 1A1 and 2A1 that are used to identify substrates and inhibitors in a structurally diverse set of 1,455 high value compounds: the FDA-approved small molecule drugs. The SULT1A1 models predict 76 substrates. Of these, 53 were known substrates. Of the remaining 23, 21 were tested, and all were sulfated. The SULT2A1 models predict 22 substrates, 14 of which are known substrates. Of the remaining 8, 4 were tested, and all are substrates. The models proved to be 100% accurate in identifying substrates and made no false predictions at Kd thresholds of 100 μM. In total, 23 "new" drug substrates were identified, and new linkages to drug inhibitors are predicted. It now appears to be possible to accurately predict Phase II sulfonation in silico.

  6. High accuracy & long timescale light curves

    Directory of Open Access Journals (Sweden)

    Hodgkin S.

    2013-04-01

    Full Text Available We present a theoretical analysis of the optical light curves (LCs) for short-period high-mass transiting extrasolar planet systems. Our method considers the primary transit, the secondary eclipse, and the overall phase shape of the LC between the occultations. Phase variations arise from (i) reflected and thermally emitted light by the planet, (ii) the ellipsoidal shape of the star due to the gravitational pull of the planet, and (iii) the Doppler shift of the stellar light as the star orbits the center of mass of the system. Our full model of the out-of-eclipse variations contains information about the planetary mass, orbital eccentricity, the orientation of periastron and the planet's albedo. For a range of hypothetical systems we demonstrate that the ellipsoidal variations (ii) can be large enough to be distinguished from the remaining components and that this effect can be used to constrain the planet's mass. As an example we present KOI-13b, a candidate exoplanet system included in the September 2011 Kepler data release. The Kepler light curve shows both primary and secondary eclipses, as well as significant out-of-eclipse light curve variations. We model the relative contributions from (i) thermal emission from the companion, (ii) planetary reflected light, (iii) Doppler beaming, and (iv) ellipsoidal variations in the host star arising from the tidal distortion of the host star by its companion. Our analysis, based on the light curve alone, enables us to constrain the mass of the KOI-13.01 companion to be MC = 8.3 ± 1.25 MJ and thus demonstrates that the transiting companion is a planet. The technique is useful for current and future space missions such as Kepler and PLATO.

  7. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    Directory of Open Access Journals (Sweden)

    Malik Yousef

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning, and in particular two-class classification. For machine learning, the miRNAs need to be parametrized, and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data are hard to come by. Therefore, it seems preferable to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to a 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection.
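
    A schematic one-class pipeline in the spirit of the abstract, assuming scikit-learn: an unsupervised variance filter stands in for the paper's feature-selection procedures, and a one-class SVM is trained on positive examples only; the data are synthetic and the evaluation is purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Synthetic stand-in for pre-miRNA feature vectors (e.g., hairpin descriptors).
X, y = make_classification(n_samples=1000, n_features=100, n_informative=15,
                           weights=[0.5, 0.5], random_state=0)
X_pos = X[y == 1]                       # known pre-miRNAs: the only training data
X_test, y_test = X, y                   # mixed set, used for illustration only

# Unsupervised feature filtering (no negatives available), then a one-class SVM
# trained exclusively on the positive class.
model = make_pipeline(VarianceThreshold(threshold=0.5),
                      StandardScaler(),
                      OneClassSVM(nu=0.1, gamma="scale"))
model.fit(X_pos)

pred = (model.predict(X_test) == 1).astype(int)   # +1 means "looks like a pre-miRNA"
accuracy = np.mean(pred == y_test)
print(f"Accuracy on the mixed evaluation set: {accuracy:.3f}")
```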

  8. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    Science.gov (United States)

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  9. Pre-Processing Effect on the Accuracy of Event-Based Activity Segmentation and Classification through Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Benish Fida

    2015-09-01

    Full Text Available Inertial sensors are increasingly being used to recognize and classify physical activities in a variety of applications. For monitoring and fitness applications, it is crucial to develop methods able to segment each activity cycle, e.g., a gait cycle, so that the successive classification step may be more accurate. To increase detection accuracy, pre-processing is often used, with a concurrent increase in computational cost. In this paper, the effect of pre-processing operations on the detection and classification of locomotion activities was investigated, to check whether the presence of pre-processing significantly contributes to an increase in accuracy. The pre-processing stages evaluated in this study were inclination correction and de-noising. Level walking, step ascending, descending and running were monitored by using a shank-mounted inertial sensor. Raw and filtered segments, obtained from a modified version of a rule-based gait detection algorithm optimized for sequential processing, were processed to extract time- and frequency-based features for physical activity classification through a support vector machine classifier. The proposed method accurately detected >99% of gait cycles from raw data and produced >98% accuracy on these segmented gait cycles. Pre-processing did not substantially increase classification accuracy, thus highlighting the possibility of reducing the amount of pre-processing for real-time applications.
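    A rough sketch of the classification stage described above, under stated assumptions: simulated shank-acceleration segments replace real sensor data, the feature set (a few time-domain statistics plus the dominant FFT frequency) is illustrative rather than the paper's exact list, and the sampling rate FS is assumed.

```python
# Sketch: time/frequency features from segmented activity cycles, classified with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 100  # assumed sampling rate (Hz)

def features(segment):
    spectrum = np.abs(np.fft.rfft(segment))
    dom_freq = np.fft.rfftfreq(len(segment), d=1.0 / FS)[np.argmax(spectrum[1:]) + 1]
    return [segment.mean(), segment.std(), segment.max() - segment.min(), dom_freq]

rng = np.random.default_rng(1)
segments, labels = [], []
for label, freq in enumerate([1.0, 1.5, 2.0, 2.5]):   # walk, ascend, descend, run (toy)
    for _ in range(50):
        t = np.arange(FS) / FS
        seg = np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(FS)
        segments.append(features(seg))
        labels.append(label)

X, y = np.array(segments), np.array(labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)).fit(Xtr, ytr)
print("segment classification accuracy:", clf.score(Xte, yte))
```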

  11. Measurement Properties and Classification Accuracy of Two Spanish Parent Surveys of Language Development for Preschool-Age Children

    Science.gov (United States)

    Guiberson, Mark; Rodriguez, Barbara L.

    2010-01-01

    Purpose: To describe the concurrent validity and classification accuracy of 2 Spanish parent surveys of language development, the Spanish Ages and Stages Questionnaire (ASQ; Squires, Potter, & Bricker, 1999) and the Pilot Inventario-III (Pilot INV-III; Guiberson, 2008a). Method: Forty-eight Spanish-speaking parents of preschool-age children…

  13. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota

    Science.gov (United States)

    Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.

    2013-01-01

    Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of wetland mapping compare when using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
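    The random forest workflow sketched below mirrors the general idea (out-of-bag accuracy, per-variable importance, per-pixel confidence from tree votes); the predictor names and simulated pixels are placeholders, not the study's data.

```python
# Sketch: random forest wetland classification with per-variable importance and
# per-pixel confidence. The predictor names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
names = ["elevation", "curvature", "hydric_soil", "NIR_spring", "NDVI_spring",
         "thermal_spring", "TC_wetness", "SAR_HH", "SAR_HV"]
X = rng.normal(size=(2000, len(names)))
y = rng.integers(0, 4, size=2000)   # 0=upland, 1=water, 2=forested wetland, 3=emergent wetland

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(Xtr, ytr)

print("OOB accuracy estimate:", rf.oob_score_)
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:15s} importance={imp:.3f}")

# Classification confidence: the winning class's share of tree votes per pixel.
confidence = rf.predict_proba(Xte).max(axis=1)
print("mean per-pixel confidence:", confidence.mean())
```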

  14. Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data

    Science.gov (United States)

    Elhag, Mohamed; Boteva, Silvena

    2016-10-01

    Landscape fragmentation is noticeably present in Mediterranean regions and imposes substantial complications on several satellite image classification methods. To some extent, high spatial resolution data are able to overcome such complications. To obtain better classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods for LULC mapping using the Sentinel-2 satellite as a source of high spatial resolution data. Both pixel-based and object-based classification algorithms were assessed: the pixel-based approach employs Maximum Likelihood (ML), Artificial Neural Network (ANN) and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP), which integrates a ranking process within the classes based on the spectral fluctuation of the combined training and testing sites, was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. NN handled artificial surface classes well in general, while agricultural area classes and forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results than the SMP method overall with both classifiers used, ML and SVM.
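    Overall and per-class (producer's) accuracy, the quantities compared above, can be computed from a confusion matrix as in the sketch below; the dataset is synthetic, and QDA is used as a stand-in for the Gaussian maximum-likelihood rule (to which it reduces when class priors are equal).

```python
# Sketch: compare a Gaussian maximum-likelihood style classifier (QDA) and an SVM on the
# same training/testing samples using overall and per-class producer's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=0, stratify=y)

for name, clf in [("ML (QDA)", QuadraticDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    cm = confusion_matrix(yte, pred)
    producers = cm.diagonal() / cm.sum(axis=1)   # per-class (producer's) accuracy
    print(name, "overall:", round(accuracy_score(yte, pred), 3),
          "per-class:", np.round(producers, 3))
```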

  15. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    Science.gov (United States)

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  16. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  17. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    Science.gov (United States)

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs.

  18. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs.

  19. Diagnostic performance of whole brain volume perfusion CT in intra-axial brain tumors: Preoperative classification accuracy and histopathologic correlation

    Energy Technology Data Exchange (ETDEWEB)

    Xyda, Argyro, E-mail: argyro.xyda@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Department of Radiology, University Hospital of Heraklion, Voutes, 71110 Heraklion, Crete (Greece); Haberland, Ulrike, E-mail: ulrike.haberland@siemens.com [Siemens AG Healthcare Sector, Computed Tomography, Siemensstr. 1, 91301 Forchheim (Germany); Klotz, Ernst, E-mail: ernst.klotz@siemens.com [Siemens AG Healthcare Sector, Computed Tomography, Siemensstr. 1, 91301 Forchheim (Germany); Jung, Klaus, E-mail: kjung1@uni-goettingen.de [Department of Medical Statistics, Georg-August University, Humboldtallee 32, 37073 Goettingen (Germany); Bock, Hans Christoph, E-mail: cbock@gmx.de [Department of Neurosurgery, Johannes Gutenberg University Hospital of Mainz, Langenbeckstraße 1, 55101 Mainz (Germany); Schramm, Ramona, E-mail: ramona.schramm@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Knauth, Michael, E-mail: michael.knauth@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Schramm, Peter, E-mail: p.schramm@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany)

    2012-12-15

    Background: To evaluate the preoperative diagnostic power and classification accuracy of perfusion parameters derived from whole brain volume perfusion CT (VPCT) in patients with cerebral tumors. Methods: Sixty-three patients (31 male, 32 female; mean age 55.6 ± 13.9 years), with MRI findings suspected of cerebral lesions, underwent VPCT. Two readers independently evaluated VPCT data. Volumes of interest (VOIs) were drawn circumscribing the tumor according to maximum intensity projection volumes, and then mapped automatically onto the cerebral blood volume (CBV), flow (CBF) and permeability Ktrans perfusion datasets. A second VOI was placed in the contralateral cortex, as control. Correlations among perfusion values, tumor grade, cerebral hemisphere and VOIs were evaluated. Moreover, the diagnostic power of VPCT parameters, by means of positive and negative predictive value, was analyzed. Results: Our cohort included 32 high-grade gliomas WHO III/IV, 18 low-grade I/II, 6 primary cerebral lymphomas, 4 metastases and 3 tumor-like lesions. Ktrans demonstrated the highest sensitivity, specificity and positive predictive value, with a cut-off point of 2.21 mL/100 mL/min, for both the comparisons between high-grade versus low-grade and low-grade versus primary cerebral lymphomas. However, for the differentiation between high-grade and primary cerebral lymphomas, CBF and CBV proved to have 100% specificity and 100% positive predictive value, identifying preoperatively all the histopathologically proven high-grade gliomas. Conclusion: Volumetric perfusion data enable the hemodynamic assessment of the entire tumor extent and provide a method of preoperative differentiation among intra-axial cerebral tumors with promising diagnostic accuracy.
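    For readers unfamiliar with the reported diagnostic measures, the sketch below shows how sensitivity, specificity, PPV and NPV follow from a single cut-off on a perfusion parameter; the Ktrans values are simulated, and only the 2.21 mL/100 mL/min cut-off is taken from the abstract.

```python
# Sketch: diagnostic performance of a perfusion parameter at a fixed cut-off.
import numpy as np

rng = np.random.default_rng(0)
ktrans_high_grade = rng.normal(3.5, 1.0, 32)   # simulated high-grade gliomas
ktrans_low_grade = rng.normal(1.2, 0.6, 18)    # simulated low-grade gliomas
cutoff = 2.21                                  # mL/100 mL/min, value quoted above

tp = np.sum(ktrans_high_grade > cutoff)        # true positives
fn = np.sum(ktrans_high_grade <= cutoff)       # false negatives
tn = np.sum(ktrans_low_grade <= cutoff)        # true negatives
fp = np.sum(ktrans_low_grade > cutoff)         # false positives

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
print("NPV:", tn / (tn + fn))
```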

  20. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  1. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have kept pace with developments in technology, especially in sensors and platforms. Many images now offer high spatial and radiometric resolution and therefore carry much more information. Vegetation analysis, such as floristic composition mapping, benefits greatly from these developments. Floristic composition can be interpreted using several methods, such as pixel-based classification and object-based classification. The problem with pixel-based methods on high spatial resolution images is the salt-and-pepper effect that appears in the classification results. The purpose of this research is to compare the effectiveness of pixel-based classification and object-based classification for vegetation composition mapping on high-resolution Worldview-2 imagery. The results show that pixel-based classification followed by a 5×5 majority-kernel filter gives the highest accuracy among the tested classifications. The highest accuracy is 73.32%, obtained from Worldview-2 imagery radiometrically corrected to surface reflectance; however, in terms of per-class accuracy, the object-based approach performs best among the methods. From the standpoint of effectiveness, pixel-based classification is therefore more effective than object-based classification for vegetation composition mapping in the Tidar forest.
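    The post-classification step mentioned above (a 5×5 majority kernel applied to a pixel-based result) can be sketched as follows; the label image is synthetic and the class count is arbitrary.

```python
# Sketch: 5x5 majority (modal) filter applied to a pixel-based classification map to
# suppress salt-and-pepper noise.
import numpy as np
from scipy.ndimage import generic_filter

def majority(window):
    return np.bincount(window.astype(np.int64)).argmax()

rng = np.random.default_rng(0)
classified = rng.integers(0, 5, size=(200, 200))             # noisy per-pixel class labels
smoothed = generic_filter(classified, majority, size=5, mode="nearest")

changed = np.mean(smoothed != classified)
print(f"fraction of pixels relabelled by the 5x5 majority filter: {changed:.2%}")
```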

  2. Bayesian Image Classification At High Latitudes

    Science.gov (United States)

    Bulgin, Claire E.; Eastwood, Steinar; Merchant, Chris J.

    2013-12-01

    The European Space Agency created the Climate Change Initiative (CCI) to maximize the usefulness of Earth Observations to climate science. Sea Surface Temperature (SST) is an essential climate variable to which satellite observations make a crucial contribution, and is one of the projects within the CCI program. SST retrieval is dependent on successful cloud clearing and identification of clear-sky pixels over ocean. At high latitudes image classification is more difficult due to the presence of sea-ice. Newly formed ice has a temperature close to the freezing point of water and a dark surface making it difficult to distinguish from open ocean using data at visible and infrared wavelengths. Similarly, melt ponds on the sea-ice surface make image classification more difficult. We present here a three-way Bayesian classifier for the AATSR instrument classifying pixels as 'clear-sky over ocean', 'clear-sky over ice' or 'cloud' using the 0.6, 1.6, 11 and 12 micron channels. We demonstrate the ability of the classifier to successfully identify sea-ice and consider the potential for generating an ice surface temperature record from AATSR which could be extended using data from SLSTR.
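    As a much-simplified stand-in for the three-way Bayesian classifier described above, the sketch below uses a Gaussian naive Bayes model on four simulated channels; the channel means and spreads are invented, not AATSR statistics, and the real classifier's empirical distributions are not reproduced.

```python
# Sketch: a three-way probabilistic classifier on four AATSR-like channels
# (0.6, 1.6, 11 and 12 micron), trained on simulated class statistics.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
classes = {"clear ocean": ([0.05, 0.03, 290.0, 289.0], 1.0),
           "clear ice":   ([0.60, 0.10, 265.0, 264.5], 1.0),
           "cloud":       ([0.70, 0.55, 250.0, 249.0], 3.0)}

X, y = [], []
for label, (mean, spread) in classes.items():
    X.append(rng.normal(mean, spread, size=(500, 4)))
    y += [label] * 500
X = np.vstack(X)

clf = GaussianNB().fit(X, y)
pixel = [[0.58, 0.12, 266.0, 265.0]]                       # a single pixel to classify
print("predicted:", clf.predict(pixel)[0])
print("posterior probabilities:",
      dict(zip(clf.classes_, clf.predict_proba(pixel)[0].round(3))))
```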

  3. Image Reconstruction Using Multi Layer Perceptron MLP And Support Vector Machine SVM Classifier And Study Of Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Shovasis Kumar Biswas

    2015-02-01

    Full Text Available Support Vector Machine (SVM) and back-propagation neural network (BPNN) approaches have been applied successfully in many areas, for example rule extraction, classification and evaluation. In this paper we studied the back-propagation algorithm for training a multilayer artificial neural network and a support vector machine for data classification and image reconstruction. A model based on an SVM with a Gaussian RBF kernel is utilized here for data classification. The back-propagation neural network is viewed as one of the most straightforward and most general methods used for supervised training of multilayered neural networks. We compared a support vector machine (SVM) with a back-propagation neural network (BPNN) for the task of data classification and image reconstruction, and compared the performance of the two learning methods on multi-class classification. Comparing the two methods, we conclude that the classification accuracy of the support vector machine is better and the algorithm is much faster than the MLP with the back-propagation algorithm.
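    A minimal version of the comparison described above (RBF-kernel SVM versus a back-propagation MLP, judged on accuracy and training time) might look like the following; the digits dataset and the network size are stand-ins for the paper's data and architecture.

```python
# Sketch: comparing an RBF-kernel SVM with a back-propagation MLP on the same
# multi-class dataset, reporting accuracy and training time.
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "SVM (Gaussian RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
    "MLP (back-propagation)": make_pipeline(StandardScaler(),
                                            MLPClassifier(hidden_layer_sizes=(64,),
                                                          max_iter=500, random_state=0)),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(Xtr, ytr)
    print(f"{name}: accuracy={model.score(Xte, yte):.3f}  "
          f"train_time={time.perf_counter() - t0:.2f}s")
```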

  4. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment is generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error is computed for each pixel value for three generated images with SPOT data. Each pixel can be either correctly classified or misclassified. The results show a great improvement in classification accuracy: for example, in image 1, 41% of pixels were misclassified due to atmospheric noise. Subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We conclude that classification accuracy can be improved by degradation of the atmospheric noise.
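    The simulation idea above (random class cover proportions summing to 1, mixed end-member spectra, additive noise) translates into a few lines of code; the end-member spectra, band count and noise level below are invented for illustration.

```python
# Sketch of a simulated linear mixture: random class cover proportions that sum to 1,
# mixed end-member spectra, plus additive noise.
import numpy as np

rng = np.random.default_rng(0)
endmembers = np.array([[0.10, 0.12, 0.35, 0.40],    # vegetation (4 toy bands)
                       [0.25, 0.28, 0.30, 0.32],    # soil
                       [0.05, 0.04, 0.03, 0.02]])   # water

n_pixels, noise_level = 1000, 0.02
ccp = rng.random((n_pixels, 3))
ccp /= ccp.sum(axis=1, keepdims=True)               # class cover proportions sum to 1.0

pixels = ccp @ endmembers + rng.normal(0.0, noise_level, size=(n_pixels, 4))

# "True" class = dominant proportion; a quick check of how noise corrupts labelling
true_class = ccp.argmax(axis=1)
nearest = np.argmin(((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(axis=2), axis=1)
print("agreement of nearest-end-member labels with the dominant class:",
      np.mean(nearest == true_class))
```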

  6. Land cover classification accuracy from electro-optical, X, C, and L-band Synthetic Aperture Radar data fusion

    Science.gov (United States)

    Hammann, Mark Gregory

    The fusion of electro-optical (EO) multi-spectral satellite imagery with Synthetic Aperture Radar (SAR) data was explored with the working hypothesis that the addition of multi-band SAR will increase the land-cover (LC) classification accuracy compared to EO alone. Three satellite sources for SAR imagery were used: X-band from TerraSAR-X, C-band from RADARSAT-2, and L-band from PALSAR. Images from the RapidEye satellites were the source of the EO imagery. Imagery from the GeoEye-1 and WorldView-2 satellites aided the selection of ground truth. Three study areas were chosen: Wad Medani, Sudan; Campinas, Brazil; and Fresno-Kings Counties, USA. EO imagery were radiometrically calibrated, atmospherically compensated, orthorectified, co-registered, and clipped to a common area of interest (AOI). SAR imagery were radiometrically calibrated, and geometrically corrected for terrain and incidence angle by converting to ground range and Sigma Naught (σ0). The original SAR HH data were included in the fused image stack after despeckling with a 3x3 Enhanced Lee filter. The variance and Gray-Level-Co-occurrence Matrix (GLCM) texture measures of contrast, entropy, and correlation were derived from the non-despeckled SAR HH bands. Data fusion was done with layer stacking and all data were resampled to a common spatial resolution. The Support Vector Machine (SVM) decision rule was used for the supervised classifications. Similar LC classes were identified and tested for each study area. For Wad Medani, nine classes were tested: low and medium intensity urban, sparse forest, water, barren ground, and four agriculture classes (fallow, bare agricultural ground, green crops, and orchards). For Campinas, Brazil, five generic classes were tested: urban, agriculture, forest, water, and barren ground. For the Fresno-Kings Counties location 11 classes were studied: three generic classes (urban, water, barren land), and eight specific crops. In all cases the addition of SAR to EO resulted

  7. Classification algorithms to improve the accuracy of identifying patients hospitalized with community-acquired pneumonia using administrative data.

    Science.gov (United States)

    Yu, O; Nelson, J C; Bounds, L; Jackson, L A

    2011-09-01

    In epidemiological studies of community-acquired pneumonia (CAP) that utilize administrative data, cases are typically defined by the presence of a pneumonia hospital discharge diagnosis code. However, not all such hospitalizations represent true CAP cases. We identified 3991 hospitalizations during 1997-2005 in a managed care organization, and validated them as CAP or not by reviewing medical records. To improve the accuracy of CAP identification, classification algorithms that incorporated additional administrative information associated with the hospitalization were developed using the classification and regression tree analysis. We found that a pneumonia code designated as the primary discharge diagnosis and duration of hospital stay improved the classification of CAP hospitalizations. Compared to the commonly used method that is based on the presence of a primary discharge diagnosis code of pneumonia alone, these algorithms had higher sensitivity (81-98%) and positive predictive values (82-84%) with only modest decreases in specificity (48-82%) and negative predictive values (75-90%).
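    In the same spirit, a CART-style tree that combines the primary-diagnosis flag with length of stay can be sketched as below; the hospitalization data are simulated, so the fitted tree and its sensitivity/PPV are illustrative only.

```python
# Sketch: a CART classifier combining the discharge-diagnosis position and length of stay
# to flag true CAP hospitalizations. Data are simulated, not chart-reviewed records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 4000
primary_dx = rng.integers(0, 2, n)               # 1 = pneumonia is the primary discharge code
length_of_stay = rng.poisson(4, n) + 1           # days in hospital
# Simulated "truth": a primary code plus a multi-day stay makes true CAP more likely
p_cap = 0.15 + 0.55 * primary_dx + 0.02 * np.minimum(length_of_stay, 10)
y = rng.random(n) < np.clip(p_cap, 0, 0.95)

X = np.column_stack([primary_dx, length_of_stay])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)

pred = tree.predict(Xte)
sens = np.sum(pred & yte) / np.sum(yte)
ppv = np.sum(pred & yte) / np.sum(pred)
print(f"sensitivity={sens:.2f}  PPV={ppv:.2f}")
print(export_text(tree, feature_names=["primary_dx", "length_of_stay"]))
```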

  8. A classification of bioinformatics algorithms from the viewpoint of maximizing expected accuracy (MEA).

    Science.gov (United States)

    Hamada, Michiaki; Asai, Kiyoshi

    2012-05-01

    Many estimation problems in bioinformatics are formulated as point estimation problems in a high-dimensional discrete space. In general, it is difficult to design reliable estimators for this type of problem, because the number of possible solutions is immense, which leads to an extremely low probability for every solution, even for the one with the highest probability. Therefore, maximum score and maximum likelihood estimators do not work well in this situation, although they are widely employed in a number of applications. Maximizing expected accuracy (MEA) estimation, in which accuracy measures of the target problem and the entire distribution of solutions are considered, is a more successful approach. In this review, we provide an extensive discussion of algorithms and software based on MEA. We describe how a number of algorithms used in previous studies can be classified from the viewpoint of MEA. We believe that this review will be useful not only for users wishing to utilize software to solve the estimation problems appearing in this article, but also for developers wishing to design algorithms on the basis of MEA.
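    In generic form (notation mine, not the review's), the contrast between maximum-likelihood and MEA point estimation over a discrete solution space can be written as:

```latex
% Maximum likelihood picks the single highest-probability solution, whereas MEA picks the
% candidate whose accuracy, averaged over the whole distribution of solutions, is largest.
\[
  \hat{y}_{\mathrm{ML}} = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \; p(y \mid D),
  \qquad
  \hat{y}_{\mathrm{MEA}}
    = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \;
      \mathbb{E}_{\theta \sim p(\theta \mid D)}\!\bigl[\mathrm{Acc}(y,\theta)\bigr]
    = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \;
      \sum_{\theta \in \mathcal{Y}} p(\theta \mid D)\,\mathrm{Acc}(y,\theta),
\]
% where \mathcal{Y} is the discrete solution space, D the data, and Acc(y, theta) the
% accuracy of a candidate y when the true solution is theta.
```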

  9. Classification accuracy analysis of selected land use and land cover products in a portion of West-Central Lower Michigan

    Science.gov (United States)

    Ma, Kin Man

    2007-12-01

    Remote sensing satellites have been utilized to characterize and map land cover and its changes since the 1970s. However, uncertainties exist in almost all land use and land cover maps classified from remotely sensed images. In particular, it has been recognized that the spatial mis-registration of land cover maps can affect the true estimates of land use/land cover (LULC) changes. This dissertation addressed the following questions: what are the spatial patterns, magnitudes, and cover-dependencies of classification uncertainty associated with West-Central Lower Michigan's LULC products and how can the adverse effects of spatial misregistration on accuracy assessment be reduced? Two Michigan LULC products were chosen for comparison: the 1998 Muskegon River Watershed (MRW) Michigan Resource Information Systems LULC map and a 2001 Integrated Forest Monitoring and Assessment Prescription Project (IFMAP). The 1 m resolution 1998 MRW LULC map was derived from U.S. Geological Survey Digital Orthophoto Quarter Quadrangle (USGS DOQQs) color infrared imagery and was used as the reference map, since it has a thematic accuracy of 95%. The IFMAP LULC map was co-registered to a series of selected 1998 USGS DOQQs. The total combined root mean square error (rmse) distance of the georectified 2001 IFMAP was ±12.20 m. A spatial uncertainty buffer of at least 1.5 times the rmse was set at 20 m so that polygon core areas would be unaffected by spatial misregistration noise. A new spatial misregistration buffer protocol (SPATIALM_BUFFER) was developed to limit the effect of spatial misregistration on classification accuracy assessment. Spatial uncertainty buffer zones of 20 m were generated around LULC polygons of both datasets. Eight-hundred seventeen (817) stratified random accuracy assessment points (AAPs) were generated across the 1998 MRW map. Classification accuracy and kappa statistics were generated for both the 817 AAPs and 604 AAPs comparisons. For the 817 AAPs comparison, the

  10. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye trackers. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  11. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve a high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine-drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements with ~0.3 km/s accuracy. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  12. Influence of spatial temperature distribution on high accuracy interferometric metrology

    Science.gov (United States)

    Gu, Yongqiang; Miao, Erlong; Yan, Feng; Zhang, Jian; Yang, Huaijiang

    2010-10-01

    We calculate the influence of temperature change on the refractive index of air, establish a model of the air temperature distribution and analyze the effect of different temperature distributions on high accuracy interferometric metrology. First, a revised Edlen formula is employed to obtain the relation between temperature and the refractive index of air. We then introduce a fixed temperature gradient distribution over a spatial grid within the optical cavity between the reference flat and the test flat of the Fizeau interferometer, together with a random temperature-change function within each grid cell. Finally, all the rays through the air layer with different incident angles are traced by a Matlab program in order to obtain the final output position, angle and OPD for each ray. The influence of different temperature distributions and of the length of the optical cavity on the testing accuracy can be analyzed through the RMS value obtained from repeated ray tracing. As a result, the horizontal distribution (perpendicular to the optical axis) has a large effect on the testing accuracy. Thus, to realize high accuracy figure metrology, the horizontal distribution of temperature must be rigorously controlled and the length of the optical cavity shortened as much as possible. The results from our simulation are of great significance for the accuracy analysis of interferometric testing and for research on manufacturing an interferometer.
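    As a rough illustration of why the temperature field matters (the study itself uses a full revised Edlen formula), a commonly quoted simplified approximation for the refractive index of air shows the order of magnitude of the OPD change per degree; the coefficients and cavity length below are assumptions for illustration, not values from the paper.

```python
# Illustration only: a simplified approximation of the refractive index of air
# (the study uses a full revised Edlen formula, which is more involved).
# P in kPa, T in deg C, RH in %; coefficients are commonly quoted approximate values.
def n_air(pressure_kpa=101.325, temp_c=20.0, rh=50.0):
    return 1.0 + 7.86e-4 * pressure_kpa / (273.0 + temp_c) - 1.5e-11 * rh * (temp_c ** 2 + 160.0)

n20 = n_air(temp_c=20.0)
n21 = n_air(temp_c=21.0)
cavity_mm = 100.0  # assumed optical-cavity length
opd_shift_nm = (n20 - n21) * cavity_mm * 1e6
print(f"n(20 C) = {n20:.8f}   n(21 C) = {n21:.8f}")
print(f"OPD change over a {cavity_mm:.0f} mm cavity for a 1 C change: ~{opd_shift_nm:.0f} nm")
```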

  13. DIPSY, a low-cost GPS application with high accuracy

    NARCIS (Netherlands)

    Heijden, W.F.M. van der

    1998-01-01

    To improve the control of unmanned aircraft flying out of visual range, the controller needs to be provided with realtime information about the position and behaviour of the drone during the flight. The position of the drone has to be presented with a relative high accuracy to obtain accurate flight

  14. DIPSY, a low-cost GPS application with high accuracy

    NARCIS (Netherlands)

    Heijden, W.F.M. van der

    1999-01-01

    To improve the control of unmanned aircraft flying out of visual range, the controller needs to be provided with real-time information about the position and behaviour of the drone during the flight. The position of the drone has to be presented with a relative high accuracy to obtain accurate flight

  18. Accuracy of automated classification of major depressive disorder as a function of symptom severity

    Directory of Open Access Journals (Sweden)

    Rajamannar Ramasubbu, MD, FRCPC, MSc

    2016-01-01

    Conclusions: Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  19. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour on the same dataset in order to compare their efficiency in the sense of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
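    The evaluation protocol described above (10-fold cross-validation followed by a pairwise t-test on the fold scores) can be sketched for two of the methods as follows; the dataset is synthetic and the model settings are placeholders.

```python
# Sketch: 10-fold cross-validated accuracy for two of the tested methods (MLP and
# k-nearest neighbour) with a pairwise t-test over the matched fold scores.
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=800, n_features=40, n_informative=15, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)       # same folds for both models

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

mlp_scores = cross_val_score(mlp, X, y, cv=cv)
knn_scores = cross_val_score(knn, X, y, cv=cv)

t_stat, p_value = ttest_rel(mlp_scores, knn_scores)
print(f"MLP mean accuracy: {mlp_scores.mean():.3f}   kNN mean accuracy: {knn_scores.mean():.3f}")
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
```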

  20. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, it makes significant sense in management and estimation of agriculture through crop classification of high-resolution remote sensing image. Due to the complex and fragmentation of the features and the surroundings in the circumstance of high-resolution, the accuracy of the traditional classification methods has not been able to meet the standard of agricultural problems. In this case, this paper proposed a classification method for high-resolution agricultural remote sensing images based on convolution neural networks (CNN). For training, a large number of training samples were produced by panchromatic images of GF-1 high-resolution satellite of China. In the experiment, through training and testing on the CNN under the toolbox of deep learning by MATLAB, the crop classification finally got the correct rate of 99.66 % after the gradual optimization of adjusting parameter during training. Through improving the accuracy of image classification and image recognition, the applications of CNN provide a reference value for the field of remote sensing in PA.

  1. Application of Convolutional Neural Network in Classification of High Resolution Agricultural Remote Sensing Images

    Science.gov (United States)

    Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.

    2017-09-01

    With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, it makes significant sense in management and estimation of agriculture through crop classification of high-resolution remote sensing image. Due to the complex and fragmentation of the features and the surroundings in the circumstance of high-resolution, the accuracy of the traditional classification methods has not been able to meet the standard of agricultural problems. In this case, this paper proposed a classification method for high-resolution agricultural remote sensing images based on convolution neural networks (CNN). For training, a large number of training samples were produced by panchromatic images of GF-1 high-resolution satellite of China. In the experiment, through training and testing on the CNN under the toolbox of deep learning by MATLAB, the crop classification finally got the correct rate of 99.66 % after the gradual optimization of adjusting parameter during training. Through improving the accuracy of image classification and image recognition, the applications of CNN provide a reference value for the field of remote sensing in PA.
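    A small CNN for panchromatic crop patches, in the spirit of the approach above, might be set up as in the sketch below (PyTorch is used here for illustration; the paper's MATLAB deep learning toolbox, network architecture, patch size and class count are not reproduced).

```python
# Sketch: a small CNN for single-band (panchromatic) crop-patch classification.
# Architecture, 32x32 patch size and 5-class output are assumptions for illustration.
import torch
import torch.nn as nn

class CropPatchCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)   # assumes 32x32 input patches

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = CropPatchCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

patches = torch.randn(64, 1, 32, 32)          # a dummy batch of panchromatic patches
labels = torch.randint(0, 5, (64,))
for _ in range(3):                            # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
print("training loss after a few steps:", float(loss))
```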

  2. Computerized assessment of pedophilic sexual interest through self-report and viewing time: reliability, validity, and classification accuracy of the affinity program.

    Science.gov (United States)

    Mokros, Andreas; Gebhard, Michael; Heinz, Volker; Marschall, Roland W; Nitschke, Joachim; Glasgow, David V; Gress, Carmen L Z; Laws, D Richard

    2013-06-01

    Affinity is a computerized assessment tool that combines viewing time and self-report measures of sexual interest. The present study was designed to assess the diagnostic properties of Affinity with respect to sexual interest in prepubescent children. Reliability of both self-report and viewing time components was estimated to be high. The group profile of a sample of pedophilic adult male child molesters (n = 42, all of whom admitted their offenses) differed from the group profiles of male community controls (n = 95) and male nonsexual offenders (n = 27), respectively. More specifically, both ratings and viewing times for images showing small children or prejuvenile children were significantly higher within the child molester sample than in either of the other two groups, attesting to the validity of the measures. Overall classification accuracy, however, was mediocre: A multivariate classification routine yielded 50% sensitivity for child molester status at the cost of 13% false positives. The implications for forensic use of Affinity are discussed.

  3. Compensation of motion error in a high accuracy AFM

    Science.gov (United States)

    Cui, Yuguo; Arai, Yoshikazu; He, Gaofa; Asai, Takemi; Gao, Wei

    2008-10-01

    An atomic force microscope (AFM) system composed of an air slide, an air spindle and a probe unit is used for large-area measurement with a spiral scanning strategy. The motion error introduced by the air slide and the air spindle increases as the measurement area increases, and the measurement accuracy therefore decreases. In order to achieve high speed and high accuracy measurement, the probe scans along the X-direction in constant-height mode, driven by the air slide, and at the same time, based on the way the motion error changes, it moves along the Z-direction driven by a piezoactuator. Using this method of error compensation, a profile measurement experiment on a micro-structured surface has been carried out. The experimental result shows that this method is effective for eliminating motion error, and it can achieve high speed and high precision measurement of micro-structured surfaces.

  4. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and of patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as one potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-/intra-class distance. Based on the selected feature subsets, Fisher's linear classifiers were trained, and the final classification decision for a record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
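    The ensemble idea (several Fisher linear discriminants on selected feature subsets, combined by voting) is sketched below on simulated mfERG-like amplitude features; the subsets are fixed slices rather than the paper's inter-/intra-class distance selection.

```python
# Sketch: several Fisher linear discriminant classifiers, each on a different feature
# subset, combined by majority vote. Data are simulated, not real mfERG records.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_features = 120, 300            # e.g. amplitude samples across local responses
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)           # 0 = healthy, 1 = diabetic retinopathy
X[y == 1, :30] += 0.8                        # plant a weak class difference

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Illustrative "selected" subsets; the study picks them by an inter-/intra-class distance.
subsets = [np.arange(i, i + 30) for i in range(0, 150, 30)]
votes = np.zeros((len(Xte), 2))
for cols in subsets:
    clf = LinearDiscriminantAnalysis().fit(Xtr[:, cols], ytr)
    pred = clf.predict(Xte[:, cols])
    votes[np.arange(len(Xte)), pred] += 1

final = votes.argmax(axis=1)                 # majority vote over the classifiers' outputs
print("voted error rate:", np.mean(final != yte))
```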

  5. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifications, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Bias-corrected diagonal discriminant rules for high-dimensional classification.

    Science.gov (United States)

    Huang, Song; Tong, Tiejun; Zhao, Hongyu

    2010-12-01

    Diagonal discriminant rules have been successfully used for high-dimensional classification problems, but suffer from the serious drawback of biased discriminant scores. In this article, we propose improved diagonal discriminant rules with bias-corrected discriminant scores for high-dimensional classification. We show that the proposed discriminant scores dominate the standard ones under the quadratic loss function. Analytical results on why the bias-corrected rules can potentially improve the prediction accuracy are also provided. Finally, we demonstrate the improvement of the proposed rules over the original ones through extensive simulation studies and real case studies.
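    For orientation, the baseline (uncorrected) diagonal discriminant rule that the paper improves upon can be written compactly; the bias correction itself is not reproduced here, and the data are simulated.

```python
# Sketch: a standard diagonal discriminant rule (class means, pooled diagonal variances,
# log-prior term). The paper's bias-corrected scores are not implemented here.
import numpy as np

def fit_dlda(X, y):
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    resid = np.concatenate([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    var = (resid ** 2).sum(axis=0) / (len(X) - len(classes))   # pooled diagonal variances
    priors = np.array([np.mean(y == c) for c in classes])
    return classes, means, var, priors

def predict_dlda(X, model):
    classes, means, var, priors = model
    # score: squared standardized distance minus 2*log prior (smaller is better)
    scores = ((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=2) - 2 * np.log(priors)
    return classes[scores.argmin(axis=1)]

rng = np.random.default_rng(0)
p, n = 500, 60                                                 # high-dimensional: p >> n
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, n)
X[y == 1, :20] += 1.0                                          # plant a class difference
model = fit_dlda(X[:40], y[:40])
print("held-out accuracy:", np.mean(predict_dlda(X[40:], model) == y[40:]))
```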

  7. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method for the fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation mapping the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
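    One standard way to determine and apply such a transformation from marker pairs is a least-squares (Kabsch/SVD) rigid fit, sketched below; the paper's virtual-marker matching step is not reproduced, and all coordinates here are simulated.

```python
# Sketch: fit the rigid transform mapping "virtual marker" points from the optical (HDLA)
# data onto their contact-measured (LDHA) counterparts, then apply it to the whole cloud.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that dst ~ src @ R.T + t (Kabsch)."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))                 # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

rng = np.random.default_rng(0)
theta = np.deg2rad(7.0)                                    # an assumed small misalignment
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])

markers_cmm = rng.uniform(0.0, 100.0, size=(8, 3))         # contact-measured (LDHA) markers
markers_opt = (markers_cmm - 1.5) @ true_R.T + rng.normal(0.0, 0.05, (8, 3))  # optical (HDLA)

R, t = rigid_fit(markers_opt, markers_cmm)
cloud_opt = rng.uniform(0.0, 100.0, size=(100000, 3))      # dense optical point cloud
cloud_corrected = cloud_opt @ R.T + t                      # corrected towards the CMM frame
print("corrected cloud shape:", cloud_corrected.shape)

residual = np.abs(markers_opt @ R.T + t - markers_cmm).max()
print(f"max marker residual after correction: {residual:.3f} (input units)")
```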

  8. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    Directory of Open Access Journals (Sweden)

    Michael Korenberg

    2012-08-01

    Full Text Available In both military and civilian applications, the inertial navigation system (INS and the global positioning system (GPS are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.

  9. Texture classification of vegetation cover in high altitude wetlands zone

    Science.gov (United States)

    Wentao, Zou; Bingfang, Wu; Hongbo, Ju; Hua, Liu

    2014-03-01

    The aim of this study was to investigate the utility of datasets composed of texture measures and other features for the classification of vegetation cover, specifically wetlands. The QUEST decision tree classifier was applied to a SPOT-5 image sub-scene covering a typical wetlands area in the Three River Sources region in Qinghai province, China. The dataset used for the classification comprised: (1) spectral data and the components of a principal component analysis; (2) texture measures derived on a per-pixel basis; (3) a DEM and other ancillary data covering the research area. Image texture is an important characteristic of remote sensing images; it represents the spatial variation of spectral brightness in digital numbers. When the spectral information is not enough to separate the different land covers, texture information can be used to increase the classification accuracy. The texture measures used in this study were calculated from the GLCM (Gray Level Co-occurrence Matrix); eight frequently used measures were chosen to conduct the classification procedure. The results showed that variance, mean and entropy calculated from the GLCM with a 9×9 window were effective in distinguishing different vegetation types in the wetlands zone. The overall accuracy of this method was 84.19% and the Kappa coefficient was 0.8261. The results indicated that the introduction of texture measures can improve the overall accuracy by 12.05% and the overall kappa coefficient by 0.1407 compared with the result using spectral and ancillary data alone.
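    The GLCM-based measures named above (mean, variance, entropy, plus contrast and correlation) can be computed for a 9×9 window with scikit-image as sketched below; the band is synthetic, the grey-level quantization is an assumption, and the gray*/grey* function spelling depends on the scikit-image version.

```python
# Sketch: GLCM texture measures inside a 9x9 window of a quantized band.
# Function names follow recent scikit-image versions (graycomatrix/graycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
band = rng.integers(0, 64, size=(512, 512), dtype=np.uint8)   # quantized to 64 grey levels

r, c, half = 200, 300, 4                                       # a 9x9 window centre
window = band[r - half:r + half + 1, c - half:c + half + 1]

glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
p = glcm.mean(axis=(2, 3))                                     # average over distance/angle
levels = np.arange(64)

glcm_mean = (levels[:, None] * p).sum()
glcm_var = (((levels[:, None] - glcm_mean) ** 2) * p).sum()
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
contrast = graycoprops(glcm, "contrast").mean()
correlation = graycoprops(glcm, "correlation").mean()
print(dict(mean=glcm_mean, variance=glcm_var, entropy=entropy,
           contrast=contrast, correlation=correlation))
```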

  10. Why is a high accuracy needed in dosimetry? [Radiotherapy]

    Energy Technology Data Exchange (ETDEWEB)

    Lanzl, L.H.

    1976-01-01

    Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of gamma and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control.

  11. Navigation message designing with high accuracy for NAV

    Institute of Scientific and Technical Information of China (English)

    Wang Luxiao; Huang Zhigang; Zhao Yun

    2014-01-01

    Navigation message design with a high accuracy guarantee is the key to efficient navigation message distribution in the global navigation satellite system (GNSS), and developing accuracy-aware navigation message design algorithms is an important topic. This paper investigates the high-accuracy navigation message design problem with the message structure unchanged. The contributions made in this paper include a heuristic that employs the concept of the estimated range deviation (ERD) to improve the existing well-known navigation message on the L1 frequency (NAV) of the global positioning system (GPS) for good accuracy of service; a numerical analysis approximation method (NAAM) to evaluate the range error due to truncation (RET) of different navigation messages; and a basic positioning parameter design algorithm for the limited space allocation. Based on the predicted ultra-rapid data from the international GPS service for geodynamics (IGU), ERDs are generated in real time for error correction. Simulations show that the algorithms developed in this paper are general and flexible, and thus are applicable to NAV improvement and other navigation message designs.

  12. Land cover classification based on object-oriented with airborne lidar and high spectral resolution remote sensing image

    Science.gov (United States)

    Li, Fangfang; Liu, Zhengjun; Xu, Qiangqiang; Ren, Haicheng; Zhou, Xingyu; Yuan, Yonghua

    2016-10-01

    In order to improve land cover classification accuracy in the coastal tidal wetland area of Dafeng, this paper combines a hyperspectral remote sensing image with high-spatial-resolution airborne LiDAR data. Feature extraction, band selection and an nDSM model are introduced to reduce the dimensionality of the original image. After a segmentation process that combines FNEA segmentation with spectral-difference segmentation, the study area is classified through an object-oriented rule set. The results show that the proposed approach improves land cover classification accuracy significantly, with housing, shadow, water and vegetation all classified with high precision, and can therefore meet the needs of land cover classification in the coastal tidal wetland area of Dafeng. The main innovations are the introduction of principal component analysis and the use of characteristic indices, shape features and the nDSM feature extracted from the different data types to improve the accuracy and speed of land cover classification.

  13. High Accuracy, Miniature Pressure Sensor for Very High Temperatures Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SiWave proposes to develop a compact, low-cost MEMS-based pressure sensor for very high temperatures and low pressures in hypersonic wind tunnels. Most currently...

  14. Classification of High Spatial Resolution Image Using Multi Circular Local Binary Pattern and Variance

    Directory of Open Access Journals (Sweden)

    D. Chakraborty

    2013-11-01

    Full Text Available A high spatial resolution satellite image comprises textured and non-textured regions, so classifying such an image with either a pixel-based or a texture-based technique alone does not yield good results. In this study, the Multi Circular Local Binary Pattern (MCLBP) operator and variance (VAR) based measures are used together to transform the image for measuring texture. The transformed image is segmented into textured and non-textured regions using a threshold, and the original image is then split into textured and non-textured regions using this segmented image as a mask. The extracted textured region is classified with the ISODATA algorithm using the MCLBP and VAR values of each pixel, and the extracted non-textured region is classified with the ISODATA algorithm as well. For the non-textured region, the per-pixel MCLBP and VAR values are not used for classification, as no significant textural variation is found among the different classes. The independently classified outputs of the textured and non-textured regions are finally merged to obtain the final classified image. IKONOS 1 m PAN images were classified using the proposed algorithm, and the classification accuracy was found to be more than 84%.
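
    MCLBP is a multi-circular extension of the basic local binary pattern. As a hedged approximation of the texture transform described above, the sketch below combines scikit-image's rotation-invariant LBP with a local variance (VAR) image and a simple threshold to produce the textured/non-textured mask; the radius, number of sampling points, window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import local_binary_pattern

def texture_mask(pan, radius=2, points=16, win=9, var_threshold=100.0):
    """Split a panchromatic image into textured / non-textured regions using
    a rotation-invariant LBP code and a local variance (VAR) measure."""
    pan = pan.astype(float)
    lbp = local_binary_pattern(pan, P=points, R=radius, method='ror')

    # Local variance over a win x win neighbourhood: E[x^2] - (E[x])^2.
    local_mean = uniform_filter(pan, size=win)
    local_var = np.maximum(uniform_filter(pan ** 2, size=win) - local_mean ** 2, 0.0)

    textured = local_var > var_threshold        # simple threshold on VAR
    return lbp, local_var, textured
```

    The two masked regions would then be classified separately with ISODATA, as described in the abstract.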

  15. Optimization of post-classification processing of high-resolution satellite image: A case study

    Institute of Scientific and Technical Information of China (English)

    DONG; Rencai; DONG; Jiajia; WU; Gang; DENG; Hongbing

    2006-01-01

    The application of remote sensing monitoring techniques plays a crucial role in evaluating and governing the vast number of ecological construction projects in China. However, extracting information on ecological engineering targets from high-resolution satellite imagery is arduous due to the unique topography and complicated spatial pattern of the Loess Plateau of China. As a result, enhancing classification accuracy is a huge challenge for high-resolution image processing techniques. Image processing techniques have a definitive effect on image properties, and the selection of different parameters may change the final classification accuracy during post-classification processing. The common method for eliminating noise and smoothing an image is majority filtering; however, the filter may modify the original classified image and the final accuracy. The aim of this study is to develop an efficient and accurate post-processing technique for acquiring information on soil and water conservation engineering, on the Loess Plateau of China, using SPOT imagery with 2.5 m resolution. We argue that it is vital to optimize satellite image filtering parameters for specific areas and purposes, with a focus on monitoring ecological construction projects. We want to know how image filtering influences the final classified results and which filtering kernel is optimal. The study design used a series of window sizes to filter the original classified image, and then assessed the accuracy of each output map and image quality. We measured the relationship between filtering window size and classification accuracy, and optimized the post-processing techniques for SPOT5 satellite images. We conclude that (1) smoothing with the majority filter is sensitive to the information accuracy of soil and water conservation engineering, and (2) for the SPOT5 2.5 m image, the 5×5 pixel majority filter is the most suitable kernel for extracting information on ecological construction sites on the Loess Plateau of China.
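
    As an illustration of the 5×5 majority filtering discussed above, the sketch below applies a modal filter to an integer class map with scipy; the helper name and the edge-handling mode are assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(class_map, size=5):
    """Smooth a classified map with a size x size majority (modal) filter."""
    def modal(values):
        return np.bincount(values.astype(np.int64)).argmax()
    return generic_filter(class_map, modal, size=size, mode='nearest')

# smoothed = majority_filter(classified_image, size=5)  # the 5x5 kernel found optimal above
```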

  16. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    Science.gov (United States)

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.

  17. Optimizing statistical classification accuracy of satellite remotely sensed imagery for supporting fast flood hydrological analysis

    Science.gov (United States)

    Alexakis, Dimitrios; Agapiou, Athos; Hadjimitsis, Diofantos; Retalis, Adrianos

    2012-06-01

    The aim of this study is to improve classification results of multispectral satellite imagery for supporting flood risk assessment analysis in a catchment area in Cyprus. For this purpose, precipitation and ground spectroradiometric data have been collected and analyzed with innovative statistical analysis methods. Samples of regolith and construction material were in situ collected and examined in the spectroscopy laboratory for their spectral response under consecutive different conditions of humidity. Moreover, reflectance values were extracted from the same targets using Landsat TM/ETM+ images, for drought and humid time periods, using archived meteorological data. The comparison of the results showed that spectral responses for all the specimens were less correlated in cases of substantial humidity, both in laboratory and satellite images. These results were validated with the application of different classification algorithms (ISODATA, maximum likelihood, object based, maximum entropy) to satellite images acquired during time period when precipitation phenomena had been recorded.

  18. Absolute Radiometric Calibration of ALS Intensity Data: Effects on Accuracy and Target Classification

    Directory of Open Access Journals (Sweden)

    Anssi Krooks

    2011-11-01

    Full Text Available Radiometric calibration of airborne laser scanning (ALS) intensity data aims at retrieving a value related to the target scattering properties that is independent of the instrument and flight parameters. The aim of a calibration procedure is also to make results from different flights and instruments comparable, but practical applications are sparsely available, and the performance of calibration methods for this purpose needs to be further assessed. We have studied radiometric calibration with data from three separate flights and two different instruments using external calibration targets. We find that intensity data from different flights and instruments can be compared to each other only after a radiometric calibration process using separate calibration targets carefully selected for each flight. The calibration is also necessary for target classification purposes, such as separating vegetation from sand using intensity data from different flights. The classification results are meaningful only for calibrated intensity data.

  19. Classification accuracy of algorithms for blood chemistry data for three aquaculture-affected marine fish species.

    Science.gov (United States)

    Coz-Rakovac, R; Topic Popovic, N; Smuc, T; Strunjak-Perovic, I; Jadan, M

    2009-11-01

    The objective of this study was the determination and discrimination of biochemical data among three aquaculture-affected marine fish species (sea bass, Dicentrarchus labrax; sea bream, Sparus aurata L.; and mullet, Mugil spp.) based on machine-learning methods. The approach relying on machine-learning methods gives more usable classification solutions and provides better insight into the collected data. So far, these newer methods have been applied to the problem of discriminating blood chemistry data with respect to season and feed for a single species. This is the first time these classification algorithms have been used as a framework for rapid differentiation among three fish species. Among the machine-learning methods used, decision trees provided the clearest model, which correctly classified 210 samples (85.71%) and misclassified 35 samples (14.29%), clearly identifying the three investigated species from their biochemical traits.

  20. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    Full Text Available This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for the classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for the classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
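
    The paper's combiner is a neural network over ECG-specific base modules. As a hedged stand-in that captures the key idea, letting the combiner see both the base classifiers' outputs and the original input pattern, scikit-learn's StackingClassifier with passthrough=True can be used; the synthetic dataset, base learners and hyperparameters below are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data; real use would supply ECG beat features and beat labels.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [('mlp', MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ('rf', RandomForestClassifier(n_estimators=100, random_state=0)),
        ('svc', SVC(probability=True, random_state=0))]

# passthrough=True appends the original input pattern to the base-classifier
# outputs seen by the combiner, mirroring the "modified" stacking idea.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           passthrough=True)
stack.fit(X_tr, y_tr)
print('held-out accuracy:', stack.score(X_te, y_te))
```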

  1. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.

  2. State of the art in high accuracy high detail DTMs derived from ALS

    Science.gov (United States)

    Pfeifer, N.; Briese, C.; Mandlburger, G.; Höfle, B.; Ressl, C.

    2009-04-01

    High-resolution Digital Terrain Models (DTMs) representing the bare Earth are a fundamental input for various applications in geomorphology. Airborne laser scanning (ALS) is established as a standard tool for deriving DTMs over large areas with unprecedented accuracy. Due to advances in sensor technology and processing algorithms in recent years, the obtainable accuracy is still increasing. Accuracy is understood as the deviation of the elevation at one specified point from its true value. These advances may lead to more efficient data acquisition if reduced accuracy is acceptable, but they also allow acquisition schemes in which more detail, i.e. small features of the relief, becomes visible. For the latter, a high internal precision, i.e. repeatability, is necessary. The essential advances in the technology are improvements in ranging through the introduction of full-waveform (FWF) laser scanning and rigorous models of strip adjustment. In FWF laser scanning the time-dependent strength of the backscattered signal is recorded, as opposed to the analogue processing of the incoming energy and the storage of a single arrival time in discrete-return systems. In a simple one-echo situation, the arrival time corresponds to the maximum of the waveform. By applying a decomposition of the full waveform into single echoes, which are transformed copies of the emitted signal, it is possible to retrieve more echoes per shot. Additionally, if echoes of individual scatterers overlap, FWF sensors might be able to separate them, whereas discrete-return systems might only be able to derive one collective arrival time. Furthermore, the superposition of two echoes does not have its maxima at the same positions as the individual echoes. Additionally, the pulse repetition rate of laser scanners has increased, which allows higher point densities and therefore greater richness of detail. These advances in data acquisition increase the precision within one ALS strip. Deficiencies in
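
    The echo decomposition step mentioned above can be illustrated by fitting a sum of Gaussians to a waveform by least squares; the sketch below uses scipy.optimize.curve_fit on a simulated two-echo return, with all amplitudes, centres and widths chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_mixture(t, *params):
    """Sum of Gaussian echoes; params = [amp1, centre1, width1, amp2, ...]."""
    out = np.zeros_like(t, dtype=float)
    for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
        out += a * np.exp(-0.5 * ((t - c) / w) ** 2)
    return out

# Simulated two-echo waveform (e.g. canopy return followed by ground return).
t = np.linspace(0, 60, 300)                                    # nanoseconds
waveform = (gaussian_mixture(t, 1.0, 20.0, 2.5, 0.6, 31.0, 2.5)
            + np.random.normal(0, 0.02, t.size))

# Initial guesses for two echoes; the fitted centres give the echo arrival times.
p0 = [0.8, 18.0, 3.0, 0.5, 33.0, 3.0]
popt, _ = curve_fit(gaussian_mixture, t, waveform, p0=p0)
print('estimated echo arrival times (ns):', np.sort(popt[1::3]))
```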

  3. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentation. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotations of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and by 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method that we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  4. High accuracy and visibility-consistent dense multiview stereo.

    Science.gov (United States)

    Vu, Hoang-Hiep; Labatut, Patrick; Pons, Jean-Philippe; Keriven, Renaud

    2012-05-01

    Since the initial comparison of Seitz et al., the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al., showing the results to compare more than favorably with the current state-of-the-art methods.

  5. Novel method for high accuracy figure measurement of optical flat

    Science.gov (United States)

    E, Kewei; Li, Dahai; Yang, Lijie; Guo, Guangrao; Li, Mengyang; Wang, Xuemin; Zhang, Tao; Xiong, Zhao

    2017-01-01

    Phase Measuring Deflectometry (PMD) is a non-contact, high-dynamic-range and full-field metrology that has become a serious competitor to interferometry. However, the accuracy of deflectometry is strongly influenced by the quality of the calibrations, including the test geometry, the imaging pin-hole camera and the digital display. In this paper, we propose a novel method that can measure an optical flat surface figure to high accuracy. We first calibrate the camera using a checker pattern shown on an LCD display at six different orientations, the last of which is aligned at the same position as the test optical flat. By using this method, lens distortions and the mapping relationship between the CCD pixels and the subaperture coordinates on the test optical flat can be determined at the same time. To further reduce the influence of calibration errors on the measurements, a reference optical flat with a high-quality surface is measured, and the system errors in our PMD setup are then eliminated by subtracting the figure of the reference flat from the figure of the test flat. Although no expensive coordinate measuring equipment, such as a laser tracker or a coordinate measuring machine, is used in our measurement, our experimental results for the optical flat figure, from low- to high-order aberrations, still show good agreement with those from a Fizeau interferometer.

  6. Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies.

    Science.gov (United States)

    Avetisyan, H.; Bruna, O.; Holub, J.

    2016-11-01

    Numerous techniques and algorithms are dedicated to extracting emotions from input data. Our investigation found that emotion-detection approaches can be classified into the following three types: keyword-based/lexical-based, learning-based, and hybrid. The most commonly used techniques, such as the keyword-spotting method, Support Vector Machines, the Naïve Bayes Classifier, the Hidden Markov Model and hybrid algorithms, have achieved impressive results in this area and can reach classification accuracies of more than 90%.

  7. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests.

    Science.gov (United States)

    Maroco, João; Silva, Dina; Rodrigues, Ana; Guerreiro, Manuela; Santana, Isabel; de Mendonça, Alexandre

    2011-08-17

    Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Press' Q test showed that all classifiers performed better than chance alone. Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve of Me = 0.90; however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a
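
    A hedged sketch of the comparison protocol follows: several classifiers evaluated with 5-fold cross-validation on a stand-in dataset. The synthetic data and hyperparameters are assumptions and do not reproduce the study's clinical data or its full set of ten classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for 10 neuropsychological test scores with a binary outcome.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           weights=[0.6, 0.4], random_state=1)

models = {
    'LDA': LinearDiscriminantAnalysis(),
    'Logistic regression': LogisticRegression(max_iter=1000),
    'SVM': SVC(),
    'Decision tree': DecisionTreeClassifier(random_state=1),
    'Random forest': RandomForestClassifier(n_estimators=200, random_state=1),
    'MLP': MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring='accuracy')
    auc = cross_val_score(model, X, y, cv=5, scoring='roc_auc')
    print(f'{name:20s} accuracy={acc.mean():.2f}  AUC={auc.mean():.2f}')
```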

  8. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Guizhou Wang

    2013-01-01

    Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine. Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy.
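
    The spatial mapping step with the area-dominant principle and an area-proportion threshold can be sketched as follows; the function name, default threshold and the assumption of non-negative integer class labels are this edit's, not the paper's.

```python
import numpy as np

def map_classes_to_segments(pixel_classes, segments, threshold=0.5, unclassified=-1):
    """Assign each segment its dominant pixel class, or mark it unclassified
    when the dominant class covers no more than `threshold` of the segment.

    pixel_classes and segments are same-shape rasters of non-negative integers.
    """
    out = np.full(segments.shape, unclassified, dtype=int)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        counts = np.bincount(pixel_classes[mask])
        dominant = counts.argmax()
        if counts[dominant] / mask.sum() > threshold:
            out[mask] = dominant
    return out
```

    Segments left unclassified here would then be reclassified spectrally with the minimum-distance-to-mean rule, as described above.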

  9. Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2013-11-01

    Full Text Available The aim of this research is to present a detailed, step-by-step method for the classification of very high resolution urban satellite images (VHRSI) into specific classes such as road, building, vegetation, etc., using fuzzy logic. In this study, object-based image analysis is used for image classification. The main problems in high resolution image classification are the uncertainty in the position of object borders in satellite images and the resemblance of individual segments to multiple classes. In order to address these problems, fuzzy logic is used for image classification, since it provides the possibility of analyzing the image using multiple parameters without requiring crisp thresholds in the class assignment process. An inclusive semi-automatic method for image classification is offered, which presents the configuration of the related fuzzy membership functions as well as the fuzzy rules. The produced results are compared to the results of a conventional classification using the same parameters but with crisp rules. The overall accuracies and kappa coefficients of the presented method are higher than those of the comparison classification.

  10. High Accuracy Near-infrared Imaging Polarimetry with NICMOS

    CERN Document Server

    Batcheldor, D; Hines, D C; Schmidt, G D; Axon, D J; Robinson, A; Sparks, W; Tadhunter, C

    2008-01-01

    The findings of a nine-orbit calibration plan, carried out during HST Cycle 15 to fully determine the NICMOS camera 2 (2.0 micron) polarization calibration to high accuracy, are reported. Recently Ueta et al. and Batcheldor et al. have suggested that NICMOS possesses a residual instrumental polarization at a level of 1.2-1.5%. This would completely inhibit the data reduction in a number of GO programs and hamper the ability of the instrument to perform high accuracy polarimetry. We obtained polarimetric calibration observations of three polarimetric standards at three spacecraft roll angles separated by ~60°. Combined with archival data, these observations were used to characterize the residual instrumental polarization in order for NICMOS to reach its full potential of accurate imaging polarimetry at p~1%. Using these data, we place a 0.6% upper limit on the instrumental polarization and calculate values of the parallel transmission coefficients that reproduce the ground-based results for the polarimetri...

  11. High-accuracy mass spectrometry for fundamental studies.

    Science.gov (United States)

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given of the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation, the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino; (ii) the Experimental Cooler-Storage Ring at GSI, a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses; and (iii) the Penning trap facility SHIPTRAP at GSI, the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP for fundamental studies with highly charged ions.

  12. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    Science.gov (United States)

    Xu, X. Q.

    2015-09-01

    Earth rotation reflects coupling processes among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. Earth rotation can be described by the Earth orientation parameters, abbreviated as EOP (mainly comprising the two polar motion components PM_X and PM_Y, and the variation in the length of day ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks, and the growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects with the aim of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method after performing least squares (LS) extrapolation. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required time span of the EOP prediction: for short-term prediction the basic data series should be shorter, while for long-term prediction the series should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients
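
    A minimal sketch of the LS+AR idea mentioned in (1) follows, assuming daily sampling, a linear trend plus annual and semi-annual harmonics for the LS part, and an AR model fitted to the residuals by ordinary least squares; all of these choices are illustrative and not the thesis' exact configuration.

```python
import numpy as np

def ls_ar_predict(series, horizon, periods=(365.25, 182.625), ar_order=20):
    """LS+AR sketch: fit a least-squares trend plus harmonics, model the
    residuals with an autoregressive process, and extrapolate both parts."""
    n = len(series)
    t = np.arange(n, dtype=float)
    t_future = np.arange(n, n + horizon, dtype=float)

    def design(tt):
        cols = [np.ones_like(tt), tt]                    # bias + linear trend
        for p in periods:                                # annual, semi-annual
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), series, rcond=None)
    resid = series - design(t) @ coef

    # AR part: regress each residual on its ar_order predecessors.
    lagged = np.column_stack([resid[ar_order - k - 1:n - k - 1] for k in range(ar_order)])
    phi, *_ = np.linalg.lstsq(lagged, resid[ar_order:], rcond=None)

    history = list(resid[-ar_order:])
    ar_forecast = []
    for _ in range(horizon):
        nxt = float(np.dot(phi, history[::-1][:ar_order]))   # most recent lag first
        ar_forecast.append(nxt)
        history.append(nxt)

    return design(t_future) @ coef + np.array(ar_forecast)
```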

  13. Researching the technology of high-accuracy camshaft measurement

    Science.gov (United States)

    Chen, Wei; Chen, Yong-Le; Wang, Hong; Liao, Hai-Yang

    1996-10-01

    This paper describes in detail the cam data processing algorithm used in a high-accuracy camshaft measurement system. The algorithm comprises: 1) using the minimum error of curve symmetry to locate the center position of the key slot; 2) calculating the minimum error with respect to the theoretical cam curve to search for the top area; 3) using the cam tolerance function E(i) and the minimum angle error at the cam top to find the best position of the cam top and obtain the best angle value and error curve. The algorithm is suitable for measuring all kinds of symmetric or asymmetric cams, with plain push-rod or spherical push-rod followers, for example bus, car and motor camshafts. Using this algorithm, high-accuracy measurement can be achieved.

  14. Read-only high accuracy volume holographic optical correlator

    Science.gov (United States)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    A read-only volume holographic correlator (VHC) is proposed. After all of the correlation database pages have been recorded by angular multiplexing, a stand-alone read-only high-accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two different lasers are employed for recording and readout, the optical alignment tolerance of the laser illumination on the SLM is very tight. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.

  15. Spatial augmented reality based high accuracy human face projection

    Science.gov (United States)

    Li, Dong; Xie, Jinghui; Li, Yufeng; Weng, Dongdong; Liu, Yue

    2015-08-01

    This paper discusses the imaging principles and the technical difficulties of spatial augmented reality based human face projection, and proposes a novel geometry correction method to realize fast, high-accuracy face model projection. A depth camera is used to reconstruct the projected object, from which the relative position of the rendered model with respect to the projector is obtained and the initial projection image is generated. The projected image is then distorted using Bezier interpolation to guarantee that the projected texture matches the object surface. The proposed method follows a simple processing flow and achieves high-quality perceptual registration of the virtual and real objects. In addition, the method performs well even when the reconstructed model is not exactly the same as the rendered virtual model, which extends its range of application in spatial augmented reality based human face projection.

  16. Hazard classification assessment for the High Voltage Initiator

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, J.D.

    1994-04-19

    An investigation was conducted to determine whether the High Voltage Initiator (Sandia p number 395710; Navy NAVSEA No. 6237177) could be assigned a Department of Transportation (DOT) hazard classification of "IGNITERS, 1.4G, UN0325" under Code of Federal Regulations, 49 CFR 173.101, when packaged per Mound drawing NXB911442. A hazard classification test was performed, and the test data led to a recommended hazard classification of "IGNITERS, 1.4G, UN0325," based on guidance outlined in DOE Order 1540.2 and 49 CFR 173.56.

  17. A Generalized Image Scene Decomposition-Based System for Supervised Classification of Very High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZhiYong Lv

    2016-09-01

    Full Text Available Very high resolution (VHR) remote sensing images are widely used for land cover classification. However, to the best of our knowledge, few approaches have been shown to improve classification accuracy through image scene decomposition. In this paper, a simple yet powerful observational scene scale decomposition (OSSD)-based system is proposed for the classification of VHR images. Different from traditional methods, the OSSD-based system aims to improve classification performance by decomposing the complexity of an image’s content. First, an image scene is divided into sub-image blocks through segmentation to decompose the image content. Subsequently, each sub-image block is either classified directly, or first processed with an image filter or a spectral–spatial feature extraction method, with each processed segment then taken as the feature input of a classifier. Finally, the classified sub-maps are fused together for accuracy evaluation. The effectiveness of the proposed approach was investigated through experiments performed on different images with different supervised classifiers, namely the support vector machine, k-nearest neighbor, naive Bayes classifier, and maximum likelihood classifier. Compared with the accuracy achieved without OSSD processing, the accuracy of each classifier improved significantly, and our proposed approach shows outstanding performance in terms of classification accuracy.

  18. Basic visual dysfunction allows classification of patients with schizophrenia with exceptional accuracy.

    Science.gov (United States)

    González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M

    2014-10-01

    Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia.

  19. Verdict Accuracy of Quick Reduct Algorithm using Clustering and Classification Techniques for Gene Expression Data

    Directory of Open Access Journals (Sweden)

    T.Chandrasekhar

    2012-01-01

    Full Text Available In most gene expression data, the number of training samples is very small compared to the large number of genes involved in the experiments. However, among the large number of genes, only a small fraction is effective for performing a certain task. Furthermore, a small subset of genes is desirable in developing gene expression based diagnostic tools that deliver reliable and understandable results. With gene selection, the cost of biological experiments and decisions can be greatly reduced by analyzing only the marker genes. An important application of gene expression data in functional genomics is to classify samples according to their gene expression profiles. Feature selection (FS) is a process which attempts to select the more informative features, and it is one of the important steps in knowledge discovery. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. This paper studies a feature selection method based on rough set theory. Furthermore, the K-Means and Fuzzy C-Means (FCM) algorithms have been applied to the reduced feature set without considering class labels, and the obtained results are compared with the original class labels. A Back Propagation Network (BPN) has also been used for classification. The performance of K-Means, FCM and BPN is then analyzed through the confusion matrix, and BPN is found to perform comparatively well.

  20. The Fisher Kernel Coding Framework for High Spatial Resolution Scene Classification

    Directory of Open Access Journals (Sweden)

    Bei Zhao

    2016-02-01

    Full Text Available High spatial resolution (HSR) image scene classification aims at bridging the semantic gap between low-level features and high-level semantic concepts, which is a challenging task due to the complex distribution of ground objects in HSR images. Scene classification based on the bag-of-visual-words (BOVW) model is one of the most successful ways to acquire high-level semantic concepts. However, the BOVW model assigns local low-level features to their closest visual words in the “visual vocabulary” (the codebook obtained by k-means clustering), which discards too many useful details of the low-level features in HSR images. In this paper, a feature coding method under the Fisher kernel (FK) coding framework is introduced to extend the BOVW model by characterizing the low-level features with a gradient vector instead of the count statistics of the BOVW model, which results in a significant decrease in the codebook size and an acceleration of the codebook learning process. By considering the differences in the distributions of ground objects in different regions of the images, local FK (LFK) is proposed for HSR image scene classification. The experimental results show that the proposed scene classification methods under the FK coding framework can greatly reduce the computational cost and can obtain better scene classification accuracy than methods based on the traditional BOVW model.

  1. Impact of the accuracy of automatic segmentation of cell nuclei clusters on classification of thyroid follicular lesions.

    Science.gov (United States)

    Jung, Chanho; Kim, Changick

    2014-08-01

    Automatic segmentation of cell nuclei clusters is a key building block in systems for the quantitative analysis of microscopy cell images. For that reason, it has received great attention over the last decade, and diverse automatic approaches to segmenting clustered nuclei, with varying levels of performance under different test conditions, have been proposed in the literature. To the best of our knowledge, however, there has so far been no comparative study of these methods. This study is a first attempt to fill this research gap. More precisely, the purpose of this study is to present an objective performance comparison of existing state-of-the-art segmentation methods. In particular, the impact of their accuracy on the classification of thyroid follicular lesions is also investigated "quantitatively" under the same experimental conditions, to evaluate the applicability of the methods. Thirteen different segmentation approaches are compared in terms of not only errors in nuclei segmentation and delineation, but also their impact on the performance of a system to classify thyroid follicular lesions using different metrics (e.g., diagnostic accuracy, sensitivity, specificity, etc.). Extensive experiments have been conducted on a total of 204 digitized thyroid biopsy specimens. Our study demonstrates that significant diagnostic errors can be avoided using more advanced segmentation approaches. We believe that this comprehensive comparative study serves as a reference point and guide for developers and practitioners in choosing an appropriate automatic segmentation technique for building automated systems to classify follicular thyroid lesions. © 2014 International Society for Advancement of Cytometry.

  2. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  3. Computer-aided diagnosis of breast MRI with high accuracy optical flow estimation

    Science.gov (United States)

    Meyer-Baese, Anke; Barbu, Adrian; Lobbes, Marc; Hoffmann, Sebastian; Burgeth, Bernhard; Kleefeld, Andreas; Meyer-Bäse, Uwe

    2015-05-01

    Non-mass-enhancing lesions represent a challenge for radiological reading. They are not well defined in either morphology (geometric shape) or kinetics (temporal enhancement) and pose a problem for lesion detection and classification. To enhance the discriminative properties of an automated radiological workflow, the correct preprocessing steps need to be taken. In a typical computer-aided diagnosis (CAD) system, motion compensation plays an important role. To this end, we employ a new high-accuracy optical flow based motion compensation algorithm with robustification variants. An automated computer-aided diagnosis system evaluates the atypical behavior of these lesions, and additionally considers the impact of non-rigid motion compensation on a correct diagnosis.
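
    The motion compensation step can be sketched as dense optical flow estimation followed by warping. The example below uses scikit-image's TV-L1 optical flow as a stand-in for the paper's high accuracy optical flow algorithm with robustification variants, so the function choice and parameters are assumptions.

```python
import numpy as np
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

def compensate_motion(reference, moving):
    """Estimate a dense flow field between two DCE-MRI frames (2-D, grayscale)
    and warp the later frame back onto the reference grid."""
    v, u = optical_flow_tvl1(reference, moving)          # per-pixel displacements
    rows, cols = reference.shape
    row_coords, col_coords = np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing='ij')
    # Sample the moving frame at the displaced coordinates.
    return warp(moving, np.array([row_coords + v, col_coords + u]),
                mode='edge', preserve_range=True)
```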

  4. Benchmarking Deep Learning Frameworks for the Classification of Very High Resolution Satellite Multispectral Data

    Science.gov (United States)

    Papadomanolaki, M.; Vakalopoulou, M.; Zagoruyko, S.; Karantzalos, K.

    2016-06-01

    In this paper we evaluated deep-learning frameworks based on Convolutional Neural Networks for the accurate classification of multispectral remote sensing data. Certain state-of-the-art models were tested on the publicly available SAT-4 and SAT-6 high resolution satellite multispectral datasets. In particular, the benchmark included the AlexNet, AlexNet-small and VGG models, which were trained and applied to both datasets exploiting all the available spectral information. Deep Belief Networks, Autoencoders and other semi-supervised frameworks were also compared. The high-level features calculated by the tested models managed to classify the different land cover classes with very high accuracy rates, i.e., above 99.9%. The experimental results demonstrate the great potential of advanced deep-learning frameworks for the supervised classification of high resolution multispectral remote sensing data.

  5. Monitoring techniques for high accuracy interference fit assembly processes

    Science.gov (United States)

    Liuti, A.; Vedugo, F. Rodriguez; Paone, N.; Ungaro, C.

    2016-06-01

    In the automotive industry, there are many assembly processes that require high geometric accuracy, in the micrometer range; generally, open-loop controllers cannot meet these requirements, which results in an increased defect rate and high production costs. This paper presents an experimental study of the interference fit process, aimed at evaluating the aspects that have the most impact on the uncertainty of the final positioning. The press-fitting process considered consists of a press machine operating with a piezoelectric actuator to press a plug into a sleeve. The plug and sleeve are designed and machined to obtain a known interference fit. The differential displacement and velocity of the plug with respect to the sleeve are measured by a fiber optic differential laser Doppler vibrometer. Different driving signals for the piezo actuator give insight into the differences between a linear and a pulsating press action. The paper highlights how the press-fit assembly process is characterized by two main phases: the first is an elastic deformation of the plug and sleeve, which produces a reversible displacement; the second is a sliding of the plug with respect to the sleeve, which results in an irreversible displacement and finally realizes the assembly. The simultaneous measurement of displacement and force permits the definition of characteristic features in the signal that are useful for identifying the start of the irreversible movement. These indicators could be used to develop a control logic for a press assembly process.

  6. A high-accuracy DCO with hybrid architecture

    Science.gov (United States)

    Sun, Yapeng; Zhao, Huidong; Qiao, Shushan; Hei, Yong; Zhang, Fuhai

    2017-07-01

    In this paper, a novel hybrid digitally controlled oscillator (DCO) is proposed, which is used to improve the accuracy of an all-digital clock generator without a reference source. The DCO with hybrid architecture consists of two parts: DCO_high and DCO_low. DCO_high determines the coarse output frequency of the DCO and adopts a cascade structure to decrease the area. DCO_low adopts a chain structure with three-state buffers and determines the fine output frequency of the DCO. Compared with a traditional cascade DCO, the proposed hybrid DCO features higher precision with less inherent delay. Therefore the clock generator can tolerate process, voltage and temperature (PVT) variation and meet the needs of different conditions. The DCO is designed in the SMIC 180 nm CMOS process with a 0.021 mm2 chip area. The output frequency is adjustable from 15 to 120 MHz. The frequency error is less than 0.83% at 25 MHz with 1.6-1.8 V supply voltage and 0-80 °C temperature variation in the TT, FF and SS corners. Project supported by the National Natural Science Foundation of China (Nos. 61306025, 61474135).

  7. High Accuracy Human Activity Monitoring using Neural network

    CERN Document Server

    Sharma, Annapurna; Chung, Wan-Young

    2011-01-01

    This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer, housed in a chest-worn sensor unit, was used to capture the acceleration of the associated movements. All three axes of acceleration data were collected at a base station PC via a CC2420 2.4 GHz ISM band radio (ZigBee wireless compliant), then processed and classified using MATLAB. A neural network approach was chosen for classification based on theoretical and empirical considerations. The work gives a detailed description of the design steps for the classification of human body acceleration data. A 4-layer back-propagation neural network, trained with the Levenberg-Marquardt algorithm, showed the best performance among the neural network training algorithms considered.
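
    The original pipeline was implemented in MATLAB with Levenberg-Marquardt training, which the sketch below does not reproduce; it instead illustrates the same windowed feature extraction and neural network classification flow in Python with scikit-learn's MLP (the window length, the mean/standard-deviation features and the placeholder data are assumptions).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def window_features(acc_xyz, labels, win=64):
    """Per-window mean and standard deviation of each accelerometer axis."""
    feats, targets = [], []
    for start in range(0, len(acc_xyz) - win, win):
        seg = acc_xyz[start:start + win]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        targets.append(np.bincount(labels[start:start + win]).argmax())
    return np.array(feats), np.array(targets)

# Placeholder data: (n_samples, 3) tri-axial acceleration and an activity id per sample.
acc_xyz = np.random.randn(10000, 3)
labels = np.random.randint(0, 4, size=10000)

X, y = window_features(acc_xyz, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print('held-out accuracy:', clf.score(X_te, y_te))
```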

  8. Determination of UAV position using high accuracy navigation platform

    Directory of Open Access Journals (Sweden)

    Ireneusz Kubicki

    2016-07-01

    Full Text Available The choice of navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on board requires highly precise information about the object’s position. The exemplary solution presented for such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices, or with the complete navigation system. The position and spatial orientation errors of the measurement platform influence the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deterioration or loss of the images from SAR. Consequently, it is necessary to perform operations that reduce or eliminate the impact of the sensors’ errors on the UAV position accuracy, seeking compromise solutions between newer, better technologies and improvements in software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration

  9. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  10. Prioritizing spatial accuracy in high-resolution fMRI data using multivariate feature weight mapping

    Directory of Open Access Journals (Sweden)

    Johannes eStelzer

    2014-04-01

    Full Text Available Although ultra-high-field fMRI at field strengths of 7T or above provides substantial gains in BOLD contrast-to-noise ratio, when very high-resolution fMRI is required such gains are inevitably reduced. The improvement in sensitivity provided by multivariate analysis techniques, as compared with univariate methods, then becomes especially welcome. Information mapping approaches, such as the searchlight technique, are commonly used; they take into account the spatially distributed patterns of activation in order to predict stimulus conditions. However, the popular searchlight decoding technique, in particular, has been found to be prone to spatial inaccuracies. For instance, the spatial extent of informative areas is generally exaggerated, and their spatial configuration is distorted. We propose the combination of a nonparametric, permutation-based statistical framework with linear classifiers. We term this new combined method Feature Weight Mapping (FWM). The main goal of the proposed method is to map the specific contribution of each voxel to the classification decision while including a correction for the multiple comparisons problem. We then compare this new method to the searchlight approach using a simulation and ultra-high-field 7T experimental data. We found that the searchlight method led to spatial inaccuracies that are especially noticeable in high-resolution fMRI data. In contrast, FWM was more spatially precise, revealing both informative anatomical structures as well as the direction by which voxels contribute to the classification. By maximizing the spatial accuracy of ultra-high-field fMRI results, global multivariate methods provide a substantial improvement for characterizing structure-function relationships.
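
    A hedged sketch of the core idea follows: map per-voxel linear classifier weights and assess them against a label-permutation null distribution. Binary classification is assumed, and the multiple-comparisons correction described in the abstract (for example a max-statistic approach across voxels) is omitted for brevity.

```python
import numpy as np
from sklearn.svm import LinearSVC

def feature_weight_pvalues(X, y, n_perm=500, random_state=0):
    """Compare each feature's (voxel's) absolute linear-SVM weight with a
    null distribution of weights obtained after shuffling the labels."""
    rng = np.random.default_rng(random_state)
    observed = np.abs(LinearSVC(dual=False).fit(X, y).coef_.ravel())

    null = np.zeros((n_perm, X.shape[1]))
    for i in range(n_perm):
        null[i] = np.abs(LinearSVC(dual=False).fit(X, rng.permutation(y)).coef_.ravel())

    # p-value per voxel: fraction of permutations with a weight at least as large.
    p = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
    return observed, p
```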

  11. Predictive Utility and Classification Accuracy of Oral Reading Fluency and the Measures of Academic Progress for the Wisconsin Knowledge and Concepts Exam

    Science.gov (United States)

    Ball, Carrie R.; O'Connor, Edward

    2016-01-01

    This study examined the predictive validity and classification accuracy of two commonly used universal screening measures relative to a statewide achievement test. Results indicated that second-grade performance on oral reading fluency and the Measures of Academic Progress (MAP), together with special education status, explained 68% of the…

  12. Automatic classification and pattern discovery in high-throughput protein crystallization trials.

    Science.gov (United States)

    Cumbaa, Christian; Jurisica, Igor

    2005-01-01

    Conceptually, protein crystallization can be divided into two phases: search and optimization. Robotic protein crystallization screening can speed up the search phase and has the potential to increase process quality. Automated image classification helps to increase throughput and consistently generates objective results. Although the classification accuracy can always be improved, our image analysis system can classify images from 1,536-well plates with high classification accuracy (85%) and ROC score (0.87), as evaluated on 127 human-classified protein screens containing 5,600 crystal images and 189,472 non-crystal images. Data mining can integrate results from high-throughput screens with information about crystallizing conditions, intrinsic protein properties, and results from crystallization optimization. We apply association mining, a data mining approach that identifies frequently occurring patterns among variables and their values. This approach segregates proteins into groups based on how they react in a broad range of conditions, and clusters cocktails to reflect their potential to achieve crystallization. These results may lead to crystallization screen optimization, and reveal associations between protein properties and crystallization conditions. We also postulate that past experience may lead us to the identification of initial conditions favorable to crystallization for novel proteins.

  13. Investigation of the trade-off between time window length, classifier update rate and classification accuracy for restorative brain-computer interfaces.

    Science.gov (United States)

    Darvishi, Sam; Ridding, Michael C; Abbott, Derek; Baumert, Mathias

    2013-01-01

    Recently, the application of restorative brain-computer interfaces (BCIs) has received significant interest in many BCI labs. However, there are a number of challenges that need to be tackled to achieve efficient performance of such systems. For instance, any restorative BCI needs an optimum trade-off between time window length, classification accuracy and classifier update rate. In this study, we investigated possible solutions to these problems using a dataset provided by the University of Graz, Austria. We used a continuous wavelet transform and the Student t-test for feature extraction and a support vector machine (SVM) for classification. We find that improved results for restorative BCIs in rehabilitation may be achieved by using a 750-millisecond time window, with an average classification accuracy of 67%, that updates every 32 milliseconds.
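
    The pipeline described (wavelet features, t-test selection, SVM, several window lengths) could be sketched roughly as below. All data, the sampling rate, wavelet scales and selection sizes are assumptions for illustration only, not the study's settings.

```python
# Hypothetical sketch of the window-length trade-off: wavelet power features,
# t-test feature selection and an SVM, evaluated for several window lengths.
import numpy as np
import pywt
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 256                                  # assumed sampling rate (Hz)
epochs = rng.standard_normal((120, fs))   # 120 one-second single-channel epochs (simulated)
labels = rng.integers(0, 2, size=120)     # motor imagery vs. rest (simulated)

def cwt_power(epoch):
    coefs, _ = pywt.cwt(epoch, scales=np.arange(2, 32), wavelet="morl")
    return (np.abs(coefs) ** 2).mean(axis=1)          # mean power per scale

for window_ms in (250, 500, 750, 1000):
    n = int(fs * window_ms / 1000)
    feats = np.array([cwt_power(e[:n]) for e in epochs])
    # keep the scales whose power differs most between classes (Student t-test);
    # for brevity the selection is done on all epochs, in practice it should be
    # nested inside the cross-validation to avoid leakage
    t, _ = ttest_ind(feats[labels == 0], feats[labels == 1], axis=0)
    keep = np.argsort(-np.abs(t))[:10]
    acc = cross_val_score(SVC(kernel="rbf"), feats[:, keep], labels, cv=5).mean()
    print(f"{window_ms} ms window: CV accuracy = {acc:.2f}")
```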

  14. Combined Scintigraphy and Tumor Marker Analysis Predicts Unfavorable Histopathology of Neuroblastic Tumors with High Accuracy.

    Directory of Open Access Journals (Sweden)

    Wolfgang Peter Fendler

    Full Text Available Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels, and blood levels of neuron-specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combination of the TLCRR/NSE criteria reduced false negative findings from 11/9 to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.
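
    For readers unfamiliar with how such cut-offs are derived, the sketch below shows one common way of obtaining an AUC and a Youden-optimal threshold for a single marker. The data are simulated and the variable names are illustrative; this is not the study's analysis.

```python
# Illustrative sketch: ROC curve, AUC and Youden-optimal cut-off for a single
# predictor such as TLCRR or serum NSE (simulated values, not the study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
# simulated marker values: 34 unfavorable-histology and 13 favorable-histology patients
marker = np.concatenate([rng.normal(3.0, 1.0, 34), rng.normal(1.5, 0.7, 13)])
label = np.concatenate([np.ones(34, dtype=int), np.zeros(13, dtype=int)])

fpr, tpr, thresholds = roc_curve(label, marker)
auc = roc_auc_score(label, marker)
youden = tpr - fpr                          # Youden's J statistic per threshold
best = np.argmax(youden)
print(f"AUC = {auc:.2f}")
print(f"optimal cut-off = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```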

  15. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle...... and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  16. The Effects of Point or Polygon Based Training Data on RandomForest Classification Accuracy of Wetlands

    Directory of Open Access Journals (Sweden)

    Jennifer Corcoran

    2015-04-01

    Full Text Available Wetlands are dynamic in space and time, providing varying ecosystem services. Field reference data for both training and assessment of wetland inventories in the State of Minnesota are typically collected as GPS points over wide geographical areas and at infrequent intervals. This status quo makes it difficult to keep updated maps of wetlands with adequate accuracy, efficiency, and consistency to monitor change. Furthermore, point reference data may not be representative of the prevailing land cover type for an area, due to point location or heterogeneity within the ecosystem of interest. In this research, we present techniques for training a land cover classification for two study sites in different ecoregions by implementing the RandomForest classifier in three ways: (1) field and photo-interpreted points; (2) a fixed window surrounding the points; and (3) image objects that intersect the points. Additional assessments are made to identify the key input variables. We conclude that the image object area training method is the most accurate, and the most important variables include: compound topographic index, summer season green and blue bands, and grid statistics from LiDAR point cloud data, especially those that relate to the height of the return.
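
    A minimal sketch of the comparison described above (point-based versus object-averaged training of a random forest, plus variable importances) is given below. The data, the "object mean" construction and the feature list are assumptions for illustration, not the study's dataset or workflow.

```python
# Hypothetical sketch of the point- vs. object-trained RandomForest comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_sites, n_features = 300, 8               # e.g. CTI, green/blue bands, LiDAR height stats
point_features = rng.standard_normal((n_sites, n_features))
labels = rng.integers(0, 4, size=n_sites)  # four simulated wetland / upland classes

# "Object" training: average the features of all pixels inside the image object
# that intersects each reference point (here simulated as 20 noisy pixels per object).
object_features = point_features + rng.normal(0, 0.3, (20, n_sites, n_features)).mean(axis=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
for name, X in (("point", point_features), ("object mean", object_features)):
    acc = cross_val_score(rf, X, labels, cv=5).mean()
    print(f"{name:11s} training: CV accuracy = {acc:.2f}")

# Variable importance, analogous to identifying key inputs such as the
# compound topographic index or LiDAR height metrics.
rf.fit(object_features, labels)
print("feature importances:", np.round(rf.feature_importances_, 2))
```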

  17. Improved Wetland Classification Using Eight-Band High Resolution Satellite Imagery and a Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Charles R. Lane

    2014-12-01

    Full Text Available Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of wetland maps derived with moderate resolution imagery and traditional techniques have been limited and often unsatisfactory. We explored and evaluated the utility of a newly launched high-resolution, eight-band satellite system (WorldView-2; WV2) for identifying and classifying freshwater deltaic wetland vegetation and aquatic habitats in the Selenga River Delta of Lake Baikal, Russia, using a hybrid approach and a novel application of Indicator Species Analysis (ISA). We achieved an overall classification accuracy of 86.5% (Kappa coefficient: 0.85) for 22 classes of aquatic and wetland habitats and found that additional metrics, such as the Normalized Difference Vegetation Index and image texture, were valuable for improving the overall classification accuracy and particularly for discriminating among certain habitat classes. Our analysis demonstrated that including WV2’s four spectral bands from parts of the spectrum less commonly used in remote sensing analyses, along with the more traditional bandwidths, increased the overall classification accuracy by ~4%, with considerable increases in our ability to discriminate certain communities. The coastal band improved differentiation of open water and aquatic (i.e., vegetated) habitats, and the yellow, red-edge, and near-infrared 2 bands improved discrimination among different vegetated aquatic and terrestrial habitats. The use of ISA provided statistical rigor in developing associations between spectral classes and field-based data. Our analyses demonstrated the utility of a hybrid approach and the benefit of additional bands and metrics in providing the first spatially explicit mapping of a large and heterogeneous wetland system.

  18. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
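
    The record describes the plot only conceptually. The sketch below uses one plausible construction, which is an assumption rather than the author's exact definition: for each sample, the distance to its nearest same-class neighbour is taken as "similarity" and the distance to its nearest other-class neighbour as "dissimilarity".

```python
# Plausible construction of a similarity-dissimilarity plot (assumed definition).
import numpy as np
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(1.5, 1, (100, 20))])
y = np.repeat([0, 1], 100)

D = cdist(X, X)                       # pairwise Euclidean distances in feature space
np.fill_diagonal(D, np.inf)           # ignore self-distance

same = np.where(y[:, None] == y[None, :], D, np.inf)
diff = np.where(y[:, None] != y[None, :], D, np.inf)
similarity = same.min(axis=1)         # distance to nearest same-class point
dissimilarity = diff.min(axis=1)      # distance to nearest other-class point

plt.scatter(similarity, dissimilarity, c=y, cmap="coolwarm", s=12)
plt.axline((0, 0), slope=1, color="k", lw=0.8)   # points below the line are likely confusions
plt.xlabel("similarity (nearest same-class distance)")
plt.ylabel("dissimilarity (nearest other-class distance)")
plt.show()
```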

  19. Object oriented classification of high resolution data for inventory of horticultural crops

    Science.gov (United States)

    Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.

    2014-11-01

    High resolution satellite images are associated with large variance and thus, per-pixel classifiers often result in poor accuracy, especially in the delineation of horticultural crops. In this context, object oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump functions, resulting in a pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating the thematic layer in vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantation crops, viz., mango, banana, citrus, coffee and coconut, grown under different agro-climatic conditions. In general, a classification accuracy of about 75-80 per cent was achieved for these crops using object based classification alone, and the same was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with the object oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately, while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signature, wider spacing and poor stands of plantations. The results indicated the potential use of the object oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops. The present methodology is applicable at

  20. Key technologies for high-accuracy large mesh antenna reflectors

    Science.gov (United States)

    Meguro, Akira; Harada, Satoshi; Watanabe, Mitsunobu

    2003-12-01

    Nippon Telegraph and Telephone Corporation (NTT) continues to develop the modular mesh-type deployable antenna. The antenna diameter can be changed from 5 m to about 20 m by changing the number of modules used, with a surface accuracy better than 2.4 mm RMS (including all error factors) and sufficient deployment reliability. Key technologies are the antenna's structural design, the deployment mechanism, the design tool, the analysis tool, and modularized testing/evaluation methods. This paper describes our beam steering mechanism. Tests show that it yields a beam pointing accuracy of better than 0.1°. Based on the S-band modular mesh antenna reflector, the surface accuracy degradation factors that must be considered in designing the new antenna are partially identified. The influence of modular connection errors on surface accuracy is quantitatively estimated. Our analysis tool SPADE is extended to include the addition of joint gaps. The addition of gaps allows non-linear vibration characteristics due to gapping in deployment hinges to be calculated. We intend to design a new type of mesh antenna reflector. Our new goal is an antenna for Ku- or Ka-band satellite communication. For this mission, the surface shape must be 5 times more accurate than is required for an S-band antenna.

  1. CLASSIFICATION OF LIDAR DATA FOR GENERATING A HIGH-PRECISION ROADWAY MAP

    Directory of Open Access Journals (Sweden)

    J. Jeong

    2016-06-01

    Full Text Available The generation of a highly precise map is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for an autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and they will be utilized to generate a highly precise road map.

  2. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    Science.gov (United States)

    Jeong, J.; Lee, I.

    2016-06-01

    The generation of a highly precise map is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for an autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and they will be utilized to generate a highly precise road map.
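
    As a rough illustration of the normal-based geometric features and SVM classification mentioned in these two records, the sketch below estimates a surface normal per point from its local neighbourhood and trains an SVM on a few derived features. The point cloud, class labels, neighbourhood size and feature set are assumptions, not the paper's.

```python
# Hypothetical sketch: per-point surface normals from local PCA, simple geometric
# features, and an SVM classifier on labelled points.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
points = rng.uniform(0, 10, (2000, 3))          # simulated Lidar returns (x, y, z)
labels = rng.integers(0, 3, size=2000)          # e.g. road surface, curb, vegetation

tree = cKDTree(points)

def point_features(i, k=15):
    _, idx = tree.query(points[i], k=k)
    nbrs = points[idx] - points[idx].mean(axis=0)
    # eigenvector of the smallest covariance eigenvalue ~ local surface normal
    w, v = np.linalg.eigh(np.cov(nbrs.T))
    normal = v[:, 0]
    verticality = abs(normal[2])                # close to 1 for horizontal surfaces (roads)
    planarity = (w[1] - w[0]) / max(w[2], 1e-9)
    return [verticality, planarity, points[i, 2]]

X = np.array([point_features(i) for i in range(len(points))])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 2))
```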

  3. MARVIN : high speed 3D imaging for seedling classification

    NARCIS (Netherlands)

    Koenderink, N.J.J.P.; Wigham, M.L.I.; Golbach, F.B.T.F.; Otten, G.W.; Gerlich, R.J.H.; Zedde, van de H.J.

    2009-01-01

    The next generation of automated sorting machines for seedlings demands 3D models of the plants to be made at high speed and with high accuracy. In our system the 3D plant model is created based on the information of 24 RGB cameras. Our contribution is an image acquisition technique based on

  4. Combined Saliency with Multi-Convolutional Neural Network for High Resolution Remote Sensing Scene Classification

    Directory of Open Access Journals (Sweden)

    HE Xiaofei

    2016-09-01

    Full Text Available The scene information existing in high resolution remote sensing images is important for image interpretation and understanding of the real world. Traditional scene classification methods often use middle- and low-level hand-crafted features, but high resolution images have rich information and complex scene configurations, which need high-level features to express. A joint saliency and multi-convolutional neural network method is proposed in this paper. Firstly, we obtain meaningful patches that include dominant image information by saliency sampling. Secondly, these patches are fed as training samples to the convolutional neural network to obtain feature representations at different levels. Finally, we embed the multi-layer features into a support vector machine (SVM) for image classification. Experiments using two high resolution image scene datasets show that saliency sampling can effectively capture the main targets, weaken the impact of other unrelated targets, and reduce data redundancy; the convolutional neural network can automatically learn high-level features, and compared to existing methods, the proposed method can effectively improve the classification accuracy.
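
    A very rough sketch of that pipeline structure (saliency-guided patch sampling, CNN feature extraction, SVM on the pooled features) is shown below. The saliency measure, network architecture and patch sizes are all assumptions; the network is left untrained here, whereas the paper trains it on the sampled patches.

```python
# Rough sketch of the assumed pipeline structure (not the authors' implementation):
# saliency-guided patch sampling, a small CNN as feature extractor, SVM on the features.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(6)
scene = rng.random((256, 256, 3)).astype(np.float32)     # one simulated RGB scene

# 1) crude saliency: local gray-level variance; sample patch centres where it is high
gray = scene.mean(axis=2)
half = 16                                                # 32x32 patches
centres = [(r, c) for r in range(half, 256 - half, 16) for c in range(half, 256 - half, 16)]
saliency = [gray[r - half:r + half, c - half:c + half].var() for r, c in centres]
top = np.argsort(saliency)[-64:]                         # keep the 64 most salient patches
patches = np.stack([scene[r - half:r + half, c - half:c + half]
                    for r, c in (centres[i] for i in top)])

# 2) small CNN used as a feature extractor (untrained here; trained in the paper)
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
with torch.no_grad():
    feats = cnn(torch.from_numpy(patches).permute(0, 3, 1, 2)).numpy()

# 3) one feature vector per scene (mean over patches); an sklearn SVC fitted on many
#    such scene vectors and their labels would complete the classifier.
scene_vector = feats.mean(axis=0, keepdims=True)
print("scene feature vector shape:", scene_vector.shape)
```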

  5. High Accuracy and Real-Time Gated Viewing Laser Radar

    Institute of Scientific and Technical Information of China (English)

    Dong Li; Hua-Jun Yang; Shan-Pei Zhou

    2011-01-01

    A gated viewing laser radar has an excellent performance in underwater low light level imaging, and it also provides a viable solution to inhibit backscattering. In this paper, a gated viewing imaging system meeting the demand for real-time imaging is presented, and then simulation is used to analyze the performance of the real-time gated viewing system. The range accuracy performance is limited by the slice number, the width of the gate, the delay time step, the initial delay time, as well as the system noise and atmospheric turbulence. The simulation results indicate that the highest range accuracy can be achieved when the system works with the optimal parameters. Finally, how to choose the optimal parameters has been researched.

  6. Individual Urban Tree Species Classification Using Very High Spatial Resolution Airborne Multi-Spectral Imagery Using Longitudinal Profiles

    Directory of Open Access Journals (Sweden)

    Baoxin Hu

    2012-06-01

    Full Text Available Individual tree species identification is important for urban forest inventory and ecology management. Recent advances in remote sensing technologies facilitate more detailed estimation of individual urban tree characteristics. This study presents an approach to improve the classification of individual tree species via longitudinal profiles from very high spatial resolution airborne imagery. The longitudinal profiles represent the side-view tree shape, which plays a very important role in on-site identification of individual tree species. Decision tree classification was employed to produce the final classification result. Using this profile approach, six major tree species (Maple, Ash, Birch, Oak, Spruce, Pine) on the York University (Ontario, Canada) campus were successfully identified. Two decision trees were constructed, one knowledge-based and one derived from gain ratio criteria. The classification accuracies achieved were 84% and 86%, respectively.

  7. Classification of high spatial resolution imagery using optimal Gabor-filters-based texture features

    Science.gov (United States)

    Zhao, Yindi; Wu, Bo

    2007-06-01

    Texture analysis has received great attention in the interpretation of high-resolution satellite images. This paper aims to find optimal filters for discriminating between residential areas and other land cover types in high spatial resolution satellite imagery. Moreover, in order to reduce the blurring border effect, inherent in texture analysis, which introduces important errors in the transition areas between different texture units, a classification procedure is designed for such high spatial resolution satellite images as follows. Firstly, residential areas are detected using Gabor texture features: two clusters, one residential and the other not, are identified with the fuzzy C-means algorithm in the frequency space defined by the Gabor filters. Subsequently, a mask is generated to eliminate residential areas so that other land-cover types can be classified accurately, without interference from the spectrally heterogeneous residential areas. Afterwards, the remaining objects are classified using spectral features by the MAP (maximum a posteriori) - ICM (iterated conditional mode) classification algorithm, designed to enforce spatial constraints in the classification. Experimental results on high spatial resolution remote sensing data confirm that the proposed algorithm provides remarkably better detection accuracy than conventional approaches in terms of both objective measurements and visual evaluation.
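
    The first step (Gabor texture responses followed by a two-cluster grouping) could look roughly like the sketch below. The image is simulated, the filter frequency and orientations are assumed, and plain KMeans is used as a simple stand-in for the fuzzy C-means clustering named in the record.

```python
# Illustrative sketch: Gabor texture responses at several orientations, then a
# 2-cluster grouping to separate residential-like texture (KMeans stands in for
# the paper's fuzzy C-means).
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
image = rng.random((128, 128))                 # simulated single-band image

responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    responses.append(np.sqrt(real ** 2 + imag ** 2))   # magnitude of the filter response
texture = np.stack(responses, axis=-1)         # one texture feature vector per pixel

pixels = texture.reshape(-1, texture.shape[-1])
cluster = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
residential_mask = cluster.reshape(image.shape)   # one of the two clusters ~ residential
print("pixels flagged in cluster 1:", int(residential_mask.sum()))
```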

  8. Security classification of information concerning high-energy lasers. Instruction

    Energy Technology Data Exchange (ETDEWEB)

    MacCallum, J.

    1981-09-18

    The Instruction reissues Department of Defense (DoD) Instruction 5210.61, April 7, 1977, to update policy and guidance, and establishes uniform criteria for the security classification of information concerning DoD programs and projects involving the research, development, test and evaluation (RDT&E), application, production, and operational use of high-energy lasers (HEL), and their application for military purposes, whether as weapons or in other military systems.

  9. High Accuracy Thermal Expansion Measurement at Cryogenic Temperatures

    Science.gov (United States)

    Tucker, Jim; Despit, Gregory; Stallcup, Michael; Presson, Joan; Nein, Max

    2003-01-01

    A new, interferometer-based system for measuring thermal expansion to an absolute accuracy of 20 ppb or better at cryogenic temperatures has been developed. Data from NIST Copper SRM 736 measured from room temperature to 15 K will be presented along with data from many other materials including beryllium, ULE, Zerodur, and composite materials. Particular attention will be given to a study by the Space Optics Manufacturing Technology Center (SOMTC) investigating the variability of ULE and beryllium materials used in the AMSD program. Approximately 20 samples of each material, tested from room temperature to below 30 K, are compared as a function of billet location.

  10. Automatic Building Detection based on Supervised Classification using High Resolution Google Earth Images

    Science.gov (United States)

    Ghaffarian, S.; Ghaffarian, S.

    2014-08-01

    This paper presents a novel approach to building detection through automation of the training area collection stage for supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out the shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double thresholding technique. Further, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating the statistical values of each buffer zone collected from the building areas, an Improved Parallelepiped Supervised Classification is executed to detect the buildings. Standard deviation thresholding was applied to the parallelepiped classification method to improve its accuracy. Finally, simple morphological operations were conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measurements (Precision, Recall and F1-score) to evaluate the pixel-based and object-based performance of the proposed approach. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.4% and 85.3% overall pixel-based and object-based precision performances, respectively.
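
    The shadow-masking step could be sketched as below. The exact double-thresholding rule of the paper is not reproduced; two luminance percentiles and a simple region-growing pass are used purely for illustration on simulated data.

```python
# Simplified, assumed sketch of LAB-luminance shadow masking with two thresholds.
import numpy as np
from skimage.color import rgb2lab

rng = np.random.default_rng(8)
rgb = rng.random((200, 200, 3))                # simulated aerial RGB tile

L = rgb2lab(rgb)[..., 0]                       # lightness channel of the LAB colour space
t_low, t_high = np.percentile(L, 10), np.percentile(L, 25)

core = L < t_low                               # surely-shadow pixels
candidate = L < t_high                         # possibly-shadow pixels
# keep candidate pixels only if they connect to a core pixel (simple 4-neighbour growth)
shadow = core.copy()
for _ in range(10):                            # a few dilate-and-mask passes
    grown = shadow.copy()
    grown[1:, :] |= shadow[:-1, :]
    grown[:-1, :] |= shadow[1:, :]
    grown[:, 1:] |= shadow[:, :-1]
    grown[:, :-1] |= shadow[:, 1:]
    shadow = grown & candidate
print("shadow pixels:", int(shadow.sum()))
```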

  11. Design of a high linearity and high gain accuracy analog baseband circuit for DAB receiver

    Science.gov (United States)

    Li, Ma; Zhigong, Wang; Jian, Xu; Yiqiang, Wu; Junliang, Wang; Mi, Tian; Jianping, Chen

    2015-02-01

    An analog baseband circuit of high linearity and high gain accuracy for a digital audio broadcasting receiver is implemented in a 0.18-μm RFCMOS process. The circuit comprises a 3rd-order active-RC complex filter (CF) and a programmable gain amplifier (PGA). An automatic tuning circuit is also designed to tune the CF's pass band. Instead of the class-A fully differential operational amplifier (FDOPA) adopted in the conventional CF and PGA design, a class-AB FDOPA is specially employed in this circuit to achieve a higher linearity and gain accuracy for its large current swing capability with lower static current consumption. In the PGA circuit, a novel DC offset cancellation technique based on the MOS resistor is introduced to reduce the settling time significantly. A reformative switching network is proposed, which can eliminate the switch resistor's influence on the gain accuracy of the PGA. The measurement result shows the gain range of the circuit is 10-50 dB with a 1-dB step size, and the gain accuracy is less than ±0.3 dB. The OIP3 is 23.3 dBm at the gain of 10 dB. Simulation results show that the settling time is reduced from 100 to 1 ms. The image band rejection is about 40 dB. It only draws 4.5 mA current from a 1.8 V supply voltage.

  12. Aerosol classification by airborne high spectral resolution lidar observations

    Directory of Open Access Journals (Sweden)

    S. Groß

    2012-10-01

    Full Text Available During four aircraft field experiments with the DLR research aircraft Falcon in 1998 (LACE), 2006 (SAMUM-1) and 2008 (SAMUM-2 and EUCAARI), airborne High Spectral Resolution Lidar (HSRL) and in situ measurements of aerosol microphysical and optical properties were performed. Altogether, the properties of six different aerosol types and aerosol mixtures – Saharan mineral dust, Saharan dust mixtures, Canadian biomass burning aerosol, African biomass burning aerosol, anthropogenic pollution aerosol, and marine aerosol – have been studied. On the basis of this extensive HSRL data set, we present an aerosol classification scheme which is also capable of identifying mixtures of different aerosol types. We calculated mixing lines that allowed us to determine the contributing aerosol types. The aerosol classification scheme was validated with in-situ measurements and backward trajectory analyses. Our results demonstrate that the developed aerosol mask is capable of identifying complex stratifications with different aerosol types throughout the atmosphere.

  13. Aerosol classification by airborne high spectral resolution lidar observations

    Science.gov (United States)

    Groß, S.; Esselborn, M.; Weinzierl, B.; Wirth, M.; Fix, A.; Petzold, A.

    2013-03-01

    During four aircraft field experiments with the DLR research aircraft Falcon in 1998 (LACE), 2006 (SAMUM-1) and 2008 (SAMUM-2 and EUCAARI), airborne High Spectral Resolution Lidar (HSRL) and in situ measurements of aerosol microphysical and optical properties were performed. Altogether, the properties of six different aerosol types and aerosol mixtures - Saharan mineral dust, Saharan dust mixtures, Canadian biomass burning aerosol, African biomass burning mixture, anthropogenic pollution aerosol, and marine aerosol - have been studied. On the basis of this extensive HSRL data set, we present an aerosol classification scheme which is also capable of identifying mixtures of different aerosol types. We calculated mixing lines that allowed us to determine the contributing aerosol types. The aerosol classification scheme was supported by backward trajectory analysis and validated with in-situ measurements. Our results demonstrate that the developed aerosol mask is capable of identifying complex stratifications with different aerosol types throughout the atmosphere.

  14. Frequency Comparison of Two High-Accuracy Al+ Optical Clocks

    CERN Document Server

    Chou, C -W; Koelemeij, J C J; Wineland, D J; Rosenband, T

    2009-01-01

    We have constructed an optical clock with a fractional frequency inaccuracy of 8.6e-18, based on quantum logic spectroscopy of an Al+ ion. A simultaneously trapped Mg+ ion serves to sympathetically laser-cool the Al+ ion and detect its quantum state. The frequency of the 1S0->3P0 clock transition is compared to that of a previously constructed Al+ optical clock with a statistical measurement uncertainty of 7.0e-18. The two clocks exhibit a relative stability of 2.8e-15/sqrt(tau), and a fractional frequency difference of -1.8e-17, consistent with the accuracy limit of the older clock.

  15. Frequency Comparison of Two High-Accuracy Al+ Optical Clocks

    Science.gov (United States)

    Chou, C. W.; Hume, D. B.; Koelemeij, J. C. J.; Wineland, D. J.; Rosenband, T.

    2010-02-01

    We have constructed an optical clock with a fractional frequency inaccuracy of 8.6 × 10^-18, based on quantum logic spectroscopy of an Al+ ion. A simultaneously trapped Mg+ ion serves to sympathetically laser cool the Al+ ion and detect its quantum state. The frequency of the 1S0 ↔ 3P0 clock transition is compared to that of a previously constructed Al+ optical clock with a statistical measurement uncertainty of 7.0 × 10^-18. The two clocks exhibit a relative stability of 2.8 × 10^-15 τ^(-1/2), and a fractional frequency difference of -1.8 × 10^-17, consistent with the accuracy limit of the older clock.

  16. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    Science.gov (United States)

    Heidarinejad, Mohammad

    This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies and focuses modeling efforts on the most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is "Building classification can provide minimal required inputs for rapid and accurate energy simulations for a large number of buildings". The critical literature review indicated there is a lack of studies that (1) consider a synoptic point of view rather than the case study approach, (2) analyze the influence of different granularities of energy use, (3) identify key variables based on the heat transfer processes, and (4) automate the procedure to quantify model complexity with accuracy. Therefore, three dissertation objectives are designed to test the dissertation hypothesis: (1) develop different classes of buildings based on their energy use patterns, (2) develop different building energy simulation approaches for the identified classes of buildings to quantify tradeoffs between model accuracy and complexity, and (3) demonstrate the building simulation approaches for case studies. Penn State's and Harvard's campus buildings, as well as high performance LEED NC office buildings, are the test beds for this study to develop different classes of buildings. The campus buildings include detailed chilled water, electricity, and steam data, enabling classification of buildings into externally-load, internally-load, or mixed-load dominated. The energy use of internally-load dominated buildings is primarily a function of the internal loads and their schedules. Externally-load dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most of the commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate

  17. The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae: a large study in a Chinese cardiovascular center

    Directory of Open Access Journals (Sweden)

    Bai Zhigang

    2011-07-01

    Full Text Available Abstract Background The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae (RMCT) has not yet been investigated in a large study. Methods Clinical, hemodynamic, surgical, and pathological findings were reviewed for 242 patients with a preoperative diagnosis of RMCT that required mitral valvular surgery. Subjects were consecutive in-patients at Fuwai Hospital in 2002-2008. Patients were evaluated by transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE). RMCT cases were classified by location as anterior or posterior, and classified by degree as partial or complete RMCT, according to surgical findings. RMCT cases were also classified by pathology into four groups: myxomatous degeneration, chronic rheumatic valvulitis (CRV), infective endocarditis and others. Results Echocardiography showed that most patients had a flail mitral valve, moderate to severe mitral regurgitation, a dilated heart chamber, mild to moderate pulmonary artery hypertension and good heart function. The diagnostic accuracy for RMCT was 96.7% for TTE and 100% for TEE compared with surgical findings. Preliminary experiments demonstrated that the sensitivity and specificity of diagnosing anterior, posterior and partial RMCT were high, but the sensitivity of diagnosing complete RMCT was low. Surgical procedures for RMCT depended on the location of the ruptured chordae tendineae, with no relationship between surgical procedure and complete or partial RMCT. The echocardiographic characteristics of RMCT included valvular thickening, extended subvalvular chordae, echo enhancement, abnormal echo or vegetation, combined with aortic valve damage, in the four groups classified by pathology. The incidence of extended subvalvular chordae in the myxomatous group was higher than that in the other groups, and valve thickening in combination with AV damage in the CRV group was higher than that in the other

  18. Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps

    Science.gov (United States)

    Zlinszky, A.; Kania, A.

    2016-06-01

    Instead of assigning every map pixel to a single class, fuzzy classification includes information on the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
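
    As a rough illustration of the dominance-profile idea (per-pixel class probabilities simulated, binning scheme assumed), the sketch below bins pixels by the probability of their dominant class and reports the mean probability of every class within each bin.

```python
# Minimal sketch of a "dominance profile": bin pixels by the probability of their
# dominant class and record the mean probability of every class in each bin.
import numpy as np

rng = np.random.default_rng(9)
n_pixels, n_classes = 10000, 5
votes = rng.dirichlet(np.ones(n_classes) * 0.5, size=n_pixels)  # ensemble class probabilities

dominant = votes.argmax(axis=1)
certainty = votes.max(axis=1)                     # probability of the dominant class

bins = np.linspace(1 / n_classes, 1.0, 9)         # from chance level to full agreement
which_bin = np.digitize(certainty, bins)
for b in range(which_bin.max() + 1):
    members = which_bin == b
    if members.any():
        profile = votes[members].mean(axis=0)     # mean probability of every class in this bin
        print(f"bin {b}: {members.sum():5d} pixels, class profile {np.round(profile, 2)}")
```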

  19. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

    The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characterization of SAR intensity, there is considerably less research on interferometric SAR (InSAR) coherence statistics. And to our knowledge, all of the existing work that focuses on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types. But the properties of coherence may be different for different data resolutions and scene types. In this paper, we investigate the coherence statistics for high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes including buildings, trees, shadow and roads are selected as representatives of urban areas. Firstly, several regions are selected from the coherence map manually and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model has a better performance than the other distributions.
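
    One way such a model comparison could be set up is sketched below: the candidate distributions are fitted to the coherence values of a labelled region and ranked by log-likelihood. The data are simulated and the ranking criterion is an assumption; the study may use a different goodness-of-fit measure.

```python
# Illustrative sketch: fit several candidate distributions to the coherence values
# of one labelled region and rank them by log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
coherence = np.clip(rng.beta(4, 2, size=5000), 1e-6, 1 - 1e-6)   # simulated coherence in (0, 1)

candidates = {
    "Gaussian": stats.norm,
    "Rayleigh": stats.rayleigh,
    "Weibull": stats.weibull_min,
    "Beta": stats.beta,
    "Nakagami": stats.nakagami,
}
for name, dist in candidates.items():
    params = dist.fit(coherence)
    loglik = np.sum(dist.logpdf(coherence, *params))
    print(f"{name:9s} log-likelihood = {loglik:10.1f}")
```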

  20. Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy.

    Science.gov (United States)

    Combrisson, Etienne; Jerbi, Karim

    2015-07-30

    Machine learning techniques are increasingly used in neuroscience to classify brain signals. Decoding performance is reflected by how much the classification results depart from the rate achieved by purely random classification. In a 2-class or 4-class classification problem, the chance levels are thus 50% or 25% respectively. However, such thresholds hold for an infinite number of data samples but not for small data sets. While this limitation is widely recognized in the machine learning field, it is unfortunately sometimes still overlooked or ignored in the emerging field of brain signal classification. Incidentally, this field is often faced with the difficulty of low sample size. In this study we demonstrate how applying signal classification to Gaussian random signals can yield decoding accuracies of up to 70% or higher in two-class decoding with small sample sets. Most importantly, we provide a thorough quantification of the severity and the parameters affecting this limitation using simulations in which we manipulate sample size, class number, cross-validation parameters (k-fold, leave-one-out and repetition number) and classifier type (Linear-Discriminant Analysis, Naïve Bayesian and Support Vector Machine). In addition to raising a red flag of caution, we illustrate the use of analytical and empirical solutions (binomial formula and permutation tests) that tackle the problem by providing statistical significance levels (p-values) for the decoding accuracy, taking sample size into account. Finally, we illustrate the relevance of our simulations and statistical tests on real brain data by assessing noise-level classifications in Magnetoencephalography (MEG) and intracranial EEG (iEEG) baseline recordings. Copyright © 2015 Elsevier B.V. All rights reserved.
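
    The analytical solution mentioned above is based on the binomial distribution. A minimal sketch of that idea is given below: the smallest accuracy that is significantly above chance, as a function of sample size and class number (the exact formula used in the paper may differ slightly).

```python
# Sketch of a binomial significance threshold for decoding accuracy: the accuracy
# needed to reject chance-level classification at a given alpha.
from scipy.stats import binom

def significant_accuracy(n_samples, n_classes, alpha=0.05):
    """Smallest accuracy that is significantly above chance for n_samples trials."""
    chance = 1.0 / n_classes
    # smallest k with P(X >= k) <= alpha under the chance-level binomial
    k = binom.ppf(1 - alpha, n_samples, chance) + 1
    return k / n_samples

for n in (20, 50, 100, 500):
    print(f"n = {n:3d}: 2-class threshold = {significant_accuracy(n, 2):.2f}, "
          f"4-class threshold = {significant_accuracy(n, 4):.2f}")
```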

  1. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments

    Science.gov (United States)

    Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk

    2016-07-01

    Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that the overall accuracies decline with increasing segmentation scale. All other investigated methods except for RF and SVM are more prone to lower accuracy due to broken objects at fine scales. In contrast to some previous studies, the RF classifiers yielded the best results and the k-nearest neighbor classifier the worst results in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The results of the training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it is suggested that RF should be considered in most cases for agricultural mapping.

  2. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and complicated coding strategies, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to an integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
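
    A conceptual sketch of the convolve-binarize-hash-histogram chain is given below. The filter bank is random here purely for brevity (the paper learns it from image patches), and the image, filter sizes and filter count are assumptions.

```python
# Conceptual sketch of the fast-binary-coding idea: convolve, binarize, hash the
# per-pixel bit patterns into integers, and histogram them as the scene descriptor.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(11)
image = rng.random((128, 128))                        # simulated single-band scene
n_filters, k = 8, 7
filters = rng.standard_normal((n_filters, k, k))      # stand-in for learned filters

maps = np.stack([convolve2d(image, f, mode="same") for f in filters])
bits = (maps > 0).astype(np.uint8)                    # binarize each feature map

# hash the 8 binary responses of every pixel into one integer in [0, 255]
weights = (2 ** np.arange(n_filters)).reshape(-1, 1, 1)
codes = (bits * weights).sum(axis=0)

descriptor, _ = np.histogram(codes, bins=2 ** n_filters, range=(0, 2 ** n_filters))
descriptor = descriptor / descriptor.sum()            # global scene representation
print("descriptor length:", descriptor.size)
```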

  3. Sensitivity analysis for high accuracy proximity effect correction

    Science.gov (United States)

    Thrun, Xaver; Browning, Clyde; Choi, Kang-Hoon; Figueiro, Thiago; Hohle, Christoph; Saib, Mohamed; Schiavone, Patrick; Bartha, Johann W.

    2015-10-01

    A sensitivity analysis (SA) algorithm was developed and tested to comprehend the influences of different test pattern sets on the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice. It allows attributing the variance of the output of a model to the sum of the variance of each input of the model and their correlated factors [1]. The objective of this development is to increase the accuracy of the resolved PSF model in the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of the development. Fraunhofer IPMS-CNT exposed the prepared data and examined the results to visualize the link between the sensitivities of the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, i.e., which PSF parameter is affected by the use of a certain pattern. Second, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which ranges of proximity effect correction are crucial for which portions of a real application pattern in electron beam exposure.

  4. Urban landscape classification using Chinese advanced high-resolution satellite imagery and an object-oriented multi-variable model

    Institute of Scientific and Technical Information of China (English)

    Li-gang MA; Jin-song DENG; Huai YANG; Yang HONG; Ke WANG

    2015-01-01

    The Chinese ZY-1 02C satellite is one of the most advanced high-resolution earth observation systems designed for terrestrial resource monitoring. Its capability for comprehensive landscape classification, especially in urban areas, has been under constant study. In view of the limited spectral resolution of the ZY-1 02C satellite (three bands), and the complexity and heterogeneity of urban environments, we attempt to test its performance in urban landscape classification by combining a multi-variable model with an object-oriented approach. Multiple variables including spectral reflection, texture, spatial autocorrelation, impervious surface fraction, vegetation, and geometry indexes were first calculated and selected using forward stepwise linear discriminant analysis and applied in the subsequent object-oriented classification process. A comprehensive accuracy assessment, which adopts traditional error matrices with stratified random samples and polygon area consistency (PAC) indexes, was then conducted to examine the real area agreement between a classified polygon and its references. Results indicated an overall classification accuracy of 92.63% and a kappa statistic of 0.9124. Furthermore, the proposed PAC index showed that more than 82% of all polygons were correctly classified. Misclassification occurred mostly between residential areas and barren/farmland. The presented method and the Chinese ZY-1 02C satellite imagery are robust and effective for urban landscape classification.

  5. Towards a multimodal brain-computer interface: combining fNIRS and fTCD measurements to enable higher classification accuracy.

    Science.gov (United States)

    Faress, Ahmed; Chau, Tom

    2013-08-15

    Previous brain-computer interface (BCI) research has largely focused on single neuroimaging modalities such as near-infrared spectroscopy (NIRS) or transcranial Doppler ultrasonography (TCD). However, multimodal brain-computer interfaces, which combine signals from different brain modalities, have been suggested as a potential means of improving the accuracy of BCI systems. In this paper, we compare the classification accuracies attainable using NIRS signals alone, TCD signals alone, and a combination of NIRS and TCD signals. Nine able-bodied subjects (mean age = 25.7) were recruited and simultaneous measurements were made with NIRS and TCD instruments while participants were prompted to perform a verbal fluency task or to remain at rest, within the context of a block-stimulus paradigm. Using Linear Discriminant Analysis, the verbal fluency task was classified at mean accuracies of 76.1 ± 9.9%, 79.4 ± 10.3%, and 86.5 ± 6.0% using NIRS, TCD, and NIRS-TCD systems respectively. In five of nine participants, classification accuracies with the NIRS-TCD system were significantly higher than with either single modality, suggesting that multimodal approaches may improve the accuracy of future brain-computer interfaces. Copyright © 2013 Elsevier Inc. All rights reserved.
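
    The single- versus combined-modality comparison with Linear Discriminant Analysis could be sketched as below. The feature definitions and data are simulated placeholders, not the study's measurements.

```python
# Schematic comparison: LDA accuracy using NIRS features alone, TCD features alone,
# and the concatenated feature sets (simulated data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n_trials = 60
labels = rng.integers(0, 2, size=n_trials)            # verbal fluency vs. rest
nirs = rng.standard_normal((n_trials, 6)) + labels[:, None] * 0.8   # e.g. mean/slope of HbO/HbR
tcd = rng.standard_normal((n_trials, 4)) + labels[:, None] * 0.6    # e.g. blood-flow velocity stats

for name, X in (("NIRS", nirs), ("TCD", tcd), ("NIRS+TCD", np.hstack([nirs, tcd]))):
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"{name:9s}: CV accuracy = {acc:.2f}")
```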

  6. Distributed High Accuracy Peer-to-Peer Localization in Mobile Multipath Environments

    CERN Document Server

    Ekambaram, Venkatesan

    2010-01-01

    In this paper we consider the problem of high accuracy localization of mobile nodes in a multipath-rich environment where sub-meter accuracies are required. We employ a peer-to-peer framework in which the vehicles/nodes can obtain pairwise multipath-degraded ranging estimates in local neighborhoods, together with a fixed number of anchor nodes. The challenge is to overcome the multipath barrier with redundancy in order to provide the desired accuracies, especially under severe multipath conditions when the fraction of received signals corrupted by multipath is dominant. We invoke a message-passing analytical framework based on particle filtering and reveal its promise for high accuracy localization through simulations.

  7. Automated Classification and Correlation of Drill Cores using High-Resolution Hyperspectral Images and Supervised Pattern Classification Algorithms. Applications to Paleoseismology

    Science.gov (United States)

    Ragona, D. E.; Minster, B.; Rockwell, T.; Jasso, H.

    2006-12-01

    The standard methodology to describe, classify and correlate geologic materials in the field or lab relies on physical inspection of samples, sometimes with the assistance of conventional analytical techniques (e.g., XRD, microscopy, particle size analysis). This is commonly both time-consuming and inherently subjective. Many geological materials share identical visible properties (e.g., fine grained materials, alteration minerals) and therefore cannot be mapped using the human eye alone. Recent investigations have shown that ground-based hyperspectral imaging provides an effective method to study and digitally store stratigraphic and structural data from cores or field exposures. Neural networks and Naive Bayesian classifiers supply a variety of well-established techniques for pattern recognition, especially for examples with high-dimensionality inputs and outputs. In this poster, we present a new methodology for automatic mapping of sedimentary stratigraphy in the lab (drill cores, samples) or the field (outcrops, exposures) using short wave infrared (SWIR) hyperspectral images and these two supervised classification algorithms. High-spatial/spectral resolution data from large sediment samples (drill cores) from a paleoseismic excavation site were collected using a portable hyperspectral scanner with 245 continuous channels measured across the 960 to 2404 nm spectral range. The data were corrected for geometric and radiometric distortions and pre-processed to obtain reflectance at each pixel of the images. We built an example set using hundreds of reflectance spectra collected from the sediment core images. The examples were grouped into eight classes corresponding to materials found in the samples. We constructed two additional example sets by computing the 2-norm normalization and the derivative of the smoothed original reflectance examples. Each example set was divided into four subsets: training, training test, verification and validation. A multi
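
    A minimal sketch of the Naive Bayes branch of such a workflow is given below, using simulated SWIR spectra and eight material classes; the simulated data, noise level and split sizes are illustrative only, not the drill-core dataset.

```python
# Minimal sketch: classifying SWIR reflectance spectra into material classes with a
# Naive Bayes classifier, one of the two supervised approaches named in the record.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)
n_bands, n_classes, per_class = 245, 8, 60            # 245 SWIR channels, 8 material classes
centres = rng.random((n_classes, n_bands))
X = np.vstack([c + rng.normal(0, 0.05, (per_class, n_bands)) for c in centres])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = GaussianNB().fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 2))
```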

  8. High-accuracy Subdaily ERPs from the IGS

    Science.gov (United States)

    Ray, J. R.; Griffiths, J.

    2012-04-01

    Since November 2000 the International GNSS Service (IGS) has published Ultra-rapid (IGU) products for near real-time (RT) and true real-time applications. They include satellite orbits and clocks, as well as Earth rotation parameters (ERPs) for a sliding 48-hr period. The first day of each update is based on the most recent GPS and GLONASS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions. So the predictions between about 3 and 9 hr into the second half are relevant for true RT uses. Originally updated twice daily, the IGU products since April 2004 have been issued every 6 hr, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations. Two sets of ERPs are published with each IGU update, observed values at the middle epoch of the first half and predicted values at the middle epoch of the second half. The latency of the near RT ERPs is 15 hr while the predicted ERPs, based on projections of each AC's most recent determinations, are issued 9 hr ahead of their reference epoch. While IGU ERPs are issued every 6 hr, each set represents an integrated estimate over the surrounding 24 hr. So successive values are temporally correlated with about 75% of the data being common; this fact should be taken into account in user assimilations. To evaluate the accuracy of these near RT and predicted ERPs, they have been compared to the IGS Final ERPs, available about 11 to 17 d after data collection. The IGU products improved dramatically in the earlier years but since about 2008.0 the performance has been stable and excellent. During the last three years, RMS differences for the observed IGU ERPs have been about 0.036 mas and 0.0101 ms for each polar motion component and LOD respectively. (The internal precision of the reference IGS ERPs over the same period is about 0.016 mas for polar motion and 0

  9. High spatial resolution image object classification for terrestrial oil spill contamination mapping in West Siberia

    Science.gov (United States)

    Hese, S.; Schmullius, C.

    2009-04-01

    This work is part of the OSCaR pilot study (Oil Spill Contamination mapping in Russia). A synergetic concept for an object-based and multi-temporal mapping and classification system for terrestrial oil spill pollution, using a test area in West Siberia, is presented. An object-oriented image classification system is created to map contaminated soils, vegetation and changes in the oil exploration well infrastructure in high resolution data. Due to the limited spectral resolution of Quickbird data, context information and image object structure are used as additional features, building a structural object knowledge base for the area. The distance of potentially polluted areas to industrial land use and infrastructure objects is utilized to classify crude oil contaminated surfaces. Additionally, the potential of Landsat data for dating of oil spill events using change indicators is tested with multi-temporal Landsat data from 1987, 1995 and 2001. OSCaR defined three sub-projects: (1) high resolution mapping of crude oil contaminated surfaces, (2) mapping of industrial infrastructure change, and (3) dating of oil spill events using multi-temporal Landsat data. Validation of the contamination mapping results has been done with field data from Russian experts provided by the Yugra State University in Khanty-Mansiyskiy. The developed image object structure classification system has shown good results for the severely polluted areas, with good overall classification accuracy. However, it has also revealed the need for direct mapping of hydrocarbon substances. Oil spill event dating with Landsat data was very much limited by the low spatial resolution of Landsat TM 5 data, the small-scale character of oil-spilled surfaces and limited information about oil spill dates.

  10. A study for high accuracy real-time 3D ultrasonic location system.

    Science.gov (United States)

    Zhou, Ping; Ha, Zhang; Zhou, Kangyuan

    2006-12-22

    This article discusses a high accuracy real-time 3D ultrasonic location system. The received signal was sampled after passing through the TGC and the logarithmic amplifier. Inside the DSP, a dynamic threshold tracing technique was used to improve the accuracy, and the result was processed with a weighted arithmetic average. Tests of the 40 kHz 3D location system achieved an accuracy of 1 cm.

  11. Accuracy of Handheld Blood Glucose Meters at High Altitude

    NARCIS (Netherlands)

    de Mol, Pieter; Krabbe, Hans G.; de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.

    2010-01-01

    Background: Due to increasing numbers of people with diabetes taking part in extreme sports (e. g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior st

  12. Development of high accuracy and resolution geoid and gravity maps

    Science.gov (United States)

    Gaposchkin, E. M.

    1986-01-01

    Precision satellite to satellite tracking can be used to obtain high precision and resolution maps of the geoid. A method is demonstrated to use data in a limited region to map the geopotential at the satellite altitude. An inverse method is used to downward continue the potential to the Earth surface. The method is designed for both satellites in the same low orbit.

  13. [The role of RPA classification in the treatment of high-grade gliomas].

    Science.gov (United States)

    Izmaĭlov, T R; Pan'shin, G A; Datsenko, P V; Zotov, V K

    2012-01-01

    Morphology remains one of the main criteria for the development of treatment programs for gliomas, while the RPA classification, with its risk factors developed for high-grade tumors, is rarely taken into consideration. Our study shows that the classification into six RPA classes has high value with respect to the criterion of overall survival. The most important factors in the RPA classification are the patient's age and the Karnofsky performance scale value. The RPA classification can be useful for the development of new treatment strategies.

  14. High-Resolution Remote Sensing Data Classification over Urban Areas Using Random Forest Ensemble and Fully Connected Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Xiaofeng Sun

    2017-08-01

    Full Text Available As an intermediate step between raw remote sensing data and digital maps, remote sensing data classification has been a challenging and long-standing problem in the remote sensing research community. In this work, an automated and effective supervised classification framework is presented for classifying high-resolution remote sensing data. Specifically, the presented method proceeds in three main stages: feature extraction, classification, and classified result refinement. In the feature extraction stage, both multispectral images and 3D geometry data are used, which utilizes the complementary information from multisource data. In the classification stage, to tackle the problems associated with too many training samples and take full advantage of the information in the large-scale dataset, a random forest (RF) ensemble learning strategy is proposed by combining several RF classifiers together. Finally, an improved fully connected conditional random field (FCCRF) graph model is employed to derive the contextual information to refine the classification results. Experiments on the ISPRS Semantic Labeling Contest dataset show that the presented 3-stage method achieves 86.9% overall accuracy, a new state of the art among non-CNN (convolutional neural network)-based classification methods.
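    To make the ensemble stage concrete, the following is a minimal sketch (Python with scikit-learn, using synthetic feature and label arrays in place of the paper's multispectral/3D features) of combining several random forest classifiers trained on chunks of a large training set by soft voting; the feature extraction and FCCRF refinement stages are not shown.

    # Minimal sketch: combine several RandomForest classifiers trained on
    # disjoint chunks of a large training set and average their predicted
    # class probabilities (soft voting).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_rf_ensemble(X, y, n_members=5, trees_per_member=100, seed=0):
        rng = np.random.default_rng(seed)
        members = []
        chunks = np.array_split(rng.permutation(len(X)), n_members)
        for idx in chunks:
            rf = RandomForestClassifier(n_estimators=trees_per_member,
                                        n_jobs=-1, random_state=seed)
            rf.fit(X[idx], y[idx])
            members.append(rf)
        return members

    def predict_rf_ensemble(members, X):
        # Average the per-member class probabilities and take the arg-max.
        proba = np.mean([rf.predict_proba(X) for rf in members], axis=0)
        return np.argmax(proba, axis=1)

    if __name__ == "__main__":
        # Synthetic stand-in for per-pixel feature vectors and labels (6 classes).
        X = np.random.rand(10000, 20)
        y = np.random.randint(0, 6, size=10000)
        ensemble = train_rf_ensemble(X, y)
        print(predict_rf_ensemble(ensemble, X[:10]))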

  15. High accuracy magnetic field sensors with wide operation temperature range

    Science.gov (United States)

    Vasil'evskii, I. S.; Vinichenko, A. N.; Rubakin, D. I.; Bolshakova, I. A.; Kargin, N. I.

    2016-10-01

    n+InAs(Si) epitaxial thin films heavily doped with silicon, and Hall effect magnetic field sensors based on these structures, have been fabricated and studied. We have demonstrated the successful formation of highly doped InAs thin films (∼100 nm) with different intermediate layer arrangements and appropriate electron mobility values. The performance parameters of the Hall sensors have been measured over a wide temperature range. The obtained sensitivity varied from 1 to 40 Ω/T, while the best linearity and the lowest temperature coefficient were found in the more highly doped samples with lower electron mobility. We attribute this to the degeneracy of the electron system and the decreased phonon contribution to electron mobility and resistance.

  16. A High Accuracy Method for Semi-supervised Information Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.

    2007-04-22

    Customization to specific domains of discourse and/or user requirements is one of the greatest challenges for today's Information Extraction (IE) systems. While demonstrably effective, both rule-based and supervised machine learning approaches to IE customization pose too high a burden on the user. Semi-supervised learning approaches may in principle offer a more resource-effective solution but are still insufficiently accurate to grant realistic application. We demonstrate that this limitation can be overcome by integrating fully-supervised learning techniques within a semi-supervised IE approach, without increasing resource requirements.

  17. Traffic Sign Recognition with High Accuracy Using Mixture of Experts

    Directory of Open Access Journals (Sweden)

    Reza Azad

    2014-06-01

    Full Text Available Traffic signs provide the driver with various information for safe and efficient navigation. Automatic recognition of traffic signs is, therefore, important for automated driving or driver assistance systems. In this paper, a new and efficient traffic sign recognition system is proposed, based on extracting a diverse feature set and applying a mixture-of-experts architecture to the extracted features. The proposed approach is evaluated on the German traffic sign recognition benchmark and the Grigorescu traffic sign benchmark, and a high recognition rate is achieved. Comparison with some of the most closely related methods indicates that the proposed model yields an excellent recognition rate in traffic sign recognition: 99.94% on the training set and 98.50% on the test set. In addition, experimental results demonstrate that our method is robust in successfully recognizing traffic signs even under varying lighting.

  18. Gated viewing and high-accuracy three-dimensional laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a fast and high-accuracy three-dimensional (3-D) imaging laser radar that can achieve better than 1 mm range accuracy for half a million pixels in less than 1 s. Our technique is based on range-gating segmentation. We combine the advantages of gated viewing with our new fast...

  20. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work is aimed at delivering a high simulation throughput and, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed that is within a factor of 35 of hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications showed that the difference between the simulated and measured timing performance is within 10%, which previously could only be attained by cycle-accurate models.

  1. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
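    The chaos-game mechanism underlying the method can be illustrated with a short, generic sketch (Python with NumPy; the data and binning scheme are hypothetical and this is not the authors' Butterfly code): each feature of a subject selects one corner of the unit square and the current point moves halfway toward it, so the whole high-dimensional feature vector is folded into a 2D trajectory.

    # Generic chaos-game representation sketch: fold a subject's feature
    # vector into a 2D point cloud by repeatedly moving halfway toward the
    # corner selected by each (quantile-binned) feature value.
    import numpy as np

    CORNERS = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

    def chaos_game_trajectory(features, bins=4):
        # Quantile-bin each feature into one of the four corners.
        ranks = np.argsort(np.argsort(features)) / max(len(features) - 1, 1)
        corner_idx = np.minimum((ranks * bins).astype(int), bins - 1)
        point = np.array([0.5, 0.5])
        trajectory = []
        for c in corner_idx:
            point = (point + CORNERS[c]) / 2.0   # move halfway toward the corner
            trajectory.append(point.copy())
        return np.array(trajectory)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        subject = rng.normal(size=200)           # hypothetical high-dimensional subject
        traj = chaos_game_trajectory(subject)
        print(traj.shape, traj[:3])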

  2. The accuracy of QCD perturbation theory at high energies

    CERN Document Server

    Dalla Brida, Mattia; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2016-01-01

    We discuss the determination of the strong coupling $\\alpha_\\mathrm{\\overline{MS}}^{}(m_\\mathrm{Z})$ or equivalently the QCD $\\Lambda$-parameter. Its determination requires the use of perturbation theory in $\\alpha_s(\\mu)$ in some scheme, $s$, and at some energy scale $\\mu$. The higher the scale $\\mu$ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the $\\Lambda$-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to $\\alpha_s = 0.1$ and below. We find that perturbation theory is very accurate there, yielding a three percent error in the $\\Lambda$-parameter, while data around $\\alpha_s \\approx 0.2$ is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  3. Methodology of High Accuracy and Resolution 3D Geological Model Generation and Application

    Institute of Scientific and Technical Information of China (English)

    吴键; 曹代勇; 邓爱居; 李东津; 蒋涛; 翟光华

    2004-01-01

    By generating a high-accuracy and high-resolution geological model of the Liuchu oil field, the technique of geological modeling is extended into primary geological study, allowing the sand bodies and the reservoir to be described easily and in detail. 3D visualization and 3D interactive editing of the geological structure model are the keys to the modeling procedure. The resulting high-accuracy and high-resolution geological model has been applied successfully in optimizing the production scheme.

  4. Improving classification accuracy of spectrally similar tree species: a complex case study in the Kruger National Park

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available Within-species class variability can be reduced compared to the between-species class variability. Furthermore, two classification approaches with the spectral angle mapper are compared: (i) using a spectral library composed of one spectrum (endmember) per species and (ii) a multiple...

  5. Use of high dimensional model representation in dimensionality reduction: application to hyperspectral image classification

    Science.gov (United States)

    Taşkin, Gülşen

    2016-05-01

    Recently, information extraction from hyperspectral images (HI) has become an attractive research area for many practical applications in earth observation, because HI provides valuable information in a huge number of spectral bands. In order to process such a huge amount of data effectively, traditional methods may not provide satisfactory performance, because they mostly do not account for the high dimensionality of the data, which causes the curse of dimensionality, also known as the Hughes phenomenon. In supervised classification, this leads to poor generalization performance when only limited training samples are available. Therefore, advanced methods that account for the high dimensionality need to be developed in order to obtain good generalization capability. In this work, High Dimensional Model Representation (HDMR) was utilized for dimensionality reduction, and a novel feature selection method was introduced based on global sensitivity analysis. Several experiments were conducted on hyperspectral images in comparison with state-of-the-art feature selection algorithms in terms of classification accuracy, and the results showed that the proposed method outperforms the other feature selection methods with all of the considered classifiers, namely support vector machines, Bayes, and the decision tree J48.

  6. Analyzing the diagnostic accuracy of the causes of spinal pain at neurology hospital in accordance with the International Classification of Diseases

    Directory of Open Access Journals (Sweden)

    I. G. Mikhailyuk

    2014-01-01

    Full Text Available Spinal pain is of great socioeconomic significance as it is widely prevalent and a common cause of disability. However, the diagnosis of its true causes frequently leads to problems. A study was conducted to evaluate the accuracy of clinical diagnoses and their coding in conformity with the International Classification of Diseases. The diagnosis of vertebral osteochondrosis was found to be used unreasonably widely, while nonspecific and nonvertebrogenic pain syndromes were underdiagnosed. Ways to solve these problems have been proposed by applying approaches to diagnosing the causes of spinal pain in accordance with international practice.

  7. High-speed, high-accuracy large range 3D measurement

    Science.gov (United States)

    An, Yatong; Zhang, Song

    2017-05-01

    This paper presents a high-speed, high-accuracy structured light technique that achieves large range 3D shape measurement. The enabling method is our recently proposed system calibration that splits the calibration process into two stages. Specifically, we calibrate the intrinsic parameters at a near position with a regular-size yet precisely fabricated calibration target, and then calibrate the extrinsic parameters with the assistance of an additional large-range yet low-accuracy, low-cost 3D scanner (i.e., Kinect). We developed a system that achieved 500 Hz with a resolution of 2304 × 1400. The field of view (FOV) of our structured light system is 0.9 m (W) × 1.4 m (H) × 0.8 m (D). Our experimental data demonstrate that such a large range structured light system can achieve a mean error of 0.13 mm with a standard deviation of 1.18 mm when measuring a 304.8 mm diameter sphere. We further experimentally demonstrate that the proposed method can simultaneously measure multiple objects or large dynamically changing objects.

  8. High accuracy of arterial spin labeling perfusion imaging in differentiation of pilomyxoid from pilocytic astrocytoma

    Energy Technology Data Exchange (ETDEWEB)

    Nabavizadeh, S.A.; Assadsangabi, R.; Hajmomenian, M.; Vossough, A. [Perelman School of Medicine of the University of Pennsylvania, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA (United States); Santi, M. [Perelman School of Medicine of the University of Pennsylvania, Department of Pathology, Children's Hospital of Philadelphia, Philadelphia, PA (United States)

    2015-05-01

    Pilomyxoid astrocytoma (PMA) is a relatively new tumor entity which has been added to the 2007 WHO Classification of tumors of the central nervous system. The goal of this study is to utilize arterial spin labeling (ASL) perfusion imaging to differentiate PMA from pilocytic astrocytoma (PA). Pulsed ASL and conventional MRI sequences of patients with PMA and PA in the past 5 years were retrospectively evaluated. Patients with a history of radiation or treatment with anti-angiogenic drugs were excluded. A total of 24 patients (9 PMA, 15 PA) were included. There were statistically significant differences between PMA and PA in mean tumor/gray matter (GM) cerebral blood flow (CBF) ratios (1.3 vs 0.4, p < 0.001) and maximum tumor/GM CBF ratio (2.3 vs 1, p < 0.001). Area under the receiver operating characteristic (ROC) curves for differentiation of PMA from PA was 0.91 using mean tumor CBF, 0.95 using mean tumor/GM CBF ratios, and 0.89 using maximum tumor/GM CBF. Using a threshold value of 0.91, the mean tumor/GM CBF ratio was able to diagnose PMA with 77 % sensitivity and 100 % specificity, while a threshold value of 0.7 provided 88 % sensitivity and 86 % specificity. There was no statistically significant difference between the two tumors in enhancement pattern (p = 0.33), internal architecture (p = 0.15), or apparent diffusion coefficient (ADC) values (p = 0.07). ASL imaging has high accuracy in differentiating PMA from PA. The result of this study may have important applications in prognostication and treatment planning especially in patients with less accessible tumors such as hypothalamic-chiasmatic gliomas. (orig.)

  9. Highly comparative, feature-based time-series classification

    CERN Document Server

    Fulcher, Ben D

    2014-01-01

    A highly comparative, feature-based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are derived from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way results in orders of magnitude of dimensionality reduction, allowing the method to perform well on ve...
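    The selection step described above can be sketched as follows (Python with scikit-learn; the feature matrix is synthetic, standing in for the thousands of precomputed time-series features): greedy forward selection adds, at each round, the feature that most improves the cross-validated accuracy of a linear classifier.

    # Greedy forward feature selection over a precomputed feature matrix:
    # at each round, add the single feature that most improves the
    # cross-validated accuracy of a linear classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def greedy_forward_selection(X, y, max_features=10, cv=5):
        selected, best_scores = [], []
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            scores = []
            for f in remaining:
                cols = selected + [f]
                clf = LogisticRegression(max_iter=1000)
                scores.append(cross_val_score(clf, X[:, cols], y, cv=cv).mean())
            best = int(np.argmax(scores))
            if best_scores and scores[best] <= best_scores[-1]:
                break                               # no further improvement
            selected.append(remaining.pop(best))
            best_scores.append(scores[best])
        return selected, best_scores

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))              # hypothetical feature matrix
        y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)
        feats, accs = greedy_forward_selection(X, y, max_features=5)
        print(feats, [round(a, 3) for a in accs])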

  10. High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS

    OpenAIRE

    O.S.Kushnir; O.A.Bevz; O.G.Vlokh

    2000-01-01

    Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with the high-accuracy null-polarimetric technique. The behaviour of the effect in ferroelectric phase is referred to quadratic spontaneous electrooptics.

  11. High-accuracy interferometric measurements of flatness and parallelism of a step gauge

    CSIR Research Space (South Africa)

    Kruger, OA

    2001-01-01

    Full Text Available for the calibration of step gauges to a high accuracy. A system was also developed for interferometric measurements of the flatness and parallelism of gauge block faces for use in uncertainty calculations....

  12. High Accuracy Reference Network (HARN), Points generated from coordinates supplied by NGS, Published in 1993, MARIS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This High Accuracy Reference Network (HARN) dataset, was produced all or in part from Field Survey/GPS information as of 1993. It is described as 'Points generated...

  13. Data supporting the high-accuracy haplotype imputation using unphased genotype data as the references

    Directory of Open Access Journals (Sweden)

    Wenzhi Li

    2016-09-01

    Full Text Available The data presented in this article are related to the research article entitled "High-accuracy haplotype imputation using unphased genotype data as the references", which reports that unphased genotype data can be used as a reference for haplotype imputation [1]. This article describes the pipelines used to generate the different implementations and reports the results of performance comparisons between the implementations (A, B, and C) and between HiFi and three major imputation software tools. Our data showed that the performances of these three implementations are similar in accuracy, with the accuracy of implementation B slightly but consistently higher than that of A and C. HiFi performed better in haplotype imputation accuracy, while the three other software tools performed slightly better in genotype imputation accuracy. These data may provide a strategy for choosing the optimal phasing pipeline and software for different studies.

  15. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    Science.gov (United States)

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The sample consisted…

  16. High-Order Kinetic Relaxation Schemes as High-Accuracy Poisson Solvers

    CERN Document Server

    Mendoza, M; Herrmann, H J

    2015-01-01

    We present a new approach to find accurate solutions to the Poisson equation, obtained from the steady-state limit of a diffusion equation with strong source terms. For this purpose, we start from Boltzmann's kinetic theory and investigate the influence of higher order terms on the resulting macroscopic equations. By performing an appropriate expansion of the equilibrium distribution, we provide a method to remove the unnecessary terms up to a desired order and show that it is possible to find, with a high level of accuracy, the steady-state solution of the diffusion equation for sizeable Knudsen numbers. In order to test our kinetic approach, we discretise the Boltzmann equation and solve the Poisson equation, spending up to six orders of magnitude less computational time for a given precision than standard lattice Boltzmann methods.
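    The relation exploited here is that the steady-state limit of a diffusion equation with a source term is a Poisson problem:

    \[
    \partial_t \phi = D\,\nabla^2 \phi + S(\mathbf{x})
    \quad\xrightarrow{\ \partial_t \phi \to 0\ }\quad
    \nabla^2 \phi = -\frac{S(\mathbf{x})}{D},
    \]

    so a kinetic (Boltzmann-equation) solver driven to its steady state yields the Poisson solution.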

  17. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...

  18. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)

  19. Boosting Brain Connectome Classification Accuracy in Alzheimer’s disease using Higher-Order Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    Liang Zhan

    2015-07-01

    Full Text Available Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease.

  20. HaploGrep 2: mitochondrial haplogroup classification in the era of high-throughput sequencing

    Science.gov (United States)

    Weissensteiner, Hansi; Pacher, Dominic; Kloss-Brandstätter, Anita; Forer, Lukas; Specht, Günther; Bandelt, Hans-Jürgen; Kronenberg, Florian; Salas, Antonio; Schönherr, Sebastian

    2016-01-01

    Mitochondrial DNA (mtDNA) profiles can be classified into phylogenetic clusters (haplogroups), which is of great relevance for evolutionary, forensic and medical genetics. With the extensive growth of the underlying phylogenetic tree summarizing the published mtDNA sequences, the manual process of haplogroup classification would be too time-consuming. The previously published classification tool HaploGrep provided an automatic way to address this issue. Here, we present the completely updated version HaploGrep 2 offering several advanced features, including a generic rule-based system for immediate quality control (QC). This allows detecting artificial recombinants and missing variants as well as annotating rare and phantom mutations. Furthermore, the handling of high-throughput data in form of VCF files is now directly supported. For data output, several graphical reports are generated in real time, such as a multiple sequence alignment format, a VCF format and extended haplogroup QC reports, all viewable directly within the application. In addition, HaploGrep 2 generates a publication-ready phylogenetic tree of all input samples encoded relative to the revised Cambridge Reference Sequence. Finally, new distance measures and optimizations of the algorithm increase accuracy and speed-up the application. HaploGrep 2 can be accessed freely and without any registration at http://haplogrep.uibk.ac.at. PMID:27084951

  1. Highly charged ions as a basis of optical atomic clockwork of exceptional accuracy.

    Science.gov (United States)

    Derevianko, Andrei; Dzuba, V A; Flambaum, V V

    2012-11-02

    We propose a novel class of atomic clocks based on highly charged ions. We consider highly forbidden laser-accessible transitions within the 4f^12 ground-state configurations of highly charged ions. Our evaluation of systematic effects demonstrates that these transitions may be used for building exceptionally accurate atomic clocks which may compete in accuracy with recently proposed nuclear clocks.

  2. The effect of pattern overlap on the accuracy of high resolution electron backscatter diffraction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2015-08-15

    High resolution, cross-correlation-based, electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest but overlap of patterns from the two grains could reduce accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary so that the effect on the accuracy of the cross correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for 200nm step size in Zircaloy-4.

  3. Multi-Temporal Land-Cover Classification of Agricultural Areas in Two European Regions with High Resolution Spotlight TerraSAR-X Data

    Directory of Open Access Journals (Sweden)

    Sylvia Herrmann

    2011-04-01

    Full Text Available Functioning ecosystems offer multiple services for human well-being (e.g., food, freshwater, fiber). Agriculture provides several of these services but can also cause negative impacts. Thus, it is essential to derive up-to-date information about agricultural land use and its change. This paper describes the multi-temporal classification of agricultural land use based on high resolution spotlight TerraSAR-X images. A stack of 14 dual-polarized radar images taken during the vegetation season has been used for two different study areas (North of Germany and Southeast Poland). They represent extremely diverse regions with regard to their population density, agricultural management, and geological and geomorphological conditions. Thereby, the transferability of the classification method across different regions is tested. The Maximum Likelihood classification is based on a large number of ground truth samples. Classification accuracies differ between the two regions. The overall accuracy for all classes is 61.78% for the German area and 39.25% for the Polish region. Accuracies improved notably for both regions (to about 90%) when single vegetation classes were merged into groups of classes. Such regular land use classifications, applicable to different European agricultural sites, can serve as a basis for monitoring systems for agricultural land use and its related ecosystems.

  4. Retrospective assessment of interobserver agreement and accuracy in classifications and measurements in subsolid nodules with solid components less than 8mm: which window setting is better?

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Roh-Eul [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of); Hwang, Eui Jin; Yoon, Soon Ho; Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Ahn, Soyeon [Seoul National University Bundang Hospital, Medical Research Collaborating Center, Seongnam-si (Korea, Republic of)

    2017-04-15

    To compare interobserver agreements among multiple readers and accuracy for the assessment of solid components in subsolid nodules between the lung and mediastinal window settings. Seventy-seven surgically resected nodules with solid components smaller than 8 mm were included in this study. In both lung and mediastinal windows, five readers independently assessed the presence and size of solid component. Bootstrapping was used to compare the interobserver agreement between the two window settings. Imaging-pathology correlation was performed to evaluate the accuracy. There were no significant differences in the interobserver agreements between the two windows for both identification (lung windows, k = 0.51; mediastinal windows, k = 0.57) and measurements (lung windows, ICC = 0.70; mediastinal windows, ICC = 0.69) of solid components. The incidence of false negative results for the presence of invasive components and the median absolute difference between the solid component size and the invasive component size were significantly higher on mediastinal windows than on lung windows (P < 0.001 and P < 0.001, respectively). The lung window setting had a comparable reproducibility but a higher accuracy than the mediastinal window setting for nodule classifications and solid component measurements in subsolid nodules. (orig.)

  5. Accuracy of reported flash point values on material safety data sheets and the impact on product classification.

    Science.gov (United States)

    Radnoff, Diane

    2013-01-01

    Material Safety Data Sheets (MSDSs) are the foundation of worker right-to-know legislation for chemical hazards. Suppliers can use product test data to determine a product's classification. Alternatively, they may use evaluation and professional judgment based on test results for the product or a product, material, or substance with similar properties. While the criteria for classifying products under the new Globally Harmonized System of Classification and Labeling of Chemicals (GHS) are different, a similar process is followed. Neither the current Workplace Hazardous Materials Information System (WHMIS) nor GHS require suppliers to test their products to classify them. In this project 83 samples of products classified as flammable or combustible, representing a variety of industry sectors and product types, were collected. Flash points were measured and compared to the reported values on the MSDSs. The classifications of the products were then compared using the WHMIS and GHS criteria. The results of the study indicated that there were significant variations between the disclosed and measured flash point values. Overall, more than one-third of the products had flash points lower than that disclosed on the MSDS. In some cases, the measured values were more than 20°C lower than the disclosed values. This could potentially result in an underestimation regarding the flammability of the product so it is important for employers to understand the limitations in the information provided on MSDSs when developing safe work procedures and training programs in the workplace. Nearly one-fifth of the products were misclassified under the WHMIS system as combustible when the measured flash point indicated that they should be classified as flammable when laboratory measurement error was taken into account. While a similar number of products were misclassified using GHS criteria, the tendency appeared to be to "over-classify" (provide a hazard class that was more conservative

  6. Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards

    Directory of Open Access Journals (Sweden)

    Mark Plitt

    2015-01-01

    Conclusions: While individuals can be classified as having ASD with statistically significant accuracy from their rs-fMRI scans alone, this method falls short of biomarker standards. Classification methods provided further evidence that ASD functional connectivity is characterized by dysfunction of large-scale functional networks, particularly those involved in social information processing.

  7. Development of an automatic calibration device for high-accuracy low temperature thermometers

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on the analysis and investigation of calibration systems for high-accuracy low temperature thermometers, a new facility for the automatic calibration of high-accuracy low temperature thermometers was developed. Continuous calibration at multiple points can be performed automatically with this device. According to the thermophysical characteristics of the constant-temperature block in this device, a segmented Fuzzy-PID (proportional-integral-differential) algorithm was applied. The experimental results showed that the temperature fluctuation was smaller than ±0.005 K over 30 min. Therefore, this new device can fully meet the calibration requirements of high-precision low temperature thermometers.
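    The segmented control idea can be sketched as a gain-scheduled PID loop (Python; the temperature bands, gains and toy plant response are illustrative placeholders, and the fuzzy rule base of the actual device is not reproduced):

    # Illustrative gain-scheduled ("segmented") PID temperature controller:
    # the PID gains are switched according to the size of the temperature
    # error, a simplified stand-in for the segmented Fuzzy-PID described above.
    class SegmentedPID:
        # (error_band_upper_limit, Kp, Ki, Kd) -- illustrative values only
        SEGMENTS = [(0.1, 2.0, 0.05, 0.5),
                    (1.0, 5.0, 0.20, 1.0),
                    (float("inf"), 12.0, 0.00, 2.0)]

        def __init__(self, setpoint, dt):
            self.setpoint, self.dt = setpoint, dt
            self.integral, self.prev_error = 0.0, 0.0

        def gains(self, error):
            for limit, kp, ki, kd in self.SEGMENTS:
                if abs(error) <= limit:
                    return kp, ki, kd

        def update(self, measured):
            error = self.setpoint - measured
            kp, ki, kd = self.gains(error)
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return kp * error + ki * self.integral + kd * derivative

    if __name__ == "__main__":
        pid = SegmentedPID(setpoint=77.0, dt=1.0)   # e.g. hold the block near 77 K
        temperature = 80.0
        for _ in range(5):
            heater_power = pid.update(temperature)
            temperature += 0.05 * heater_power      # toy plant response
            print(round(temperature, 3), round(heater_power, 3))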

  8. Study on High Accuracy Topographic Mapping via UAV-based Images

    Science.gov (United States)

    Chi, Yun-Yao; Lee, Ya-Fen; Tsai, Shang-En

    2016-10-01

    Unmanned aerial vehicles (UAVs) provide a promising tool for the acquisition of multi-temporal aerial stereo photos and high-resolution digital surface models. Recently, UAV flights have operated with a high degree of autonomy using the global positioning system and an onboard digital camera and computer. UAV-based maps can be obtained faster and more cheaply, but their accuracy is a concern. This paper aims to assess the ability to produce high-accuracy topographic maps by integrating images from a quad-rotor UAV with ground control points (GCPs). The field survey data were collected in the Errn river basin area in Tainan, Taiwan. The UAV-based topographic map of the study area is calibrated against the local coordinates of GCPs surveyed with a total station to an accuracy better than 1/2000. The comparison results show that the accuracy of the UAV-based topographic map is acceptable based on the overlap analysis. The results can serve as a reference for practical mapping survey work.

  9. The Utilization of Classifications in High-Energy Astrophysics Experiments

    Science.gov (United States)

    Atwood, Bill

    2012-03-01

    The history of high-energy gamma observations stretches back several decades. But it was with the launch of the Energetic Gamma Ray Experiment Telescope (EGRET) in 1991 onboard the Compton Gamma Ray Observatory (CGRO) [1], that the field entered a new era of discovery. At the high-energy end of the electromagnetic spectrum, incoming particles of light, photons, interact with matter mainly by producing electron-positron pairs and this process dominates above an energy of 10-30MeV depending on the material. To a high degree the directionality of the incoming gamma ray is reflected in the e+ and e-, and hence the detection of the trajectories of the e+e- pair can be used to infer the direction of the originating photon. Measuring these high-energy charged particles is the domain of high-energy particle physics and so it should be of little surprise that particle physicists played a significant role in the design and construction of EGRET, as well as the design and implementation of analysis methods for the resulting data. Prior to EGRET, only a handful of sources in the sky were known as high-energy gamma-ray emitters. During EGRET's 9-years mission the final catalog included over 270 sources including new types such as Gamma Ray Bursts (GRBs). This set the stage for the next-generation mission, the Gamma ray Large Area Space Telescope (GLAST) [2]. Very early in the EGRET mission, the realization that the high-energy gamma-ray sky was extremely interesting led to a competition to develop the next-generation instruments. The technology used in EGRET was frozen in the late 1970s and by 1992, enormous advances had been made in experimental particle physics. In particular the effort to develop solid state detectors, targeted for use at the Super Conducting Super Collider (SSC), had made the technology of silicon strip detectors (SSDs) commercially viable for use in large area arrays. Given the limitations imposed by the space environment (e.g., operate in a vacuum, scarce

  10. Novel Object-Based Filter for Improving Land-Cover Classification of Aerial Imagery with Very High Spatial Resolution

    Directory of Open Access Journals (Sweden)

    Zhiyong Lv

    2016-12-01

    Full Text Available Land cover classification using very high spatial resolution (VHSR) imaging plays a very important role in remote sensing applications. However, image noise usually reduces the classification accuracy of VHSR images. Image spatial filters have been recently adopted to improve VHSR image land cover classification. In this study, a new object-based image filter using topology and feature constraints is proposed, where an object is considered as a central object and has irregular shapes and various numbers of neighbors depending on the nature of the surroundings. First, multi-scale segmentation is used to generate a homogeneous image object and extract the corresponding vectors. Then, topology and feature constraints are proposed to select the adjacent objects, which present similar materials to the central object. Third, the feature of the central object is smoothed by the average of the selected objects' feature. This proposed approach is validated on three VHSR images, ranging from a fixed-wing aerial image to UAV images. The performance of the proposed approach is compared to a standard object-based approach (OO), an object correlative index (OCI) spatial feature based method, a recursive filter (RF), and a rolling guided filter (RGF), and has shown a 6%–18% improvement in overall accuracy.

  11. Enhancing the classification accuracy of steady-state visual evoked potential-based brain-computer interfaces using phase constrained canonical correlation analysis

    Science.gov (United States)

    Pan, Jie; Gao, Xiaorong; Duan, Fang; Yan, Zheng; Gao, Shangkai

    2011-06-01

    In this study, a novel method of phase constrained canonical correlation analysis (p-CCA) is presented for classifying steady-state visual evoked potentials (SSVEPs) using multichannel electroencephalography (EEG) signals. p-CCA is employed to improve the performance of the SSVEP-based brain-computer interface (BCI) system using standard CCA. SSVEP response phases are estimated based on the physiologically meaningful apparent latency and are added as a reliable constraint into standard CCA. The results of EEG experiments involving 10 subjects demonstrate that p-CCA consistently outperforms standard CCA in classification accuracy. The improvement is up to 6.8% using 1-4 s data segments. The results indicate that the reliable measurement of phase information is of importance in SSVEP-based BCIs.
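    For context, the standard CCA detector that p-CCA constrains can be sketched as follows (Python with NumPy and scikit-learn, on synthetic EEG; the phase constraint itself is not implemented): sine/cosine references are built for each candidate stimulus frequency and the frequency with the largest canonical correlation to the multichannel EEG segment is selected.

    # Standard CCA-based SSVEP frequency detection: correlate a multichannel
    # EEG segment with sine/cosine references at each candidate stimulus
    # frequency and pick the frequency with the largest canonical correlation.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def reference_signals(freq, fs, n_samples, n_harmonics=2):
        t = np.arange(n_samples) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * freq * t))
            refs.append(np.cos(2 * np.pi * h * freq * t))
        return np.column_stack(refs)

    def classify_ssvep(eeg, fs, stim_freqs):
        # eeg: array of shape (n_samples, n_channels)
        corrs = []
        for f in stim_freqs:
            Y = reference_signals(f, fs, eeg.shape[0])
            u, v = CCA(n_components=1).fit_transform(eeg, Y)
            corrs.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
        return stim_freqs[int(np.argmax(corrs))], corrs

    if __name__ == "__main__":
        fs, secs, target = 250, 2, 10.0             # synthetic 10 Hz SSVEP
        t = np.arange(fs * secs) / fs
        rng = np.random.default_rng(0)
        eeg = np.column_stack([np.sin(2 * np.pi * target * t + 0.3 * c)
                               for c in range(8)]) + 0.5 * rng.normal(size=(fs * secs, 8))
        print(classify_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))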

  12. Millimeter-Wave Airborne Interferometry for High-accuracy Topography Mapping

    Science.gov (United States)

    Moller, D.; Hensley, S.; Wu, X.; Rodriguez, E.

    2011-12-01

    sensor geometry, bandwidth and number of channels needed for SWOT cal/val cannot be met within the framework of GLISTIN-A or a similar interface to UAVSAR. To address SWOT's cal/val requirements, the Ka-band SWOT Phenomenology Airborne Radar (KaSPAR) builds upon GLISTIN-A heritage and is the primary payload of the AirSWOT program. KaSPAR is a unique system with multiple temporal and cross-track baselines to fully characterize the scattering and statistics expected from SWOT, provide data for developing classification algorithms, and understanding instrument performance over the vast variety of scenes that SWOT will encounter. Furthermore a >5km swath high-accuracy WSE mapping capability provides the framework to translate traditional point or profile measurements to the spatial framework that SWOT will measure. Specific measurements from the integrated AirSWOT assembly are 1) WSE maps over a 5km swath with <3cm mean error at 100m x 100m postings (for ocean surface at 6m/s wind speed), 2) 2-D slope maps derived from WSE maps and 3) shoreline delineation at 10m resolution. These measurements will be made at resolutions exceeding that of SWOT to better characterize corrections for the spaceborne sensor.

  13. Very high-accuracy calibration of radiation pattern and gain of a near-field probe

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2014-01-01

    In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the European Space Agency.

  14. Ship detection and classification in high-resolution remote sensing imagery using shape-driven segmentation method

    Science.gov (United States)

    Tao, Chao; Tan, Yihua; Cai, Huajie; Tian, Jinwen

    2009-10-01

    High-resolution remote sensing imagery provides an important data source for ship detection and classification. However, due to shadow effects, noise, and the low contrast between objects and background in this kind of data, traditional segmentation approaches have much difficulty in separating ship targets from the complex sea-surface background. In this paper, we propose a novel coarse-to-fine segmentation strategy for identifying ships in 1-meter resolution imagery. The approach starts with a coarse segmentation that uses local intensity variance as the detection feature to separate ship objects from the background. After roughly obtaining the regions containing ship candidates, a shape-driven level-set segmentation is used to extract the precise boundary of each object, which benefits the following stages such as detection and classification. Experimental results show that the proposed approach outperforms other algorithms in terms of recognition accuracy.

  15. Quantification of CT images for the classification of high- and low-risk pancreatic cysts

    Science.gov (United States)

    Gazit, Lior; Chakraborty, Jayasree; Attiyeh, Marc; Langdon-Embry, Liana; Allen, Peter J.; Do, Richard K. G.; Simpson, Amber L.

    2017-03-01

    Pancreatic cancer is the most lethal cancer with an overall 5-year survival rate of 7% [1] due to the late stage at diagnosis and the ineffectiveness of current therapeutic strategies. Given the poor prognosis, early detection at a pre-cancerous stage is the best tool for preventing this disease. Intraductal papillary mucinous neoplasms (IPMN), cystic tumors of the pancreas, represent the only radiographically identifiable precursor lesion of pancreatic cancer and are known to evolve stepwise from low-to-high-grade dysplasia before progressing into an invasive carcinoma. Observation is usually recommended for low-risk (low- and intermediate-grade dysplasia) patients, while high-risk (high-grade dysplasia and invasive carcinoma) patients undergo resection; hence, patient selection is critically important in the management of pancreatic cysts [2]. Radiologists use standard criteria such as main pancreatic duct size, cyst size, or presence of a solid enhancing component in the cyst to optimally select patients for surgery [3]. However, these findings are subject to a radiologist's interpretation and have been shown to be inconsistent with regards to the presence of a mural nodule or solid component [4]. We propose objective classification of risk groups based on quantitative imaging features extracted from CT scans. We apply new features that represent the solid component (i.e. areas of high intensity) within the cyst and extract standard texture features. An adaptive boost classifier [5] achieves the best performance with area under receiver operating characteristic curve (AUC) of 0.73 and accuracy of 77.3% for texture features. The random forest classifier achieves the best performance with AUC of 0.71 and accuracy of 70.8% with the solid component features.
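    The reported evaluation can be sketched as follows (Python with scikit-learn; a random feature table stands in for the texture and solid-component features extracted from CT):

    # Sketch of the evaluation: train an adaptive boosting classifier on
    # per-cyst feature vectors (texture / solid-component features in the
    # paper; random numbers here) and report cross-validated AUC and accuracy.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score, accuracy_score

    rng = np.random.default_rng(0)
    n_cysts, n_features = 120, 30
    X = rng.normal(size=(n_cysts, n_features))       # stand-in feature matrix
    y = rng.integers(0, 2, size=n_cysts)             # 0 = low risk, 1 = high risk

    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

    print("AUC:", round(roc_auc_score(y, proba), 3))
    print("Accuracy:", round(accuracy_score(y, proba > 0.5), 3))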

  16. [Study on high accuracy detection of multi-component gas in oil-immerse power transformer].

    Science.gov (United States)

    Fan, Jie; Chen, Xiao; Huang, Qi-Feng; Zhou, Yu; Chen, Gang

    2013-12-01

    In order to solve the problem of low accuracy and mutual interference in multi-component gas detection, a multi-component gas detection network with high accuracy was designed. A semiconductor laser with narrow bandwidth was utilized as the light source, and a novel long-path gas cell was also used in this system. By using a single sine signal to modulate the laser spectrum and applying space division multiplexing (SDM) and time division multiplexing (TDM) techniques, the detection of multi-component gas was achieved. The experiments indicate that the linearity correlation coefficient is 0.99 and the relative measurement error is less than 4%. The dynamic response time of the system, measured by gradually filling a volume of multi-component gas into the gas cell, is less than 15 s. The system has the advantages of high accuracy and quick response, and it can be used for real-time on-line monitoring of fault gases in power transformers.

  17. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso algorithm is available in the R package msgl. Its performance scales well with the problem size as illustrated by one of the examples considered - a 50 class classification problem with 10 k features, which amounts to estimating 500 k parameters. © 2013 Elsevier Inc. All rights reserved.

  18. P2P Streaming Traffic Classification in High-Speed Networks

    Institute of Scientific and Technical Information of China (English)

    Chen Luying; Cong Rong; Yang Jie; Yu Hua

    2011-01-01

    The growing P2P streaming traffic brings a variety of problems and challenges to ISP networks and service providers. A P2P streaming traffic classification method based on sampling technology is presented in this paper. By analyzing the traffic statistical features and network behavior of P2P streaming, a group of flow characteristics was found that makes P2P streaming more recognizable among other applications. Attributes from Netflow and those proposed by us are compared in terms of classification accuracy, and so are the results of different sampling rates. It is shown that the unified classification model with the proposed attributes can identify P2P streaming quickly and efficiently in an online system. Even with a 1:50 sampling rate, the recognition accuracy can be higher than 94%. Moreover, we have evaluated the CPU resources, storage capacity and time consumption before and after sampling; the results show that the classification model with sampling can significantly reduce the resource requirements while maintaining the same recognition accuracy.
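    The overall workflow can be sketched as follows (Python with scikit-learn; packet records, flow labels and the chosen flow attributes are hypothetical stand-ins for the Netflow-style features of the paper): sample packets at a fixed rate such as 1:50, aggregate the sampled packets into per-flow statistical features, and train a classifier on those features.

    # Workflow sketch: sample packets at a fixed rate (e.g. 1:50), aggregate
    # the sampled packets into per-flow statistical features, and classify
    # flows as P2P streaming or not. Packet records and labels are synthetic.
    import numpy as np
    from collections import defaultdict
    from sklearn.ensemble import RandomForestClassifier

    def sample_packets(packets, rate=50, seed=0):
        rng = np.random.default_rng(seed)
        return [p for p in packets if rng.integers(rate) == 0]

    def flow_features(packets):
        flows = defaultdict(list)
        for flow_id, size, ts in packets:
            flows[flow_id].append((size, ts))
        ids, feats = [], []
        for fid, pkts in flows.items():
            sizes = np.array([s for s, _ in pkts], dtype=float)
            times = np.array(sorted(t for _, t in pkts))
            gaps = np.diff(times) if len(times) > 1 else np.array([0.0])
            feats.append([len(pkts), sizes.mean(), sizes.std(), gaps.mean(), gaps.std()])
            ids.append(fid)
        return ids, np.array(feats)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Synthetic packets: (flow_id, payload_size, timestamp)
        packets = [(int(rng.integers(200)), int(rng.integers(40, 1500)), rng.random() * 60)
                   for _ in range(200000)]
        ids, X = flow_features(sample_packets(packets, rate=50))
        y = rng.integers(0, 2, size=len(ids))        # hypothetical P2P / non-P2P labels
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        print(clf.predict(X[:5]))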

  19. Feature Learning Based Approach for Weed Classification Using High Resolution Aerial Images from a Digital Camera Mounted on a UAV

    Directory of Open Access Journals (Sweden)

    Calvin Hung

    2014-12-01

    Full Text Available The development of low-cost unmanned aerial vehicles (UAVs) and lightweight imaging sensors has resulted in significant interest in their use for remote sensing applications. While significant attention has been paid to the collection, calibration, registration and mosaicking of data collected from small UAVs, the interpretation of these data into semantically meaningful information can still be a laborious task. A standard data collection and classification work-flow requires significant manual effort for segment size tuning, feature selection and rule-based classifier design. In this paper, we propose an alternative learning-based approach using feature learning to minimise the manual effort required. We apply this system to the classification of invasive weed species. Small UAVs are suited to this application, as they can collect data at high spatial resolutions, which is essential for the classification of small or localised weed outbreaks. In this paper, we apply feature learning to generate a bank of image filters that allows for the extraction of features that discriminate between the weeds of interest and background objects. These features are pooled to summarise the image statistics and form the input to a texton-based linear classifier that classifies an image patch as weed or background. We evaluated our approach to weed classification on three weeds of significance in Australia: water hyacinth, tropical soda apple and serrated tussock. Our results showed that collecting images at 5–10 m resulted in the highest classifier accuracy, indicated by F1 scores of up to 94%.
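    The generic feature-learning pipeline described above can be sketched as follows (Python with NumPy and scikit-learn; patches and labels are synthetic, and k-means filter learning with rectified responses is used here as a simple stand-in for the paper's exact filter-learning and texton pooling):

    # Generic feature-learning sketch: learn a small filter bank from raw
    # image patches with k-means, describe each patch by its rectified filter
    # responses, and train a linear classifier (weed vs. background labels
    # are synthetic here).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def learn_filter_bank(patches, n_filters=16):
        flat = patches.reshape(len(patches), -1)
        flat = flat - flat.mean(axis=1, keepdims=True)       # zero-mean patches
        km = KMeans(n_clusters=n_filters, n_init=5, random_state=0).fit(flat)
        return km.cluster_centers_                            # one filter per row

    def describe(patches, filters):
        flat = patches.reshape(len(patches), -1)
        flat = flat - flat.mean(axis=1, keepdims=True)
        responses = flat @ filters.T                          # correlation with each filter
        return np.maximum(responses, 0)                       # rectified responses as descriptor

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        patches = rng.random((2000, 8, 8))                    # synthetic 8x8 image patches
        labels = rng.integers(0, 2, size=2000)                # weed vs. background (synthetic)
        filters = learn_filter_bank(patches[:500])
        X = describe(patches, filters)
        clf = LinearSVC(max_iter=5000).fit(X, labels)
        print(clf.score(X, labels))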

  20. Tree Species Classification By Multiseasonal High Resolution Satellite Data

    Science.gov (United States)

    Elatawneh, Alata; Wallner, Adelheid; Straub, Christoph; Schneider, Thomas; Knoke, Thomas

    2013-12-01

    Accurate forest tree species mapping is a fundamental issue for sustainable forest management and planning. Forest tree species mapping by means of remote sensing data is still a topic to be investigated. The Bavarian state institute of forestry is investigating the potential of using digital aerial images for forest management purposes. However, using aerial images is still cost- and time-consuming, in addition to their acquisition restrictions. New spaceborne sensor generations such as RapidEye, with a very high temporal resolution offering multiseasonal data, have the potential to improve forest tree species mapping. In this study, we investigated the potential of multiseasonal RapidEye data for mapping tree species in a mid-European forest in southern Germany. The RapidEye data of level 3A were collected on ten different dates in the years 2009, 2010 and 2011. For the data analysis, a model was developed which combines the Spectral Angle Mapper technique with a 10-fold cross-validation. The analysis succeeded in differentiating four tree species: Norway spruce (Picea abies L.), silver fir (Abies alba Mill.), European beech (Fagus sylvatica) and maple (Acer pseudoplatanus). The model success was evaluated using digital aerial images acquired in the year 2009 and inventory point records from the 2008/09 inventory. Model results of the multiseasonal RapidEye data analysis achieved an overall accuracy of 76%. However, the success of the model was evaluated only for all the identified species together and not for the individual species.
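
    A hedged sketch of the model structure described above (Spectral Angle Mapper combined with 10-fold cross-validation) is given below. The synthetic multiseasonal pixel spectra, class count and band count are placeholders, not the RapidEye data set used in the study.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    def spectral_angle(x, ref):
        """Angle (radians) between a pixel spectrum and a class reference spectrum."""
        cos = np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sam_classify(X, references):
        """Assign each pixel to the class with the smallest spectral angle."""
        angles = np.array([[spectral_angle(x, r) for r in references] for x in X])
        return angles.argmin(axis=1)

    rng = np.random.default_rng(3)
    n_bands, n_classes = 50, 4                   # e.g. 10 dates x 5 bands, 4 species
    class_means = rng.uniform(0.1, 0.6, size=(n_classes, n_bands))
    y = rng.integers(0, n_classes, size=600)
    X = class_means[y] + rng.normal(0.0, 0.03, size=(600, n_bands))

    accs = []
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        refs = [X[train_idx][y[train_idx] == c].mean(axis=0) for c in range(n_classes)]
        accs.append((sam_classify(X[test_idx], refs) == y[test_idx]).mean())
    print("10-fold overall accuracy: %.2f" % np.mean(accs))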

  1. Analysis of Accuracy of a High-speed Mobile Platform Control System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The efficient manufacturing technique involves high-speed control of a mobile platform system. A linear actuator is presented in this paper. The linear actuator is constructed as a linear stepper motor. However, to sustain both high accuracy and high speed for the position and speed control, a single-stack computer system is constructed and a special control algorithm is prescribed to control the linear actuator continuously. In this paper, the nonlinear errors resulting from the magnetic saturation and the h...

  2. Axis-Exchanged Compensation and Gait Parameters Analysis for High Accuracy Indoor Pedestrian Dead Reckoning

    Directory of Open Access Journals (Sweden)

    Honghui Zhang

    2015-01-01

    Full Text Available Pedestrian dead reckoning (PDR) is an effective way to navigate when coupled with GNSS (Global Navigation Satellite System) or in weak GNSS signal environments such as indoor scenarios. However, indoor localization with an accuracy of 1 to 2 meters determined by PDR based on a MEMS IMU is still very challenging. First, heading estimation is an important problem in PDR because of singularities. Second, walking distance estimation is also a critical problem because pedestrian gait is highly variable. To address these two problems, this paper proposes an axis-exchanged compensation and gait parameters analysis algorithm to improve the navigation accuracy. In detail, an axis-exchanged compensation factored quaternion algorithm is put forward first to overcome the singularities in heading estimation without increasing the amount of computation. The real-time heading is then updated by an R-adaptive Kalman filter. Moreover, the gait parameters analysis algorithm can be divided into two steps: cadence detection and step length estimation. A method of cadence classification and interval symmetry is proposed to detect the cadence accurately, and a step length model adjusted by cadence is established for step length estimation. Compared to traditional PDR navigation, experimental results showed that the navigation error is reduced by 32.6%.
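
    A toy sketch of the two gait-analysis steps (cadence detection followed by a cadence-adjusted step length model) is given below. The peak-detection thresholds and the linear step length model with coefficients a and b are illustrative assumptions, not the calibrated model of the paper.

    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                                   # accelerometer sample rate (Hz)
    t = np.arange(0.0, 20.0, 1.0 / fs)
    true_cadence = 1.8                           # steps per second
    acc_mag = 9.81 + 1.5 * np.sin(2 * np.pi * true_cadence * t) \
              + 0.2 * np.random.default_rng(4).normal(size=t.size)

    # Cadence detection: count acceleration-magnitude peaks at least 0.4 s apart.
    peaks, _ = find_peaks(acc_mag, height=10.0, distance=int(0.4 * fs))
    cadence = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])

    # Cadence-adjusted step length model (assumed linear form): L = a + b * cadence.
    a, b = 0.25, 0.25                            # hypothetical coefficients (m, m*s)
    step_length = a + b * cadence
    distance = step_length * len(peaks)
    print("cadence %.2f steps/s, step length %.2f m, walked distance %.1f m"
          % (cadence, step_length, distance))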

  3. SVM Classification of Urban High-Resolution Imagery Using Composite Kernels and Contour Information

    Directory of Open Access Journals (Sweden)

    Aissam Bekkari

    2013-08-01

    Full Text Available The classification of remote sensing images has advanced greatly, given the availability of images at different resolutions as well as an abundance of very efficient classification algorithms. A number of works have shown promising results by fusing spatial and spectral information using Support Vector Machines (SVM), a group of supervised classification algorithms that have recently been used in the remote sensing field; however, the addition of contour information to both spectral and spatial information is still less explored. For this purpose we propose a methodology exploiting the properties of Mercer's kernels to construct a family of composite kernels that easily combine multi-spectral features and Haralick texture features as data sources. The composite kernel that gives the best results is then used to introduce contour information into the classification process. The proposed approach was tested on common scenes of urban imagery. The three different kernels tested allow a significant improvement of the classification performance and a flexibility to balance between the spatial and spectral information in the classifier. The experimental results indicate a global accuracy value of 93.52%; the addition of contour information, described by Fourier descriptors, the Hough transform and Zernike moments, increases the obtained global accuracy by 1.61%, which is very promising.
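
    The composite-kernel idea can be sketched as a weighted sum of a spectral kernel and a spatial/texture kernel fed to an SVM with a precomputed kernel, as below. The synthetic features and the weight mu are illustrative assumptions; by Mercer's theorem a convex combination of valid kernels is itself a valid kernel, and contour features (Fourier descriptors, Hough transform, Zernike moments) would simply enter as a third kernel term.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(5)
    n = 300
    y = rng.integers(0, 2, size=n)
    X_spec = rng.normal(size=(n, 6)) + y[:, None] * 0.8    # stand-in spectral bands
    X_tex = rng.normal(size=(n, 10)) + y[:, None] * 0.5    # stand-in Haralick features

    def composite_kernel(A_spec, B_spec, A_tex, B_tex, mu=0.6):
        """mu * spectral kernel + (1 - mu) * spatial/texture kernel."""
        return mu * rbf_kernel(A_spec, B_spec) + (1.0 - mu) * rbf_kernel(A_tex, B_tex)

    tr, te = np.arange(0, 200), np.arange(200, n)
    K_train = composite_kernel(X_spec[tr], X_spec[tr], X_tex[tr], X_tex[tr])
    K_test = composite_kernel(X_spec[te], X_spec[tr], X_tex[te], X_tex[tr])

    svm = SVC(kernel="precomputed", C=10.0).fit(K_train, y[tr])
    print("test accuracy:", (svm.predict(K_test) == y[te]).mean())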

  4. Accuracy of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) as a research tool for identification of patients with uveitis and scleritis.

    Science.gov (United States)

    Uchiyama, Eduardo; Faez, Sepideh; Nasir, Humzah; Unizony, Sebastian H; Plenge, Robert; Papaliodis, George N; Sobrin, Lucia

    2015-04-01

    To report on the accuracy of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for identifying patients with polymyalgia rheumatica (PMR) and concurrent noninfectious inflammatory ocular conditions in a large healthcare organization database. Queries for patients with PMR and uveitis or scleritis were executed in two general teaching hospitals' databases. Patients with ocular infections or other rheumatologic conditions were excluded. Patients with PMR and ocular inflammation were identified, and medical records were reviewed to confirm accuracy. The query identified 10,697 patients with the ICD-9-CM code for PMR and 4154 patients with the codes for noninfectious inflammatory ocular conditions. The number of patients with both PMR and noninfectious uveitis or scleritis by ICD-9-CM codes was 66. On detailed review of the charts of these 66 patients, 31 (47%) had a clinical diagnosis of PMR, 43 (65%) had noninfectious uveitis or scleritis, and only 20 (30%) had PMR with concurrent noninfectious uveitis or scleritis confirmed based on clinical notes. While the use of ICD-9-CM codes has been validated for medical research of common diseases, our results suggest that ICD-9-CM codes may be of limited value for epidemiological investigations of diseases which can be more difficult to diagnose. The ICD-9-CM codes for rarer diseases (PMR, uveitis and scleritis) did not reflect the true clinical problem in a large proportion of our patients. This is particularly true when coding is performed by physicians outside the area of specialty of the diagnosis.

  5. Machine Learning Approaches for High-resolution Urban Land Cover Classification: A Comparative Study

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL; Chandola, Varun [ORNL; Cheriyadat, Anil M [ORNL; Bright, Eddie A [ORNL; Bhaduri, Budhendra L [ORNL; Graesser, Jordan B [ORNL

    2011-01-01

    The proliferation of machine learning approaches makes it difficult to identify a suitable classification technique for analyzing high-resolution remote sensing images. In this study, ten classification techniques from five broad machine learning categories were compared. Surprisingly, the performance of simple statistical classification schemes such as maximum likelihood and logistic regression is very close to that of more complex and recent techniques. Given that these two classifiers require little input from the user, they should still be considered for most classification tasks. Multiple classifier systems are a good choice if the resources permit.
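
    A comparison of this kind can be reproduced in outline with a common cross-validation protocol, as in the hedged sketch below. The data set is synthetic and the classifier list is only a small subset of the ten techniques compared in the study.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    # Synthetic stand-in for a labeled set of image-derived feature vectors.
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                               n_classes=4, random_state=0)

    classifiers = {
        "logistic regression": LogisticRegression(max_iter=2000),
        "Gaussian naive Bayes": GaussianNB(),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM (RBF kernel)": SVC(),
    }

    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print("%-22s mean accuracy %.3f" % (name, scores.mean()))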

  6. High-accuracy C-14 measurements for atmospheric CO2 samples by AMS

    NARCIS (Netherlands)

    Meijer, H.A.J.; Pertuisot, M.H.; van der Plicht, J.

    2006-01-01

    In this paper, we investigate how to achieve high-accuracy radiocarbon measurements by accelerator mass spectrometry (AMS) and present measurement series (performed on archived CO2) of 14CO2 between 1985 and 1991 for Point Barrow (Alaska) and the South Pole. We report in detail the measurement

  7. Further results on the operation of high-accuracy drift chambers

    NARCIS (Netherlands)

    Breskin, A.; Charpak, G.; Gabioud, B.; Sauli, F.; Trautner, N.

    Optimization of the working parameters in the drift chambers with adjustable electric fields permits stable operation and high accuracies. Full saturation of the drift velocity leads to remarkable improvements, namely a very linear space-time correlation for perpendicular tracks, and simple

  8. From journal to headline: the accuracy of climate science news in Danish high quality newspapers

    DEFF Research Database (Denmark)

    Vestergård, Gunver Lystbæk

    2011-01-01

    analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies though the majority was found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...

  9. A 1-V 15 μW High-Accuracy Temperature Switch

    NARCIS (Netherlands)

    Schinkel, D.; Boer, de R.P.; Annema, A.J.; Tuijl, van A.J.M.

    2004-01-01

    A CMOS temperature switch with uncalibrated high accuracy is presented. The circuit is based on the classical CMOS bandgap reference structure, using parasitic PNPs and a PTAT multiplier. The circuit was designed in a standard digital 0.18 μm CMOS process. The temperature switch has an in-designed hysteresis

  11. Classification of Focal Prostatic Lesions on Transrectal Ultrasound (TRUS) and the Accuracy of TRUS to Diagnose Prostate Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ho Yun [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Lee, Hak Jong; Byun, Seok Soo; Lee, Sang Eun; Hong, Sung Kyu [Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Kim, Seung Hyup [Seoul National University Hospital, Seoul (Korea, Republic of)

    2009-06-15

    To improve the diagnostic efficacy of transrectal ultrasound (TRUS)-guided targeted prostatic biopsies, we have suggested the use of a new scoring system for the prediction of malignancies based on the characteristics of focal suspicious lesions as depicted on TRUS. A total of 350 consecutive patients with or without prostate cancer who underwent targeted biopsies for 358 lesions were included in the study. The data obtained from participants were randomized into two groups: the training set (n = 240) and the test set (n = 118). The characteristics of focal suspicious lesions were evaluated for the training set and the correlation between TRUS findings and the presence of a malignancy was analyzed. Multiple logistic regression analysis was used to identify variables capable of predicting prostatic cancer. A scoring system that used a 5-point scale for better malignancy prediction was determined from the training set. Positive predictive values for malignancy prediction and the diagnostic accuracy of the scored components with the use of receiver operating characteristic curve analysis were evaluated by test set analyses. Subsequent multiple logistic regression analysis determined that shape, margin irregularity, and vascularity were factors significantly and independently associated with the presence of a malignancy. Based on the use of the scoring system for malignancy prediction derived from the significant TRUS findings and the interactions of characteristics, a positive predictive value of 80% was achieved for a score of 4 when applied to the test set. The area under the receiver operating characteristic curve (AUC) for the overall lesion score was 0.81. We have demonstrated that a scoring system for malignancy prediction developed for the characteristics of focal suspicious lesions as depicted on TRUS can help predict the outcome of TRUS-guided biopsies.
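
    The step from a fitted logistic regression to a simple additive point score can be sketched as below. The data, coefficients and point values are synthetic placeholders meant only to illustrate the idea of a lesion score built from shape, margin irregularity and vascularity; they are not the scoring system derived in the study.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n = 240                                            # size of a "training set"
    # Binary findings per lesion: irregular shape, irregular margin, increased vascularity.
    X = rng.integers(0, 2, size=(n, 3))
    logit = -2.0 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # simulated biopsy outcome

    model = LogisticRegression().fit(X, y)

    # Convert coefficients to integer points, scaled to the smallest coefficient.
    points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
    print("points per finding:", dict(zip(["shape", "margin", "vascularity"], points.tolist())))

    def lesion_score(shape, margin, vascularity):
        """Total score for one lesion; higher scores suggest higher malignancy risk."""
        return int(np.dot(points, [shape, margin, vascularity]))

    print("score for an irregular, vascular lesion:", lesion_score(1, 1, 1))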

  12. Surgical accuracy in high tibial osteotomy: coronal equivalence of computer navigation and gap measurement.

    Science.gov (United States)

    Schröter, S; Ihle, C; Elson, D W; Döbele, S; Stöckle, U; Ateschrang, A

    2016-11-01

    Medial opening wedge high tibial osteotomy (MOW HTO) is now a successful operation with a range of indications, requiring an individualised approach to the choice of intended correction. This manuscript introduces the concept of surgical accuracy as the absolute deviation of the achieved correction from the intended correction, where small values represent greater accuracy. Surgical accuracy is compared in a randomised controlled trial (RCT) between gap measurement and computer navigation groups. This was a prospective RCT conducted over 3 years of 120 consecutive patients with varus malalignment and medial compartment osteoarthritis, who underwent MOW HTO. All procedures were planned with digital software. Patients were randomly assigned to the gap measurement or computer navigation group. Coronal plane alignment was judged using the mechanical tibiofemoral angle (mTFA), before and after surgery. Absolute (positive) values were calculated for surgical accuracy in each individual case. There was no significant difference in the mean intended correction between groups. The achieved mTFA revealed a small under-correction in both groups, attributed to a failure to account for saw blade thickness (gap measurement) and over-compensation for weight bearing (computer navigation). Surgical accuracy was 1.7° ± 1.2° (gap measurement) compared to 2.1° ± 1.4° (computer navigation), without statistical significance. The difference in tibial slope increases of 2.7° ± 3.9° (gap measurement) and 2.1° ± 3.9° (computer navigation) reached statistical significance. This work is clinically relevant because coronal surgical accuracy was not superior in either group. Therefore, the increased expense and surgical time associated with navigated MOW HTO are not supported, because meticulously conducted gap measurement yields equivalent surgical accuracy. Level of evidence: I.

  13. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Full Text Available Abstract Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.

  14. Development of an object-based classification model for mapping mountainous forest cover at high elevation using aerial photography

    Science.gov (United States)

    Lateb, Mustapha; Kalaitzidis, Chariton; Tompoulidou, Maria; Gitas, Ioannis

    2016-08-01

    Climate change and overall temperature increase result in changes in forest cover at high elevations. Due to the long life cycle of trees, these changes are very gradual and can be observed only over long periods of time. In order to use remote sensing imagery for this purpose, it needs to have very high spatial resolution and to have been acquired at least 50 years ago. At the moment, the only type of remote sensing imagery with these characteristics is historical black-and-white aerial photographs. This study used an aerial photograph from 1945 in order to map the forest cover of the Olympus National Park at that date. An object-based classification (OBC) model was developed in order to classify forest and discriminate it from other types of vegetation. Due to the lack of near-infrared information, the model had to rely solely on the tone of the objects, as well as their geometric characteristics. The model functioned on three segmentation levels, using sub-/super-object relationships and utilising vegetation density to discriminate forest and non-forest vegetation. The accuracy of the classification was assessed using 503 visually interpreted and randomly distributed points, resulting in a 92% overall accuracy. The model uses unbiased parameters that are important for differentiating between forest and non-forest vegetation and should be transferable to other study areas of mountainous forests at high elevations.

  15. A high-accuracy surgical augmented reality system using enhanced integral videography image overlay.

    Science.gov (United States)

    Zhang, Xinran; Chen, Guowen; Liao, Hongen

    2015-01-01

    Image-guided surgery has been used in the clinic to improve surgical safety and accuracy. Augmented reality (AR) techniques, which can provide intuitive image guidance, have evolved greatly in recent years. As one promising approach to surgical AR systems, integral videography (IV) autostereoscopic image overlay has achieved accurate fusion of full-parallax guidance into the surgical scene. This paper describes an image-enhanced high-accuracy IV overlay system. A flexible optical image enhancement system (IES) is designed to increase the resolution and quality of the IV image. Furthermore, we introduce a novel IV rendering algorithm to improve the spatial accuracy, taking into account the distortion introduced by the micro lens array. Preliminary experiments validated that the image accuracy and resolution are improved with the proposed methods. The resolution of the IV image could be improved to 1 mm for a micro lens array with a pitch of 2.32 mm and an IES magnification value of 0.5. The relative deviations of accuracy in the depth and lateral directions are -4.68 ± 0.83% and -9.01 ± 0.42%, respectively.

  16. Horizontal Positional Accuracy of Google Earth's High-Resolution Imagery Archive.

    Science.gov (United States)

    Potere, David

    2008-12-08

    Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth's landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate (t-test) than the 48 control points based on aerial photography (41.3 meters RMSE). Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world's peri-urban areas.

  17. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    Science.gov (United States)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with a distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ˜1% or better, and parameters related to the spiral structure can be constrained to within an error of 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that the BIC can be used to discriminate different dynamical models of the Galaxy.
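
    A drastically simplified toy version of this approach is sketched below: maser sources with known longitudes and parallax distances are simulated for a flat rotation curve, and R0 and Θ0 are recovered with a plain Metropolis MCMC sampler using the standard kinematic relation V_LSR = R0 sin(l) (Θ/R − Θ0/R0). The use of radial velocities only, a flat rotation curve and Gaussian noise are simplifying assumptions; the real analysis fits full 3D astrometry and spiral/bar parameters.

    import numpy as np

    rng = np.random.default_rng(7)
    R0_true, Th0_true, sigma_v = 8.0, 240.0, 7.0     # kpc, km/s, km/s

    n = 300
    lon = rng.uniform(0.0, 2.0 * np.pi, n)           # galactic longitude (rad)
    dist = rng.uniform(0.5, 10.0, n)                 # heliocentric distance (kpc)

    def v_los(R0, Th0, lon, dist):
        """LSR radial velocity for a flat rotation curve."""
        R = np.sqrt(R0 ** 2 + dist ** 2 - 2.0 * R0 * dist * np.cos(lon))
        return R0 * np.sin(lon) * (Th0 / R - Th0 / R0)

    v_obs = v_los(R0_true, Th0_true, lon, dist) + rng.normal(0.0, sigma_v, n)

    def log_like(theta):
        R0, Th0 = theta
        if not (5.0 < R0 < 12.0 and 150.0 < Th0 < 350.0):   # flat priors
            return -np.inf
        resid = v_obs - v_los(R0, Th0, lon, dist)
        return -0.5 * np.sum((resid / sigma_v) ** 2)

    # Plain Metropolis sampler.
    theta = np.array([7.0, 200.0])
    logp = log_like(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.05, 1.0])
        logp_prop = log_like(prop)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    chain = np.array(chain[5000:])                   # discard burn-in
    print("R0     = %.2f +/- %.2f kpc" % (chain[:, 0].mean(), chain[:, 0].std()))
    print("Theta0 = %.1f +/- %.1f km/s" % (chain[:, 1].mean(), chain[:, 1].std()))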

  18. Two-step Structural Design of Mesh Antennas for High Beam Pointing Accuracy

    Science.gov (United States)

    Zhang, Shuxin; Du, Jingli; Wang, Wei; Zhang, Xinghua; Zong, Yali

    2017-05-01

    A well-designed reflector surface with high beam pointing accuracy in electromagnetic performance is of practical significance to the space application of cable mesh reflector antennas. As for space requirements, circular polarizations are widely used in spaceborne antennas, which usually leads to a beam shift for offset reflectors and influences the beam pointing accuracy. A two-step structural design procedure is proposed to overcome the beam squint phenomenon for the high beam pointing accuracy design of circularly polarized offset cable mesh reflectors. A simple structural optimal design and an integrated structural-electromagnetic optimization are combined to alleviate the beam squint effect of circular polarizations. This is implemented by cable pretension design and adjustment to shape the offset cable mesh surface. Besides, in order to increase the efficiency of the integrated optimization, an updated Broyden-Fletcher-Goldfarb-Shanno (BFGS) Hessian matrix is employed in the optimization iteration with sequential quadratic programming. A circularly polarized offset cable mesh reflector is utilized to show the feasibility and effectiveness of the proposed procedure. A high beam pointing accuracy on the order of 0.0001° in electromagnetic performance is achieved.

  19. Kinematic classifications of local interacting galaxies: implications for the merger/disk classifications at high-z

    CERN Document Server

    Hung, Chao-Ling; Yuan, Tiantian; Larson, Kirsten L; Casey, Caitlin M; Smith, Howard A; Sanders, D B; Kewley, Lisa J; Hayward, Christopher C

    2015-01-01

    The classification of galaxy mergers and isolated disks is key for understanding the relative importance of galaxy interactions and secular evolution during the assembly of galaxies. The kinematic properties of galaxies as traced by emission lines have been used to suggest the existence of a significant population of high-z star-forming galaxies consistent with isolated rotating disks. However, recent studies have cautioned that post-coalescence mergers may also display disk-like kinematics. To further investigate the robustness of merger/disk classifications based on kinematic properties, we carry out a systematic classification of 24 local (U)LIRGs spanning a range of galaxy morphologies: from isolated spiral galaxies, ongoing interacting systems, to fully merged remnants. We artificially redshift the WiFeS observations of these local (U)LIRGs to z=1.5 to make a realistic comparison with observations at high-z, and also to ensure that all galaxies have the same spatial sampling of ~900 pc. Using both kineme...

  20. Concurrent Validity and Classification Accuracy of the Leiter and Leiter-R in Low Functioning Children with Autism.

    Science.gov (United States)

    Tsatsanis, Katherine D.; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S.; Klin, Ami; Volkmar, Fred R.

    2003-01-01

    The concurrent validity of the original and revised versions of the Leiter International Performance Scale was examined with 26 children (ages 4-16) with autism. Although the correlation between the two tests was high (.87), there were significant intra-individual discrepancies present in 10 cases, two of which were both large and clinically…

  1. The Impact of Ionospheric Disturbances on High Accuracy Positioning in Brazil

    Science.gov (United States)

    Yang, L.; Park, J.; Susnik, A.; Aquino, M. H.; Dodson, A.

    2013-12-01

    High positioning accuracy is a key requirement for a number of applications with high economic impact, such as precision agriculture, surveying, geodesy, land management and off-shore operations. Global Navigation Satellite Systems (GNSS) carrier phase measurement based techniques, such as Real Time Kinematic (RTK), Network-RTK (NRTK) and Precise Point Positioning (PPP), have played an important role in providing centimetre-level positioning accuracy and have become the core of the above applications. However, these techniques are especially sensitive to ionospheric perturbations, in particular scintillation. Brazil sits in one of the most affected regions of the Earth and can be regarded as a test-bed for severe ionospheric conditions. Over the Brazilian territory, the ionosphere behaves in a considerably unpredictable way and scintillation activity is very prominent, occurring especially after sunset. NRTK services may not be able to provide satisfactory accuracy, or even continuous positioning, during strong scintillation periods. CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), started in late 2012, is a project funded by the GSA (European GNSS Agency) and the European Commission under Framework Programme 7 to deliver improvements to carrier phase based high accuracy algorithms and their implementation in GNSS receivers, aiming to counter the adverse ionospheric effects over Brazil. As the first stage of this project, the ionospheric disturbances which affect the application of RTK, NRTK or PPP are characterized. Typical problems include degraded positioning accuracy, difficulties in ambiguity fixing, NRTK network interpolation errors, long PPP convergence times, etc. The project will identify how GNSS observables and existing algorithms are degraded by ionosphere-related phenomena, evaluating the impact on positioning techniques in terms of accuracy, integrity and availability. Through the

  2. High dimensional multiclass classification with applications to cancer diagnosis

    DEFF Research Database (Denmark)

    Vincent, Martin

    Probabilistic classifiers are introduced and it is shown that the only regular linear probabilistic classifier with convex risk is multinomial regression. Penalized empirical risk minimization is introduced and used to construct supervised learning methods for probabilistic classifiers. A sparse ... and a simulation based domain adaptation strategy is presented. It is shown that the presented computational contamination approach drastically improves the primary tumor site classification of liver-contaminated biopsies of metastases. A final classifier for identification of the primary tumor site is developed...

  3. Hybrid head-tracker being examined for the high-accuracy attack rotorcraft market

    Science.gov (United States)

    Blanton, Buddy

    2002-08-01

    The need for the helmet-mounted display (HMD) to present flight, navigation, and weapon information in the pilot's line-of-sight has continued to rise as helicopter missions increase in complexity. To obtain spatial correlation of the direction of the head line-of-sight and pilotage imagery generated from helicopter-mounted sensors, it is necessary to slave the sensors to the head motion. To accomplish this task, a head-tracking system (HTS) must be incorporated into the HMD. There are a variety of techniques that could be applied for locating the position and attitude of a helmet-mounted display. Regardless of the technology, an HTS must provide defined measurements of accuracy. System parameters include motion box size, angular range, pointing angle accuracy, pointing angle resolution, update rate, and slew rate. This paper focuses on a hybrid tracker implementation in which a combination of optical and inertial tracking using strap-down gyros is preferred. Specifically, this tracker implementation is being examined for the high-accuracy attack rotorcraft market which requires a high degree of accuracy. The performance and resultant cost of the tracker components are determined by the specific needs of the intended application. The paper will also indicate how the various requirements drive the cost, configuration, and performance of the resultant hybrid head-tracker.

  4. Literature survey of high-impact journals revealed reporting weaknesses in abstracts of diagnostic accuracy studies.

    Science.gov (United States)

    Korevaar, Daniël A; Cohen, Jérémie F; Hooft, Lotty; Bossuyt, Patrick M M

    2015-06-01

    Informative journal abstracts are crucial for the identification and initial appraisal of studies. We aimed to evaluate the informativeness of abstracts of diagnostic accuracy studies. PubMed was searched for reports of studies that had evaluated the diagnostic accuracy of a test against a clinical reference standard, published in 12 high-impact journals in 2012. Two reviewers independently evaluated the information contained in the included abstracts using 21 items deemed important based on published guidance for adequate reporting and study quality assessment. We included 103 abstracts. Crucial information on study population, setting, patient sampling and blinding, as well as confidence intervals around accuracy estimates, was frequently not reported. The mean number of reported items per abstract was 10.1 of 21 (standard deviation 2.2). The mean number of reported items was significantly lower for multiple-gate (case-control type) studies, in reports in specialty journals, and for studies with smaller sample sizes and lower abstract word counts. No significant differences were found between studies evaluating different types of tests. Many abstracts of diagnostic accuracy study reports in high-impact journals are insufficiently informative. Developing guidelines for such abstracts could help the transparency and completeness of reporting. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. About accuracy of the discrimination parameter estimation for the dual high-energy method

    Science.gov (United States)

    Osipov, S. P.; Chakhlov, S. V.; Osipov, O. S.; Shtein, A. M.; Strugovtsev, D. V.

    2015-04-01

    A set of mathematical formulas to estimate the accuracy of discrimination parameters for two implementations of the dual high-energy method - by the effective atomic number and by the level lines - is given. The hardware parameters which influence the accuracy of the discrimination parameters are stated. Recommendations to form the structure of the high-energy X-ray radiation impulses are formulated. To prove the applicability of the proposed procedure, the statistical errors of the discrimination parameters were calculated for the cargo inspection system of Tomsk Polytechnic University based on the portable betatron MIB-9. A comparison of the experimental and theoretical estimations of the discrimination parameter errors was carried out, proving the practical applicability of the algorithm for estimating the discrimination parameter errors for the dual high-energy method.

  6. High accuracy digital aging monitor based on PLL-VCO circuit

    Science.gov (United States)

    Yuejun, Zhang; Zhidi, Jiang; Pengjun, Wang; Xuelong, Zhang

    2015-01-01

    As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high-accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect thanks to the characteristic of the PLL, whose frequency has no relationship with the circuit aging phenomenon. The PLL-VCO monitor is implemented in TSMC low-power 65 nm CMOS technology, and its area occupies 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy by 2.4% at high temperature and by 18.7% at high voltage.

  7. High accuracy acoustic relative humidity measurement in duct flow with air.

    Science.gov (United States)

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  8. Evaluation of Tests of Perceptual Speed/Accuracy and Spatial Ability for Use in Military Occupational Classification

    Science.gov (United States)

    2014-08-22

    mainly measured g (e.g., Keith, Kranzler, & Flanagan, 2001; Stauffer, Ree, & Carretta, 1996). One explanation for the high degree of overlap in what is...

  9. Research on the accuracy of TM images land-use classification based on QUEST decision tree: A case study of Lijiang in Yunnan

    Institute of Scientific and Technical Information of China (English)

    吴健生; 潘况; 彭建; 黄秀兰

    2012-01-01

    The accuracy of research on land use/cover change (LUCC) is determined directly by the accuracy of land use classification derived from aerial and satellite images. After analyzing the factors that affect the accuracy of current remote sensing image classification, this study examines new trends in classification methods. Previous studies have shown that the speed and accuracy of QUEST (Quick, Unbiased and Efficient Statistical Tree) decision tree classification are superior to those of other decision tree classifiers. On this basis, the research classified Landsat TM-5 images of Lijiang, Yunnan province, and compared the result with that of maximum likelihood classification. The overall accuracy was 90.086%, higher than the overall accuracy (85.965%) of CART (Classification And Regression Tree); the Kappa coefficient was 0.849, higher than the Kappa coefficient (0.760) of CART. It is therefore concluded that in areas with complex terrain, such as mountainous regions, QUEST decision tree classification of TM images can improve the accuracy of land use classification. This type of decision tree classification can precisely derive new classification rules from integrated satellite images, land use thematic maps, DEM data and other field investigation materials. The method can also help users discover new classification rules in multidimensional information and build decision tree classifier models. Furthermore, approaches that integrate large volumes of high-resolution and hyperspectral image data, multi-sensor platforms, multi-temporal remote sensing images, pattern recognition and data mining of spectral and texture features, and auxiliary geographic data will become a trend.

  10. Bayesian Information Criterion Based Feature Filtering for the Fusion of Multiple Features in High-Spatial-Resolution Satellite Scene Classification

    Directory of Open Access Journals (Sweden)

    Da Lin

    2015-01-01

    Full Text Available This paper presents a novel classification method for high-spatial-resolution satellite scenes, introducing a Bayesian information criterion (BIC)-based feature filtering process to further eliminate opaque and redundant information between multiple features. Firstly, two diverse and complementary feature descriptors are extracted to characterize the satellite scene. Then, sparse canonical correlation analysis (SCCA) with a penalty function is employed to fuse the extracted feature descriptors and simultaneously remove the ambiguities and redundancies between them. After that, a two-phase BIC-based feature filtering process is designed to further filter out redundant information. In the first phase, a constraint is gradually imposed on the loadings via an iterative process, to prevent the sparse correlation from descending below a lower confidence limit of the approximated canonical correlation. In the second phase, the BIC is utilized to conduct the feature filtering, setting the smallest loading in absolute value to zero in each iteration for all features. Lastly, a support vector machine with a pyramid match kernel is applied to obtain the final result. Experimental results on high-spatial-resolution satellite scenes demonstrate that the suggested approach achieves satisfactory classification accuracy.

  11. High Mass Accuracy and High Mass Resolving Power FT-ICR Secondary Ion Mass Spectrometry for Biological Tissue Imaging

    CERN Document Server

    Smith, Donald F; Leach, Franklin E; Robinson, Errol W; Paša-Tolić, Ljiljana; Heeren, Ron M A

    2013-01-01

    Biological tissue imaging by secondary ion mass spectrometry has seen rapid development with the commercial availability of polyatomic primary ion sources. Endogenous lipids and other small bio-molecules can now be routinely mapped on the sub-micrometer scale. Such experiments are typically performed on time-of-flight mass spectrometers for high sensitivity and high repetition rate imaging. However, such mass analyzers lack the mass resolving power to ensure separation of isobaric ions and the mass accuracy for elemental formula assignment based on exact mass measurement. We have recently reported a secondary ion mass spectrometer with the combination of a C60 primary ion gun with a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS) for high mass resolving power, high mass measurement accuracy and tandem mass spectrometry capabilities. In this work, high specificity and high sensitivity secondary ion FT-ICR MS was applied to chemical imaging of biological tissue. An entire rat brain tissu...

  12. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    Energy Technology Data Exchange (ETDEWEB)

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The error mapping procedure was originally very complicated and did not make any assumptions about the rigidness of the machine as it moved; each of the possible error motions was measured independently at each point of the error map. A simpler mapping procedure was developed during the early 1980s which assumed rigid body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.

  13. RNA secondary structure modeling at consistent high accuracy using differential SHAPE.

    Science.gov (United States)

    Rice, Greggory M; Leonard, Christopher W; Weeks, Kevin M

    2014-06-01

    RNA secondary structure modeling is a challenging problem, and recent successes have raised the standards for accuracy, consistency, and tractability. Large increases in accuracy have been achieved by including data on reactivity toward chemical probes: Incorporation of 1M7 SHAPE reactivity data into an mfold-class algorithm results in median accuracies for base pair prediction that exceed 90%. However, a few RNA structures are modeled with significantly lower accuracy. Here, we show that incorporating differential reactivities from the NMIA and 1M6 reagents--which detect noncanonical and tertiary interactions--into prediction algorithms results in highly accurate secondary structure models for RNAs that were previously shown to be difficult to model. For these RNAs, 93% of accepted canonical base pairs were recovered in SHAPE-directed models. Discrepancies between accepted and modeled structures were small and appear to reflect genuine structural differences. Three-reagent SHAPE-directed modeling scales concisely to structurally complex RNAs to resolve the in-solution secondary structure analysis problem for many classes of RNA.
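
    The way reactivity data enter such predictions can be illustrated with the standard SHAPE pseudo-free-energy term, ΔG_SHAPE(i) = m·ln(reactivity_i + 1) + b, added for every nucleotide i that forms a base pair. The sketch below uses commonly cited default values for the slope and intercept as assumptions; in the three-reagent approach described above, the differential 1M6/NMIA reactivities would contribute an additional per-nucleotide term.

    import math

    def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
        """Per-nucleotide SHAPE pseudo-energy in kcal/mol (commonly used defaults)."""
        if reactivity is None or reactivity < -500:   # typical missing-data convention
            return 0.0
        return m * math.log(max(reactivity, 0.0) + 1.0) + b

    # Low reactivity gives a pairing bonus (negative), high reactivity a penalty.
    for r in [0.0, 0.2, 1.0, 2.5]:
        print("reactivity %.1f -> pseudo-energy %+.2f kcal/mol"
              % (r, shape_pseudo_energy(r)))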

  14. A fast and high accuracy numerical simulation algorithm of the polymer spherulite at the mesoscale Level

    Science.gov (United States)

    Liu, Yongzhi; Geng, Tie; Turng, Lih-Sheng (Tom); Liu, Chuntai; Cao, Wei; Shen, Changyu

    2017-09-01

    In the multiscale numerical simulation of polymer crystallization during the processing period, flow and temperature of the polymer melt are simulated on the macroscale level, while nucleation and growth of the spherulite are simulated at the mesoscale level. As a part of the multiscale simulation, the meso-simulation requires a fast solving speed because the meso-simulation software must be run several times in every macro-element at each macro-step. Meanwhile, the accuracy of the calculation results is also very important. It is known that the simulation geometry of crystallization includes planar (2D) and three-dimensional space (3D). The 3D calculations are more accurate but more expensive because of the long CPU time consumed. On the contrary, 2D calculations are always much faster but lower in accuracy. To reach the desirable speed and high accuracy at the same time, an algorithm is presented, in which the Delesse law coupled with the Monte Carlo method and pixel method are employed to simulate the nucleation, growth, and impingement of the polymer spherulite at the mesoscale level. Based on this algorithm, a software is developed with the Visual C++ language, and its numerical examples’ results prove that the solving speed of this algorithm is as fast as the 2D classical simulation and the calculation accuracy is at the same level as the 3D simulation.
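
    A toy pixel-level version of the mesoscale step is sketched below: spherulites nucleate at random sites (Monte Carlo) and grow at a constant radial rate on a 2-D pixel grid, and the relative crystallinity is read off as the covered area fraction (a Delesse-style area measure). The grid size, nucleation probability and growth rate are arbitrary placeholders, not the model of the paper.

    import numpy as np

    rng = np.random.default_rng(8)
    N, growth_rate, nuc_prob = 200, 1.0, 2e-6        # grid size, px/step, per-pixel nucleation prob.

    yy, xx = np.mgrid[0:N, 0:N]
    owner = -np.ones((N, N), dtype=int)              # -1 = still amorphous
    nuclei = []                                      # (x0, y0, birth_step)

    for step in range(120):
        # Monte Carlo nucleation in the remaining amorphous material.
        if rng.random() < nuc_prob * (owner < 0).sum():
            free = np.argwhere(owner < 0)
            y0, x0 = free[rng.integers(len(free))]
            nuclei.append((x0, y0, step))
        # Growth: each spherulite claims amorphous pixels within its current radius.
        for idx, (x0, y0, birth) in enumerate(nuclei):
            radius = growth_rate * (step - birth)
            inside = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
            owner[inside & (owner < 0)] = idx

    crystallinity = (owner >= 0).mean()
    print("spherulites: %d, relative crystallinity: %.2f" % (len(nuclei), crystallinity))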

  15. Concurrent validity and classification accuracy of the Leiter and Leiter-R in low-functioning children with autism.

    Science.gov (United States)

    Tsatsanis, Katherine D; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S; Klin, Ami; Volkmar, Fred R

    2003-02-01

    The concurrent validity of the Leiter International Performance Scale (Leiter) and Leiter International Performance Scale-Revised (Leiter-R) was examined in a sample of children with autism who could not be assessed with more traditional measures of intelligence (e.g., the Wechsler scales). The sample consisted of 26 children ranging in age from 4 to 16 years. The correlation between the Leiter scales was high (r = .87), and there was a difference of 3.7 points between the two mean scores, nonsignificant at both statistical and clinical levels. However, significant intraindividual discrepancies were present in 10 cases, 2 of which were both large (24 and 36 points) and clinically meaningful. The mean profile of performance on Leiter-R subtests is also presented for this sample of children with autism, to allow for comparison with other groups. Based on the results of this initial evaluation, together with the current normative data, good psychometric properties, and availability of global and subtest scores with the Leiter-R, the instrument is generally recommended for use with children with autism. However, because of changes in the design of the Leiter-R, there may be greater clinical success with the original Leiter for those children who are very low functioning and severely affected, particularly younger children.

  16. High Accuracy Gravitational Waveforms from Black Hole Binary Inspirals Using OpenCL

    CERN Document Server

    McKennon, Justin; Khanna, Gaurav

    2012-01-01

    There is a strong need for high-accuracy and efficient modeling of extreme-mass-ratio binary black hole systems because these are strong sources of gravitational waves that would be detected by future observatories. In this article, we present sample results from our Teukolsky EMRI code: a time-domain Teukolsky equation solver (a linear, hyperbolic, partial differential equation solver using finite-differencing), that takes advantage of several mathematical and computational enhancements to efficiently generate long-duration and high-accuracy EMRI waveforms. We emphasize here the computational advances made in the context of this code. Currently there is considerable interest in making use of many-core processor architectures, such as Nvidia and AMD graphics processing units (GPUs) for scientific computing. Our code uses the Open Computing Language (OpenCL) for taking advantage of the massive parallelism offered by modern GPU architectures. We present the performance of our Teukolsky EMRI code on multiple mod...

  17. Full hierarchic versus non-hierarchic classification approaches for mapping sealed surfaces at the rural-urban fringe using high-resolution satellite data.

    Science.gov (United States)

    De Roeck, Tim; Van de Voorde, Tim; Canters, Frank

    2009-01-01

    Since 2008 more than half of the world population is living in cities and urban sprawl is continuing. Because of these developments, the mapping and monitoring of urban environments and their surroundings is becoming increasingly important. In this study two object-oriented approaches for high-resolution mapping of sealed surfaces are compared: a standard non-hierarchic approach and a full hierarchic approach using both multi-layer perceptrons and decision trees as learning algorithms. Both methods outperform the standard nearest neighbour classifier, which is used as a benchmark scenario. For the multi-layer perceptron approach, applying a hierarchic classification strategy substantially increases the accuracy of the classification. For the decision tree approach a one-against-all hierarchic classification strategy does not lead to an improvement of classification accuracy compared to the standard all-against-all approach. Best results are obtained with the hierarchic multi-layer perceptron classification strategy, producing a kappa value of 0.77. A simple shadow reclassification procedure based on characteristics of neighbouring objects further increases the kappa value to 0.84.

  18. Classification of Ultra-High Resolution Orthophotos Combined with DSM Using a Dual Morphological Top Hat Profile

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2015-12-01

    Full Text Available New aerial sensors and platforms (e.g., unmanned aerial vehicles (UAVs)) are capable of providing ultra-high resolution remote sensing data (less than a 30-cm ground sampling distance (GSD)). This type of data is an important source for interpreting sub-building level objects; however, it has not yet been fully explored. The large scale differences of urban objects, the high spectral variability and the large perspective effect make the design of descriptive features difficult. Therefore, features representing the spatial information of the objects are essential for dealing with the spectral ambiguity. In this paper, we propose a dual morphological top-hat profile (DMTHP) using both morphological reconstruction and erosion with different granularities. Due to the high-dimensional feature space, we propose an adaptive scale selection procedure to reduce the feature dimension according to the training samples. The DMTHP is extracted from both images and Digital Surface Models (DSM) to obtain complementary information. A random forest classifier is used to classify the features hierarchically. Quantitative experiments on aerial images with 9-cm GSD and UAV images with 5-cm GSD were performed. In our experiments, improvements of 10% and 2% in overall accuracy are obtained in comparison with the well-known differential morphological profile (DMP) feature, and superior performance is observed over the other tested features. Large-format data with 20,000 × 20,000 pixels were used for a qualitative experiment with the proposed method, which shows its promising potential. The experiments also demonstrate that the DSM information greatly enhances the classification accuracy. In the best case in our experiment, it raises the classification accuracy from 63.93% (spectral information only) to 94.48% (the proposed method).
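
    A morphological top-hat profile in the spirit of the DMTHP can be sketched with scikit-image as below: for each structuring-element size, a bright top-hat by reconstruction and a dark top-hat by reconstruction are computed and stacked as per-pixel features, to be extracted from both the image and the DSM. The scales and the use of a bundled test image are illustrative; the exact DMTHP construction and the adaptive scale selection of the paper are not reproduced here.

    import numpy as np
    from skimage import data
    from skimage.morphology import disk, erosion, dilation, reconstruction

    image = data.camera().astype(float)      # stand-in for one image band or a DSM layer
    scales = [3, 7, 15]                      # structuring element radii (pixels)

    features = []
    for r in scales:
        se = disk(r)
        # Opening by reconstruction: erode, then reconstruct under the original image.
        open_rec = reconstruction(erosion(image, se), image, method="dilation")
        # Closing by reconstruction: dilate, then reconstruct above the original image.
        close_rec = reconstruction(dilation(image, se), image, method="erosion")
        features.append(image - open_rec)    # bright top-hat: small bright structures
        features.append(close_rec - image)   # dark top-hat: small dark structures

    profile = np.stack(features, axis=-1)    # H x W x (2 * number of scales)
    print("top-hat profile shape:", profile.shape)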

  19. Cavity ring-down technique for measurement of reflectivity of high reflectivity mirrors with high accuracy

    Indian Academy of Sciences (India)

    G Sridhar; Sandeep K Agarwalla; Sunita Singh; L M Gantayet

    2010-12-01

    A simple, accurate and reliable method for measuring the reflectivity of laser-grade mirrors (> 99.5%) based on the cavity ring-down (CRD) technique has been successfully demonstrated in our laboratory using a pulsed Nd:YAG laser. A fast photomultiplier tube with an oscilloscope was used to detect and analyse the CRD signal. The cavity decay times were measured for three cavities formed by a combination of three mirror pairs. The absolute reflectivities R1, R2 and R3 were determined to be 99.94%, 99.63% and 99.52% at normal incidence. The reflectivity of the mirrors is measured to an accuracy of 0.01%.
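
    The three-mirror-pair arrangement can be turned into individual reflectivities with a small linear system. For a two-mirror cavity of length L, the decay follows I(t) ∝ exp(−t/τ) with τ_ij ≈ −2L/(c·ln(R_i·R_j)) when other losses are neglected, so each measured ring-down time gives one equation ln R_i + ln R_j = −2L/(c·τ_ij). The decay times and cavity length in the sketch below are made-up values used only for illustration.

    import numpy as np

    c = 299_792_458.0      # speed of light (m/s)
    L = 0.5                # cavity length (m), assumed

    # Hypothetical ring-down times for mirror pairs (1,2), (1,3) and (2,3).
    tau = {"12": 2.55e-6, "13": 1.95e-6, "23": 1.75e-6}   # seconds

    # ln(Ri) + ln(Rj) = -2L / (c * tau_ij) for each pair.
    b = np.array([-2.0 * L / (c * tau[k]) for k in ("12", "13", "23")])
    A = np.array([[1, 1, 0],    # pair (1,2)
                  [1, 0, 1],    # pair (1,3)
                  [0, 1, 1]])   # pair (2,3)

    lnR = np.linalg.solve(A, b)
    for i, r in enumerate(np.exp(lnR), start=1):
        print("R%d = %.4f%%" % (i, 100.0 * r))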

  20. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    Directory of Open Access Journals (Sweden)

    Guanwu Zhou

    2014-07-01

    Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system’s performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13%FS, 0.15%FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor.
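
    A minimal extreme learning machine (ELM) regression sketch for this kind of compensation is shown below: a random hidden layer plus an analytically solved least-squares output layer maps (raw sensor output, temperature) to corrected pressure. The synthetic drift model, input scaling and network size are illustrative assumptions, not the calibration data or parameters of the paper.

    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic calibration data: true pressure plus a nonlinear thermal drift.
    n = 2000
    pressure = rng.uniform(0.0, 10.0, n)                 # MPa
    temp = rng.uniform(-40.0, 85.0, n)                   # deg C
    raw = pressure * (1.0 + 0.002 * temp) + 0.05 * np.sin(0.1 * temp)

    X = np.column_stack([raw, temp])
    y = pressure

    def elm_fit(X, y, n_hidden=50):
        """Random input weights + analytic least-squares output weights."""
        W = rng.normal(size=(X.shape[1], n_hidden))
        bias = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + bias)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, bias, beta

    def elm_predict(X, W, bias, beta):
        return np.tanh(X @ W + bias) @ beta

    # Scale inputs (using training statistics) so tanh does not saturate.
    mu, sigma = X[:1500].mean(axis=0), X[:1500].std(axis=0)
    Xs = (X - mu) / sigma
    W, bias, beta = elm_fit(Xs[:1500], y[:1500])
    pred = elm_predict(Xs[1500:], W, bias, beta)
    print("RMS compensation error: %.4f MPa" % np.sqrt(np.mean((pred - y[1500:]) ** 2)))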

  1. High Mass Accuracy and High Mass Resolving Power FT-ICR Secondary Ion Mass Spectrometry for Biological Tissue Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Donald F.; Kiss, Andras; Leach, Franklin E.; Robinson, Errol W.; Pasa-Tolic, Ljiljana; Heeren, Ronald M.

    2013-07-01

    Biological tissue imaging by secondary ion mass spectrometry has seen rapid development with the commercial availability of polyatomic primary ion sources. Endogenous lipids and other small bio-molecules can now be routinely mapped on the micrometer scale. Such experiments are typically performed on time-of-flight mass spectrometers for high sensitivity and high repetition rate imaging. However, such mass analyzers lack the mass resolving power to ensure separation of isobaric ions and the mass accuracy for exact mass elemental formula assignment. We have recently reported a secondary ion mass spectrometer with the combination of a C60 primary ion gun with a Fourier transform ion cyclotron resonance mass spectrometer (FT-ICR MS) for high mass resolving power, high mass measurement accuracy and tandem mass spectrometry capabilities. In this work, high specificity and high sensitivity secondary ion FT-ICR MS was applied to chemical imaging of biological tissue. An entire rat brain tissue was measured with 150 μm spatial resolution (75 μm primary ion spot size) with mass resolving power (m/Δm50%) of 67,500 (at m/z 750) and root-mean-square measurement accuracy less than two parts-per-million for intact phospholipids, small molecules and fragments. For the first time, ultra-high mass resolving power SIMS has been demonstrated, with m/Δm50% > 3,000,000. Higher spatial resolution capabilities of the platform were tested at a spatial resolution of 20 μm. The results represent order of magnitude improvements in mass resolving power and mass measurement accuracy for SIMS imaging and the promise of the platform for ultra-high mass resolving power and high spatial resolution imaging.

  2. Accuracy of GPS devices for measuring high-intensity running in field-based team sports.

    Science.gov (United States)

    Rampinini, E; Alberti, G; Fiorenza, M; Riggio, M; Sassi, R; Borges, T O; Coutts, A J

    2015-01-01

    We compared the accuracy of 2 GPS systems with different sampling rates for the determination of distances covered at high-speed and metabolic power derived from a combination of running speed and acceleration. 8 participants performed 56 bouts of shuttle intermittent running wearing 2 portable GPS devices (SPI-Pro, GPS-5 Hz and MinimaxX, GPS-10 Hz). The GPS systems were compared with a radar system as a criterion measure. The variables investigated were: total distance (TD), high-speed distance (HSR>4.17 m·s(-1)), very high-speed distance (VHSR>5.56 m·s(-1)), mean power (Pmean), high metabolic power (HMP>20 W·kg(-1)) and very high metabolic power (VHMP>25 W·kg(-1)). GPS-5 Hz had low error for TD (2.8%) and Pmean (4.5%), while the errors for the other variables ranged from moderate to high (7.5-23.2%). GPS-10 Hz demonstrated a low error for TD (1.9%), HSR (4.7%), Pmean (2.4%) and HMP (4.5%), whereas the errors for VHSR (10.5%) and VHMP (6.2%) were moderate. In general, GPS accuracy increased with a higher sampling rate, but decreased with increasing speed of movement. Both systems could be used for calculating TD and Pmean, but they cannot be used interchangeably. Only GPS-10 Hz demonstrated a sufficient level of accuracy for quantifying distance covered at higher speeds or time spent at very high power. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Making high-accuracy null depth measurements for the LBTI exozodi survey

    Science.gov (United States)

    Mennesson, Bertrand; Defrère, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William; Kennedy, Grant M.; Marion, Lindsay; Roberge, Aki; Serabyn, Eugene; Skemer, Andy J.; Stapelfeldt, Karl; Weinberger, Alycia J.; Wyatt, Mark

    2016-08-01

    The characterization of exozodiacal light emission is important both for the understanding of planetary system evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star, for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1 σ measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation ("null") levels, i.e., visibilities at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

  4. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
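
    For context, the sketch below shows the kind of optical-system parameters such a calibration estimates: a simple pinhole model that maps a star centroid on the detector to a unit line-of-sight vector in the tracker frame. The focal length, principal point and pixel pitch values are illustrative assumptions, not numbers from the paper.

        # Pinhole-model sketch with assumed parameters (f in mm, pixel pitch in mm).
        import numpy as np

        def star_direction(u, v, f=50.0, u0=512.0, v0=512.0, pixel=0.015):
            """Unit line-of-sight vector for a star centroid (u, v) in pixels."""
            x = (u - u0) * pixel
            y = (v - v0) * pixel
            w = np.array([x, y, f])
            return w / np.linalg.norm(w)

        print(star_direction(600.0, 480.0))

    Errors in f, (u0, v0) or the distortion terms propagate directly into these direction vectors, which is why the parameter error analysis and the calibration model matter for attitude accuracy.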

  5. High Accuracy Attitude Control System Design for Satellite with Flexible Appendages

    Directory of Open Access Journals (Sweden)

    Wenya Zhou

    2014-01-01

    Full Text Available In order to realize high accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When the low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control system through the measurement device and degrades the accuracy or even the stability. To reduce this impact, the structural filter is designed to reject the vibration of the flexible appendages. Considering the potential in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
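
    The notch-filter idea can be illustrated in a few lines of code: a second-order notch tuned to the identified appendage frequency attenuates the vibration component of the attitude measurement before it reaches the controller. The sample rate, mode frequency and Q factor below are illustrative assumptions, not the paper's design values.

        # Sketch of a structural (notch) filter with assumed parameters.
        import numpy as np
        from scipy.signal import iirnotch, lfilter

        fs = 100.0      # controller sample rate, Hz (assumed)
        f_flex = 1.2    # identified appendage mode, Hz (assumed)
        b, a = iirnotch(w0=f_flex, Q=10.0, fs=fs)

        t = np.arange(0.0, 10.0, 1.0 / fs)
        meas = 0.01 * np.sin(2 * np.pi * 0.05 * t) \
             + 0.002 * np.sin(2 * np.pi * f_flex * t)   # slow attitude + vibration
        filtered = lfilter(b, a, meas)                   # vibration attenuated

    In the adaptive version described above, the in-orbit identification step would update f_flex and the filter coefficients b, a would simply be recomputed.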

  6. High-accuracy determination of the neutron flux at n_TOF

    Energy Technology Data Exchange (ETDEWEB)

    Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. [Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. 
[Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. [Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)

    2013-12-15

    The neutron flux of the n_TOF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n_TOF neutron beam. This is a pre-requisite for the high accuracy of cross section measurements at n_TOF. An unexpected anomaly in the neutron-induced fission cross section of ²³⁵U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)

  7. Navigation Facility for High Accuracy Offline Trajectory and Attitude Estimation in Airborne Applications

    Directory of Open Access Journals (Sweden)

    A. Renga

    2013-01-01

    Full Text Available The paper focuses on a navigation facility, relying on commercial-off-the-shelf (COTS) technology, developed to generate high-accuracy attitude and trajectory measurements in postprocessing. Target performance is cm-level positioning with tenth of degree attitude accuracy. The facility is based on the concept of GPS-aided inertial navigation but comprises carrier-phase differential GPS (CDGPS) processing and attitude estimation based on multiantenna GPS configurations. Expected applications of the system include: (a) performance assessment of integrated navigation systems, developed for general aviation aircraft and medium size unmanned aircraft systems (UAS); (b) generation of reference measurements to evaluate the flight performance of airborne sensors (e.g., radar or laser); and (c) generation of reference trajectory and attitude for improving imaging quality of airborne remote sensing data. The paper describes system architecture, selected algorithms for data processing and integration, and theoretical performance evaluation. Experimental results are also presented confirming the effectiveness of the implemented approach.

  8. High-Performance Neural Networks for Visual Object Classification

    CERN Document Server

    Cireşan, Dan C; Masci, Jonathan; Gambardella, Luca M; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.
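
    As a purely illustrative sketch (not the authors' GPU implementation), a small convolutional network of the same general flavour can be written compactly with PyTorch; the layer sizes below are assumptions chosen for 32×32 RGB inputs such as CIFAR-10.

        # Minimal CNN sketch trained by plain back-propagation (illustrative only).
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 10),                      # 10 output classes
        )

        x = torch.randn(4, 3, 32, 32)                # dummy batch of images
        logits = model(x)                            # shape: (4, 10)
        loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
        loss.backward()                              # simple back-propagation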

  9. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.

  10. Classification of High Blood Pressure Persons Vs Normal Blood Pressure Persons Using Voice Analysis

    Directory of Open Access Journals (Sweden)

    Saloni

    2013-11-01

    Full Text Available The human voice is remarkable, complex and delicate. All parts of the body play some role in voice production and may be responsible for voice dysfunction. The larynx contains muscles that are surrounded by blood vessels connected to the circulatory system. The pressure of blood in these vessels should be related to the dynamic variation of vocal cord parameters. These parameters are directly related to the acoustic properties of speech. Acoustic voice analysis can be used to characterize pathological voices. This paper presents the classification of high blood pressure and normal persons with the aid of voice signals recorded from the patients. Various features have been extracted from the voice signals of healthy persons and persons suffering from high blood pressure. Simulation results show differences in the parameter values of healthy and pathological persons. Then an optimum feature vector is prepared and the k-means classification algorithm is implemented for data classification. A classification accuracy of 79% was obtained.
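
    A hedged sketch of the classification step described above: acoustic feature vectors are clustered with k-means (k = 2) and the cluster labels are compared against the known groups. The feature names and values are illustrative assumptions, not data from the study.

        # k-means on assumed acoustic features (e.g. jitter, shimmer, mean pitch).
        import numpy as np
        from sklearn.cluster import KMeans

        features = np.array([
            [0.004, 0.021, 140.0],   # healthy (made-up values)
            [0.005, 0.024, 150.0],   # healthy
            [0.011, 0.048, 170.0],   # high blood pressure
            [0.012, 0.051, 165.0],   # high blood pressure
        ])
        labels_true = np.array([0, 0, 1, 1])

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
        acc = max(np.mean(km.labels_ == labels_true),
                  np.mean(km.labels_ == 1 - labels_true))   # resolve label swap
        print(f"classification accuracy: {acc:.0%}")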

  11. High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry

    Directory of Open Access Journals (Sweden)

    Guy Jean-Pierre Schumann

    2016-01-01

    Full Text Available Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet the basic requirements and for many services and modeling studies are not of high enough spatial resolution and lack accuracy in the vertical. On the other hand, LiDAR-DEMs are of very high spatial resolution and great vertical accuracy but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km and also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be to have a DEM technology that allows larger spatial coverage than LiDAR but without compromising resolution and vertical accuracy and still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single pass In-SAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m for an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass In-SAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis to floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion on requirements, advantages and caveats in terms of instrument and data processing.

  12. High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry

    Science.gov (United States)

    Schumann, Guy; Moller, Delwyn; Mentgen, Felix

    2015-12-01

    Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet the basic requirements and for many services and modeling studies are not of high enough spatial resolution and lack accuracy in the vertical. On the other hand, LiDAR-DEMs are of very high spatial resolution and great vertical accuracy but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km and also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be to have a DEM technology that allows larger spatial coverage than LiDAR but without compromising resolution and vertical accuracy and still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single pass In-SAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m for an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass In-SAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis to floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion on requirements, advantages and caveats in terms of instrument and data processing.

  13. SNP-based non-invasive prenatal testing detects sex chromosome aneuploidies with high accuracy

    Science.gov (United States)

    Samango-Sprouse, Carole; Banjevic, Milena; Ryan, Allison; Sigurjonsson, Styrmir; Zimmermann, Bernhard; Hill, Matthew; Hall, Megan P.; Westemeyer, Margaret; Saucier, Jennifer; Demko, Zachary; Rabinowitz, Matthew

    2013-01-01

    Objective To develop a single nucleotide polymorphism- and informatics-based non-invasive prenatal test that detects sex chromosome aneuploidies early in pregnancy. Methods Fifteen aneuploid samples, including thirteen 45,X, two 47,XXY, and one 47,XYY, along with 185 euploid controls, were analyzed. Cell-free DNA was isolated from maternal plasma, amplified in a single multiplex PCR assay that targeted 19,488 polymorphic loci covering chromosomes 13, 18, 21, X, and Y, and sequenced. Sequencing results were analyzed using a Bayesian-based maximum likelihood statistical method to determine copy number of interrogated chromosomes, calculating sample-specific accuracies. Results Of the samples that passed a stringent quality control metric (93%), the algorithm correctly identified copy number at all five chromosomes in all 187 samples, for 934/935 correct calls as early as 9.4 weeks of gestation. We detected 45,X with 91.7% sensitivity (CI: 61.5-99.8%) and 100% specificity (CI: 97.9-100%), and 47,XXY and 47,XYY. The average calculated accuracy was 99.78%. Conclusion This method non-invasively detected 45,X, 47,XXY, and 47,XYY fetuses from cfDNA isolated from maternal plasma with high calculated accuracies, and thus offers a non-invasive method with the potential to function as a routine screen allowing for early prenatal detection of rarely diagnosed yet commonly occurring sex aneuploidies. PMID:23712453

  14. High-accuracy defect sizing for nozzle attachment welds using asymmetric TOFD

    Energy Technology Data Exchange (ETDEWEB)

    Bloodworth, T. [AEA Technology, Risley (United Kingdom)

    1999-09-01

    Inspection procedures for the detection, characterisation and high-accuracy sizing of defects in nozzle attachment welds in a Swedish BWR have been developed. These welds are set-on nozzle-to-pipe attachment welds between the main recirculation pipe and related piping systems. The nozzles and the main recirculation pipe are made of ferritic steel with austenitic stainless steel cladding on the inner surface. The overall wall thickness of the nozzle is 30 mm. The inspection uses an automated pulse-echo technique for the detection and length sizing of defects. Software for the display of complex geometry ultrasonic data is used to assist in data analysis. An unorthodox automated ultrasonic TOFD technique is used to measure the through-wall height of defects. This technique deploys probes on both the nozzle and main pipe surfaces. The TOFD data for this complex geometry are analysed using the CGTOFD software, to locate the origin of defect edge signals. The Qualification detection criterion for this inspection is the detection of defects 6 mm × 18 mm (height × length) or greater. The required length measurement accuracy is ±14 mm and the required through-wall height measurement accuracy is ±2.3 mm. This last requirement is very demanding. The inspection procedures for detection and sizing passed Procedure Qualification when measured against the above criteria on an 'open' test specimen. Data collection and analysis personnel have subsequently passed Personnel Qualification using 'blind' specimens. (Author)

  15. Uncertainty and target accuracy studies for the very high temperature reactor(VHTR) physics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Taiwo, T. A.; Palmiotti, G.; Aliberti, G.; Salvatores, M.; Kim, T.K.

    2005-09-16

    The potential impact of nuclear data uncertainties on a number of performance parameters (core and fuel cycle) of the prismatic block-type Very High Temperature Reactor (VHTR) has been evaluated and results are presented in this report. An uncertainty analysis has been performed, based on sensitivity theory, which underlines which cross-sections, energy ranges and isotopes are responsible for the most significant uncertainties. In order to give guidelines on priorities for new evaluations or validation experiments, required accuracies on specific nuclear data have been derived, accounting for target accuracies on major design parameters. Results of an extensive analysis indicate that only a limited number of relevant parameters do not meet the target accuracies assumed in this work; this does not imply that the existing nuclear cross-section data cannot be used for the feasibility and pre-conceptual assessments of the VHTR. However, the results obtained depend on the uncertainty data used, and it is suggested to focus some future evaluation work on the production of consistent, as far as possible complete and user-oriented covariance data.

  16. High accuracy measurements of magnetic field integrals for the european XFEL undulator systems

    Science.gov (United States)

    Wolff-Fabris, Frederik; Viehweger, Marc; Li, Yuhui; Pflüger, Joachim

    2016-10-01

    Two high accuracy moving wire (MW) measurement systems based on the stretched wire technique were built for the European XFEL (XFEL.EU). They were dedicated to monitor, tune and improve the magnetic field integral properties during the serial production of the undulator segments, phase shifters and air coil correctors for XFEL.EU. For the magnetic tuning of phase shifters and the calibration of the air coil correctors, a short portable MW measurement bench was built to measure first field integrals in short devices with magnetic length of less than about 300 mm and with an ultimate accuracy much better than 1 G cm (0.001 T mm). A long MW measurement setup was dedicated to obtain the total first and second field integrals on the 5-meter-long undulator segments with accuracy of about 4 G cm (0.004 T mm) and 2000 G cm2 (20 T mm2) for the 1st and 2nd field integrals, respectively. Using these data, a method was developed to compute the proper corrections for the air coil correctors used at both extremities so that zero first and second field integrals for an undulator segment are obtained. It is demonstrated that charging the air coil correctors with these corrections results in a near-zero effect on the electron trajectory in the undulator systems and consequently no negative impact on the self-amplified spontaneous emission (SASE) process should occur.
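
    As a worked numerical illustration of the two quantities these benches measure, the sketch below integrates an assumed residual field profile B(z) along the device axis to obtain the first field integral I1 = ∫ B dz and the second field integral I2 = ∫∫ B dz′ dz (1 G·cm = 10−6 T·m, 1 G·cm² = 10−8 T·m²). The field profile is made up for illustration only.

        # Illustrative field-integral calculation; B(z) is an assumed error profile.
        import numpy as np

        z = np.linspace(0.0, 5.0, 5001)                 # m, along a 5 m segment
        B = 1e-4 * np.exp(-((z - 2.5) / 0.5) ** 2)      # T, made-up residual field
        dz = z[1] - z[0]

        I1 = np.trapz(B, z)                             # first integral, T*m
        cumI = np.concatenate(([0.0], np.cumsum((B[1:] + B[:-1]) / 2.0 * dz)))
        I2 = np.trapz(cumI, z)                          # second integral, T*m^2

        print(f"I1 = {I1 * 1e6:.1f} G*cm")
        print(f"I2 = {I2 * 1e8:.0f} G*cm^2")

    The air-coil corrections described above amount to finding the two end-kick values that drive both I1 and I2 of the combined device to zero.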

  17. CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY

    DEFF Research Database (Denmark)

    Elenius Madsen, Daniel

    2012-01-01

    Title: CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY Objectives: Orthognathic surgery involves surgical manipulation of jaw and face skeletal structure. A subgroup of patients undergoing orthognathic surgery suffers from excessive...... intraoperative blood loss. Classification of patients according to their bleeding risk will improve the surgical procedure with regard to staff composition, blood transfusion and patient safety. Thrombelastography is a global coagulation assay measuring the viscoelastic properties of whole blood samples, taking...

  18. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    Science.gov (United States)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone based impression-taking of prepared teeth followed by plaster casting is well-established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft-tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has been recently investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasive exposure of sub-gingival preparations. Nevertheless, spatial resolution as well as digitization accuracy of an ultrasound based micro-scanning system remains a critical parameter because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single element transducer were separated and corresponding 3D surface models were calculated for both fundamentals and 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out for evaluation of spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound based tooth digitization can be an alternative for optical impression-taking.

  19. A High-Performance Operational Amplifier for High-Speed High-Accuracy Switch-Capacitor Cells

    Institute of Scientific and Technical Information of China (English)

    Qi Fan; Ning Ning; Qi Yu; Da Chen

    2007-01-01

    A high-speed high-accuracy fully differential operational amplifier (op-amp) is realized based on a no-Miller-capacitor feedforward (NMCF) compensation scheme. In order to achieve a good phase margin, the NMCF compensation scheme uses the positive phase shift of the left-half-plane (LHP) zero caused by the feedforward path to counteract the negative phase shift of the non-dominant pole. Compared to the traditional Miller compensation method, the op-amp obtains high gain and wide bandwidth simultaneously without the pole-splitting effect, while saving significant chip area due to the absence of the Miller capacitor. Simulated in a 0.35 μm CMOS RF technology, the results show that the open-loop gain of the op-amp is 118 dB with a unity gain-bandwidth (UGBW) of 1 GHz, and the phase margin is 61° while the settling time is 5.8 ns when achieving 0.01% accuracy. The op-amp is especially suitable for the front-end sample/hold (S/H) cell and the multiplying D/A converter (MDAC) module of high-speed high-resolution pipelined A/D converters (ADCs).

  20. Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, Keith H.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-08-03

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI), have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux, although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  1. Expression of CRM1 and CDK5 shows high prognostic accuracy for gastric cancer

    Science.gov (United States)

    Sun, Yu-Qin; Xie, Jian-Wei; Xie, Hong-Teng; Chen, Peng-Chen; Zhang, Xiu-Li; Zheng, Chao-Hui; Li, Ping; Wang, Jia-Bin; Lin, Jian-Xian; Cao, Long-Long; Huang, Chang-Ming; Lin, Yao

    2017-01-01

    AIM To evaluate the predictive value of the expression of chromosomal maintenance (CRM)1 and cyclin-dependent kinase (CDK)5 in gastric cancer (GC) patients after gastrectomy. METHODS A total of 240 GC patients who received standard gastrectomy were enrolled in the study. The expression level of CRM1 and CDK5 was detected by immunohistochemistry. The correlations between CRM1 and CDK5 expression and clinicopathological factors were explored. Univariate and multivariate survival analyses were used to identify prognostic factors for GC. Receiver operating characteristic analysis was used to compare the accuracy of the prediction of clinical outcome by the parameters. RESULTS The expression of CRM1 was significantly related to size of primary tumor (P = 0.005), Borrmann type (P = 0.006), degree of differentiation (P = 0.004), depth of invasion (P = 0.008), lymph node metastasis (P = 0.013), TNM stage (P = 0.002) and distant metastasis (P = 0.015). The expression of CDK5 was significantly related to sex (P = 0.048) and Lauren’s classification (P = 0.011). Multivariate Cox regression analysis identified that CRM1 and CDK5 co-expression status was an independent prognostic factor for overall survival (OS) of patients with GC. Integration of CRM1 and CDK5 expression could provide additional prognostic value for OS compared with CRM1 or CDK5 expression alone (P = 0.001). CONCLUSION CRM1 and CDK5 co-expression was an independent prognostic factor for GC. Combined CRM1 and CDK5 expression could provide a prognostic model for OS of GC. PMID:28373767

  2. High Accuracy, Two-Dimensional Read-Out in Multiwire Proportional Chambers

    Science.gov (United States)

    Charpak, G.; Sauli, F.

    1973-02-14

    In most applications of proportional chambers, especially in high-energy physics, separate chambers are used for measuring different coordinates. In general one coordinate is obtained by recording the pulses from the anode wires around which avalanches have grown. Several methods have been imagined for obtaining the position of an avalanche along a wire. In this article a method is proposed which leads to the same range of accuracies and may be preferred in some cases. The problem of accurate measurements for large-size chamber is also discussed.

  3. High-accuracy thickness measurement of a transparent plate with the heterodyne central fringe identification technique

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wang-Tsung; Hsieh, Hung-Chih; Chang, Wei-Yao; Chen, Yen-Liang; Su, Der-Chin

    2011-07-20

    In a modified Twyman-Green interferometer, the optical path variation is measured with the heterodyne central fringe identification technique, as the light beam is focused by a displaced microscopic objective on the front/rear surface of the test transparent plate. The optical path length variation is then measured similarly after the test plate is removed. The geometrical thickness of the test plate can be calculated under the consideration of dispersion effect. This method has a wide measurable range and a high accuracy in the measurable range.

  4. A small and high accuracy gyro stabilization electro-optical platform

    Science.gov (United States)

    Qiu, Haitao; Han, Yonggen; Lv, Yanhong

    2008-10-01

    A high accuracy line-of-sight (LOS) stabilization system based on digital control technology was designed. A current-feedback closed-loop system was introduced, which uses the CCD image and the resolver to form the position loop and the fiber-optic gyro to form the rate loop. In order to realize zero steady-state error of the angular output while counteracting disturbances from the carrier, a PII2 (proportional-integral-double-integral) control scheme is proposed. The hardware configuration and software system are presented. Experimental results show that the system has good dynamic and static performance and that the technical requirements are satisfied.
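
    A hedged sketch of the PII2 (proportional + integral + double-integral) control law mentioned above, written in discrete time; the gains, sample time and error value are illustrative assumptions rather than the system's actual design parameters.

        # Discrete PII^2 controller sketch; the double integrator drives the
        # steady-state error to zero even for ramp-like carrier disturbances.
        class PII2Controller:
            def __init__(self, kp, ki, kii, dt):
                self.kp, self.ki, self.kii, self.dt = kp, ki, kii, dt
                self.i1 = 0.0    # integral of the error
                self.i2 = 0.0    # integral of the integral

            def update(self, error):
                self.i1 += error * self.dt
                self.i2 += self.i1 * self.dt
                return self.kp * error + self.ki * self.i1 + self.kii * self.i2

        ctrl = PII2Controller(kp=80.0, ki=400.0, kii=600.0, dt=0.001)
        command = ctrl.update(error=0.002)   # LOS error (rad) -> drive command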

  5. High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts

    DEFF Research Database (Denmark)

    Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard

    2011-01-01

    Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...... and used to establish a reliable simulation methodology suitable for μIM parts. Various simulation set-up parameters have been considered in order to improve the simulation accuracy: injection speed profile, melt and mould temperatures, 3D mesh, material rheology, inertia effect and shrinkage...

  6. High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer

    CERN Multimedia

    2002-01-01

    The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nuclear synthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency $\nu_c = qB/(2\pi m)$ in the trap's magnetic field.

  7. An angle encoder for super-high resolution and super-high accuracy using SelfA

    Science.gov (United States)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-06-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2²¹ = 2 097 152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2³³, that is, corresponding to a 0.0015″ signal period after

  8. Enhanced Urban Landcover Classification for Operational Change Detection Study Using Very High Resolution Remote Sensing Data

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This study presents an operational case of advancements in urban land cover classification and change detection by using very high resolution spatial and multispectral information from a 4-band QuickBird (QB) and 8-band WorldView-2 (WV-2) image sequence. Our study accentuates a quantitative, pixel-based, image-difference approach for operational change detection using very high resolution pansharpened QB and WV-2 images captured over San Francisco city, California, USA (37° 44′ 30″ N, 122° 31′ 30″ W and 37° 41′ 30″ N, 122° 20′ 30″ W). In addition to the standard QB image, we compiled three multiband images from the eight pansharpened WV-2 bands: (1) a multiband image from the four traditional spectral bands, i.e., Blue, Green, Red and near-infrared 1 (NIR1) (henceforth referred to as "QB equivalent WV-2"), (2) a multiband image from the four new spectral bands, i.e., Coastal, Yellow, Red Edge and NIR2 (henceforth referred to as "new band WV-2"), and (3) a multiband image consisting of the four traditional and four new bands (henceforth referred to as "standard WV-2"). All four multiband images were classified using a support vector machine (SVM) classifier into the four most abundant land cover classes, viz., hard surface, vegetation, water and shadow. The assessment of classification accuracy was performed using a random selection of 356 test points. Land cover classifications on the "standard QB" image (kappa coefficient, κ = 0.93), "QB equivalent WV-2" image (κ = 0.97), and "new band WV-2" image (κ = 0.97) yielded overall accuracies of 96.31%, 98.03% and 98.31%, respectively, while the "standard WV-2" image (κ = 0.99) yielded an improved overall accuracy of 99.18%. It is concluded that the addition of the four new spectral bands to the existing four traditional bands improved the discrimination of land cover targets, due to the increase in the spectral characteristics of the WV-2 satellite. Consequently, to test the validity of improvement in classification process for implementation in operational change
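
    A hedged sketch of the classification and accuracy-assessment step: an SVM is trained on per-pixel band values for the four classes and scored on held-out test pixels with overall accuracy and the kappa coefficient. The random arrays below are placeholders for the pansharpened band stacks and reference points described in the record.

        # SVM classification with overall accuracy and kappa (placeholder data).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        rng = np.random.default_rng(0)
        X_train = rng.random((400, 8))        # 8 band values per training pixel
        y_train = rng.integers(0, 4, 400)     # hard surface/vegetation/water/shadow
        X_test = rng.random((356, 8))         # 356 random test points, as in the study
        y_test = rng.integers(0, 4, 356)

        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print("overall accuracy:", accuracy_score(y_test, y_pred))
        print("kappa coefficient:", cohen_kappa_score(y_test, y_pred))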

  9. High-accuracy current sensing circuit with current compensation technique for buck-boost converter

    Science.gov (United States)

    Rao, Yuan; Deng, Wan-Ling; Huang, Jun-Kai

    2015-03-01

    A novel on-chip current sensing circuit with current compensation technique suitable for buck-boost converter is presented in this article. The proposed technique can sense the full-range inductor current with high accuracy and high speed. It is mainly based on matched current mirror and does not require a large proportion of aspect ratio between the powerFET and the senseFET, thus it reduces the complexity of circuit design and the layout mismatch issue without decreasing the power efficiency. The circuit is fabricated with TSMC 0.25 µm 2P5M mixed-signal process. Simulation results show that the buck-boost converter can be operated at 200 kHz to 4 MHz switching frequency with an input voltage from 2.8 to 4.7 V. The output voltage is 3.6 V, and the maximum accuracy for both high and low side sensing current reaches 99% within the load current ranging from 200 to 600 mA.

  10. Simple high-accuracy resolution program for convective modelling of discontinuities

    Science.gov (United States)

    Leonard, B. P.

    1988-01-01

    For steady multidimensional convection, the Quadratic Upstream Interpolation for Convective Kinematics (QUICK) scheme has several attractive properties. However, for highly convective simulation of step profiles, QUICK produces unphysical overshoots and a few oscillations, and this may cause serious problems in nonlinear flows. Fortunately, it is possible to modify the convective flux by writing the normalized convected control-volume face value as a function of the normalized adjacent upstream node value, developing criteria for monotonic resolution without sacrificing formal accuracy. This results in a nonlinear functional relationship between the normalized variables, whereas standard methods are all linear in this sense. The resulting Simple High Accuracy Resolution Program (SHARP) can be applied to steady multidimensional flows containing thin shear or mixing layers, shock waves, and other frontal phenomena. This represents a significant advance in modeling highly convective flows of engineering and geophysical importance. SHARP is based on an explicit, conservative, control-volume flux formulation, equally applicable to one-, two-, or three-dimensional elliptic, parabolic, hyperbolic, or mixed-flow regimes. Results are given for a bench-mark purely convective step-profile test, comparing SHARP with first-order results and the nonmonotonic predictions of second- and third-order upwinding.
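
    The normalized-variable idea behind SHARP can be illustrated compactly: the QUICK face value is written as a function of the normalized adjacent-upstream node value and limited so that the scheme stays monotonic. The limiter below is a SMART-type clip used only to illustrate the concept; it is not Leonard's exact SHARP formulation.

        # 1-D pure convection of a step with a limited QUICK face value (CFL = 0.4).
        import numpy as np

        def face_value(phi_U, phi_C, phi_D):
            denom = phi_D - phi_U
            c_hat = 0.5 if abs(denom) < 1e-30 else (phi_C - phi_U) / denom
            if 0.0 < c_hat < 1.0:
                f_hat = min(3.0 * c_hat, 0.375 + 0.75 * c_hat, 1.0)  # limited QUICK
            else:
                f_hat = c_hat                                        # first-order upwind
            return phi_U + f_hat * (phi_D - phi_U)

        phi = np.where(np.arange(60) < 20, 1.0, 0.0)     # step profile
        for _ in range(50):
            faces = [face_value(phi[i - 1], phi[i], phi[i + 1])
                     for i in range(1, len(phi) - 1)]
            phi[2:-1] -= 0.4 * (np.array(faces[1:]) - np.array(faces[:-1]))

    The unlimited QUICK value 0.375 + 0.75*c_hat would overshoot near the step; the clip keeps the face value inside the monotonic region of the normalized-variable diagram.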

  11. High-accuracy optimal finite-thrust trajectories for Moon escape

    Science.gov (United States)

    Shen, Hong-Xin; Casalino, Lorenzo

    2017-02-01

    The optimization problem of fuel-optimal trajectories from a low circular Moon orbit to a target hyperbolic excess velocity vector using finite-thrust propulsion is solved. The ability to obtain the most accurate satisfaction of necessary optimality conditions in a high-accuracy dynamic model is the main motivation of the current study. The solutions allow attaining anytime-return Earth-interface conditions from a low lunar orbit. Gravitational effects of the Sun, Earth, and Moon are included throughout the entire trajectory. Severe constraints on the fuel budget combined with high-accuracy demands on the endpoint conditions necessitate a high-fidelity solution to the trajectory optimization problem and JPL DE405 ephemeris model is used to determine the perturbing bodies' positions. The optimization problem is solved using an indirect method. The optimality of the solution is verified by an application of Pontryagin's maximum principle. More accurate and fuel-efficient trajectories are found for the same mission objectives and constraints published in other research, emphasizing the advantages of this technique. It is also shown that the thrust structure consists of three finite burns. In contrast to previous research, no singular arc is required in the optimal solutions, and all the controls appear bang-bang.

  12. Emergency positioning system accuracy with infrared LEDs in high-security facilities

    Science.gov (United States)

    Knoch, Sierra N.; Nelson, Charles; Walker, Owens

    2017-05-01

    Instantaneous personnel location presents a challenge in Department of Defense applications where high levels of security restrict real-time tracking of crew members. During emergency situations, command and control requires immediate accountability of all personnel. Current radio frequency (RF) based indoor positioning systems can be unsuitable due to RF leakage and electromagnetic interference with sensitively calibrated machinery on variable platforms like ships, submarines and high-security facilities. Infrared light provides a possible solution to this problem. This paper proposes and evaluates an indoor line-of-sight positioning system comprising IR LEDs and high-sensitivity CMOS camera receivers. In this system the movement of the LEDs is captured by the camera, uploaded and analyzed; the point of highest power is located and plotted to create a blueprint of crewmember location. The results presented evaluate accuracy as a function of both wavelength and environmental conditions. Further research will evaluate the accuracy of the LED transmitter and CMOS camera receiver system. Transmissions at both 780 and 850 nm in the IR are analyzed.

  13. Machine Learning Approaches to Classification of Seafloor Features from High Resolution Sonar Data

    Science.gov (United States)

    Smith, D. G.; Ed, L.; Sofge, D.; Elmore, P. A.; Petry, F.

    2014-12-01

    Navigation charts provide topographic maps of the seafloor created from swaths of sonar data. Converting sonar data to a topographic map is a manual, labor-intensive process that can be greatly assisted by contextual information obtained from automated classification of geomorphological structures. Finding structures such as seamounts can be challenging, as there are no established rules that can be used for decision-making; often, the determination is made by human expertise. A variety of feature metrics may be useful for this task, and we use a large number of metrics relevant to the task of finding seamounts. We demonstrate the ability to locate seamounts using two related machine learning techniques. As well as achieving good classification accuracy, we discuss the human-understandable set of metrics that are most important for the results.

  14. High accuracy genotyping directly from genomic DNA using a rolling circle amplification based assay

    Directory of Open Access Journals (Sweden)

    Du Yuefen

    2003-05-01

    Full Text Available Abstract Background Rolling circle amplification of ligated probes is a simple and sensitive means for genotyping directly from genomic DNA. SNPs and mutations are interrogated with open circle probes (OCP) that can be circularized by DNA ligase when the probe matches the genotype. An amplified detection signal is generated by exponential rolling circle amplification (ERCA) of the circularized probe. The low cost and scalability of ligation/ERCA genotyping makes it ideally suited for automated, high throughput methods. Results A retrospective study using human genomic DNA samples of known genotype was performed for four different clinically relevant mutations: Factor V Leiden, Factor II prothrombin, and two hemochromatosis mutations, C282Y and H63D. Greater than 99% accuracy was obtained genotyping genomic DNA samples from hundreds of different individuals. The combined process of ligation/ERCA was performed in a single tube and produced fluorescent signal directly from genomic DNA in less than an hour. In each assay, the probes for both normal and mutant alleles were combined in a single reaction. Multiple ERCA primers combined with a quenched-peptide nucleic acid (Q-PNA) fluorescent detection system greatly accelerated the appearance of signal. Probes designed with hairpin structures reduced misamplification. Genotyping accuracy was identical from either purified genomic DNA or genomic DNA generated using whole genome amplification (WGA). Fluorescent signal output was measured in real time and as an end point. Conclusions Combining the optimal elements for ligation/ERCA genotyping has resulted in a highly accurate single tube assay for genotyping directly from genomic DNA samples. Accuracy exceeded 99% for four probe sets targeting clinically relevant mutations. No genotypes were called incorrectly using either genomic DNA or whole genome amplified sample.

  15. Discovery and validation of urine markers of acute pediatric appendicitis using high accuracy mass spectrometry

    Science.gov (United States)

    Kentsis, Alex; Lin, Yin Yin; Kurek, Kyle; Calicchio, Monica; Wang, Yan Yan; Monigatti, Flavio; Campagne, Fabien; Lee, Richard; Horwitz, Bruce; Steen, Hanno; Bachur, Richard

    2015-01-01

    Study Objective Molecular definition of disease has been changing all aspects of medical practice, from diagnosis and screening to understanding and treatment. Acute appendicitis is among many human conditions that are complicated by the heterogeneity of clinical presentation and shortage of diagnostic markers. Here, we sought to profile the urine of patients with appendicitis with the goal of identifying new diagnostic markers. Methods Candidate markers were identified from the urine of children with histologically proven appendicitis by using high accuracy mass spectrometry proteome profiling. These systemic and local markers were used to assess the probability of appendicitis in a blinded, prospective study of children being evaluated for acute abdominal pain in our emergency department. Tests of performance of the markers were evaluated against the pathologic diagnosis and histologic grade of appendicitis. Results Test performance of 57 identified candidate markers was studied in 67 patients, with a median age of 11 years, 37% of whom had appendicitis. Several exhibited favorable diagnostic performance, including calgranulin A (S100-A8), α-1-acid glycoprotein 1 (orosomucoid), and leucine-rich α-2-glycoprotein (LRG), with ROC AUC values of 0.84 (95% CI 0.72-0.95), 0.84 (0.72-0.95), and 0.97 (0.93-1.0), respectively. LRG was enriched in diseased appendices and its abundance correlated with severity of appendicitis. Conclusions High accuracy mass spectrometry urine proteome profiling allowed identification of diagnostic markers of acute appendicitis. Usage of LRG and other identified biomarkers may improve the diagnostic accuracy of clinical evaluations of appendicitis. PMID:19556024

  16. SpaceNav - A high accuracy navigation system for space applications

    Science.gov (United States)

    Evers, H.-H.

    The technology of the SpaceNav-system is based on research performed by the Institute of Flight Guidance and Control at the Technical University of Braunschweig, Germany. In 1989 this institute gave the world's first public demonstration of a fully automatic landing of an aircraft, using inertial and satellite information exclusively. The SpaceNav device components are: Acceleration-/Gyro Sensor Package; Global Positioning System (GPS) Receiver/optional more than one; Time Reference Unit; CPU; Telemetry (optional); and Differential GPS (DGPS) Receiver (optional). The coupling of GPS receivers with inertial sensors provides an extremely accurate navigation data set in real-time applications even in phases with highly dynamic conditions. The update rate of this navigation information is up to 100 Hz with the same accuracy in 3D-position, velocity, acceleration, attitude and time. SpaceNav is an integrated navigation system, which operates according to the principle of combining the long-term stability and accuracy of GPS and the high dynamic precision of conventional strapdown inertial navigation systems (INS). The system's design allows other aiding sensors, e.g. the GLONASS satellite navigation system, distance measuring equipment (DME), altimeters (radar and/or barometric), flux valves, etc., to be connected in order to increase the redundancy of the system. The advantage of such an upgraded system is the availability of more sensor information than necessary for a navigation solution. The resulting redundancy in range measurement allows real-time detection and identification of sensor signals that are incompatible with the other information. As a result you get Receiver Autonomous Integrity Monitoring (RAIM) as described in 'A Multi-Sensor Approach to Assuring GPS Integrity', presented by Alison Brown in the March/April 1990 issue of 'GPS World'. In this paper the author presents information about the principles of the Satellite Navigation System GPS, and

  17. Deep Learning in Label-free Cell Classification

    National Research Council Canada - National Science Library

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-01-01

    .... Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification...

  18. Ship Classification with High Resolution TerraSAR-X Imagery Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhi Zhao

    2013-01-01

    Full Text Available Ship surveillance using space-borne synthetic aperture radar (SAR), taking advantage of high resolution over wide swaths and all-weather working capability, has attracted worldwide attention. Recent activity in this field has concentrated mainly on the study of ship detection, while classification remains largely open. In this paper, we propose a novel ship classification scheme based on the analytic hierarchy process (AHP) in order to achieve better performance. The main idea is to apply AHP to both feature selection and the classification decision. On one hand, the AHP-based feature selection constructs a selection decision problem from several feature evaluation measures (e.g., discriminability, stability, and information measure) and provides objective criteria to make comprehensive decisions for their combinations quantitatively. On the other hand, we take the selected feature sets as the input of KNN classifiers and fuse the multiple classification results based on AHP, in which the feature sets' confidence is taken into account when the AHP-based classification decision is made. We analyze the proposed classification scheme and demonstrate its results on a ship dataset that comes from TerraSAR-X SAR images.
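
    The core AHP step, deriving criterion weights from a pairwise comparison matrix via its principal eigenvector and checking the consistency ratio, can be sketched as follows; the comparison values are hypothetical and are not taken from the paper.

```python
import numpy as np

# Generic AHP weighting sketch: derive criterion weights from a pairwise
# comparison matrix via its principal eigenvector and check consistency.
# The comparison values (e.g. for discriminability, stability, information
# measure) are hypothetical.

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty random index
cr = ci / ri                                 # consistency ratio (< 0.1 acceptable)

print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```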

  19. Upsampling range camera depth maps using high-resolution vision camera and pixel-level confidence classification

    Science.gov (United States)

    Tian, Chao; Vaishampayan, Vinay; Zhang, Yifu

    2011-03-01

    We consider the problem of upsampling a low-resolution depth map generated by a range camera, by using information from one or more additional high-resolution vision cameras. The goal is to provide an accurate high-resolution depth map from the viewpoint of one of the vision cameras. We propose an algorithm that first converts the low-resolution depth map into a depth/disparity map through coordinate mappings into the coordinate frame of one vision camera, then classifies the pixels into regions according to whether the range camera depth map is trustworthy, and finally refines the depth values for the pixels in the untrustworthy regions. For the last refinement step, both a method based on graph cut optimization and one based on bilateral filtering are examined. Experimental results show that the proposed methods using classification are able to upsample the depth map by a factor of 10 x 10 with much improved depth details, and with significantly better accuracy compared to those without the classification. The improvements are visually perceptible on a 3D auto-stereoscopic display.
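
    The bilateral-filtering variant of the refinement step can be illustrated with a joint bilateral upsampling sketch, in which each high-resolution pixel averages nearby low-resolution depth samples weighted by spatial distance and by similarity in a high-resolution guide image. The kernel widths, window radius and synthetic images below are assumptions, not the paper's settings.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, factor, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Upsample a low-res depth map using a high-res guide image.

    Each high-res pixel averages low-res depth samples, weighted by spatial
    distance and by similarity of the guide image (range kernel). A sketch of
    joint bilateral upsampling, not the paper's exact refinement step.
    """
    H, W = guide_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor          # position on the low-res grid
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(yl)) + dy, int(round(xl)) + dx
                    if 0 <= qy < depth_lo.shape[0] and 0 <= qx < depth_lo.shape[1]:
                        ws = np.exp(-((qy - yl)**2 + (qx - xl)**2) / (2 * sigma_s**2))
                        gdiff = guide_hi[y, x] - guide_hi[min(qy * factor, H - 1), min(qx * factor, W - 1)]
                        wr = np.exp(-gdiff**2 / (2 * sigma_r**2))
                        acc += ws * wr * depth_lo[qy, qx]
                        norm += ws * wr
            out[y, x] = acc / max(norm, 1e-12)
    return out

# Tiny synthetic example: 8x8 depth map upsampled 10x with an 80x80 guide.
rng = np.random.default_rng(1)
depth_lo = np.where(np.arange(8)[None, :] < 4, 1.0, 2.0) + 0.01 * rng.standard_normal((8, 8))
guide_hi = np.where(np.arange(80)[None, :] < 40, 0.2, 0.8) * np.ones((80, 80))
depth_hi = joint_bilateral_upsample(depth_lo, guide_hi, factor=10)
print(depth_hi.shape, depth_hi.min().round(2), depth_hi.max().round(2))
```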

  20. High accuracy of family history of melanoma in Danish melanoma cases

    DEFF Research Database (Denmark)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-01-01

    The incidence of melanoma in Denmark has immensely increased over the last 10 years making Denmark a high risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine if a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...... probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first degree relatives. In 181 probands we validated the negative family history of melanoma in 748 first degree......

  1. High Accuracy mass Measurement of the very Short-Lived Halo Nuclide $^{11}$Li

    CERN Multimedia

    Le scornet, G

    2002-01-01

    The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with $\textit{ab initio}$ calculations require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suiting the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.

  2. Arithmetic Accuracy in Children From High- and Low-Income Schools

    Directory of Open Access Journals (Sweden)

    Elida V. Laski

    2016-04-01

    Full Text Available This study investigated income group differences in kindergartners’ and first graders’ (N = 161) arithmetic by examining the link between accuracy and strategy use on simple and complex addition problems. Low-income children were substantially less accurate than high-income children, in terms of both percentage of correctly solved problems and the magnitude of errors, with low-income first graders being less accurate than high-income kindergartners. Higher-income children were more likely to use sophisticated mental strategies than their lower-income peers, who used predominantly inefficient counting or inappropriate strategies. Importantly, this difference in strategies mediated the relation between income group and addition. Examining underlying strategies has implications for understanding income group differences in arithmetic and potential means of remedying them via instruction.

  3. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    Science.gov (United States)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  4. Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions

    Science.gov (United States)

    Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  5. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    Science.gov (United States)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  6. Initial development of high-accuracy CFRP panel for DATE5 antenna

    Science.gov (United States)

    Qian, Yuan; Lou, Zheng; Hao, Xufeng; Zhu, Jing; Cheng, Jingquan; Wang, Hairen; Zuo, Yingxi; Yang, Ji

    2016-07-01

    The DATE5 antenna, a 5 m telescope for terahertz exploration, will be sited at Dome A, Antarctica. High surface accuracy of the primary reflector panels is necessary so that high observing efficiency can be achieved. In the antenna field, carbon fiber reinforced plastic (CFRP) sandwich panels are widely used because they are light in weight, high in strength, low in thermal expansion, and cheap in mass fabrication. In the DATE5 project, CFRP panels are important panel candidates. In the design study phase, a CFRP prototype panel of 1-meter size was developed for verification purposes. This paper introduces the material arrangement in the sandwich panel, the measured performance of test samples of this sandwich structure, and the panel forming process. For anti-icing in the South Pole region, a special CFRP heating film is embedded in the front skin of the sandwich panel. The properties of several basic building materials are tested. Based on the results, the deformations of prototype panels with different sandwich structures and skin layers are simulated and the best structural concept is selected. The panel mold used is a high accuracy one with a surface rms error of 1.4 μm, and prototype panels are replicated from the mold. Room-temperature-curing resin is used to reduce thermal deformation in the resin transfer process, and vacuum negative pressure is applied during curing to increase the volume content of carbon fiber. After measurement with a three-coordinate measuring machine (CMM), a prototype CFRP panel with a surface error of 5.1 μm rms has been obtained in this initial development.

  7. Accuracy Improvement of Spectral Classification of Crop Using Microwave Backscatter Data

    Institute of Scientific and Technical Information of China (English)

    贾坤; 李强子; 田亦陈; 吴炳方; 张飞飞; 蒙继华

    2011-01-01

    In the present study, the use of VV-polarization microwave backscatter data to improve the accuracy of spectral crop classification is investigated. Classification accuracies obtained with different classifiers on the fused HJ-satellite multispectral and Envisat ASAR VV backscatter data are compared. The results indicate that the fused data take full advantage of the spectral information of the HJ multispectral data and the structural sensitivity of the ASAR VV-polarization data. The fusion enlarges the spectral differences among classes and improves crop classification accuracy: the classification accuracy using the fused data is about 5 percentage points higher than with the HJ data alone. Furthermore, the ASAR VV-polarization data are sensitive to non-cultivated areas within planted fields, so including the VV-polarization data in the classification markedly improves the identification of field borders. Combining VV-polarization data with multispectral data for crop classification broadens the application of satellite data in agriculture and has potential for wider adoption.

  8. Usability and accuracy of high-resolution detectors for daily quality assurance for robotic radiosurgery

    Directory of Open Access Journals (Sweden)

    Loutfi-Krauss Britta

    2017-09-01

    Full Text Available For daily CyberKnife QA a Winston-Lutz test (Automated Quality Assurance, AQA) is used to determine sub-millimeter deviations in beam delivery accuracy. This test is performed using Gafchromic film, a labor-intensive and user-dependent method requiring the use of disposables. We therefore analyzed the usability and accuracy of high-resolution detector arrays. We analyzed a liquid-filled ionization-chamber array (Octavius 1000SRS, PTW, Germany), which has a central resolution of 2.5 mm. To test for sufficient sensitivity, beam profiles with robot shifts of 0.1 mm along the array's axes were measured. The detected deviation between the shifted and central profiles was compared to the real robot position. We then compared the results to the SRS-Profiler (SunNuclear, USA) with 4.0 mm resolution and to the Nonius (QUART, Germany), a single-line diode detector with 2.8 mm resolution. Finally, AQA variance and usability were analyzed by performing a number of AQA tests over time, which required the use of specially designed fixtures for each array, and the results were compared to film. Concerning sensitivity, the 1000SRS detected the beam profile shifts with a maximum difference of 0.11 mm (mean deviation = 0.03 mm) compared to the actual robot shift. The Nonius and SRS-Profiler showed differences of up to 0.15 mm and 0.69 mm with mean deviations of 0.05 mm and 0.18 mm, respectively. Analyzing the variation of AQA results over time, the 1000SRS showed a standard deviation comparable to film (0.26 mm vs. 0.18 mm). The SRS-Profiler and the Nonius showed standard deviations of 0.16 mm and 0.24 mm, respectively. The 1000SRS thus seems to provide accuracy and sensitivity equivalent to the gold-standard film when performing daily AQA tests. Compared to the other detectors in our study, the 1000SRS appears to be superior in sensitivity and accuracy and more user-friendly. Furthermore, no significant modification of the standard AQA procedure is required when introducing 1000SRS for

  9. PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components

    CERN Document Server

    Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia

    2016-01-01

    The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned up to 10 μm w.r.t. a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools addressing several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...

  10. An output amplitude configurable wideband automatic gain control with high gain step accuracy

    Institute of Scientific and Technical Information of China (English)

    何晓丰; 莫太山; 马成炎; 叶甜春

    2012-01-01

    An output amplitude configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, formed by the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. It shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB, provides a 3 dB bandwidth larger than 80 MHz, draws less than 1.8 mA, and occupies a die area of 800 × 300 μm2.

  11. Accuracy of the high-throughput amplicon sequencing to identify species within the genus Aspergillus.

    Science.gov (United States)

    Lee, Seungeun; Yamamoto, Naomichi

    2015-12-01

    This study characterized the accuracy of high-throughput amplicon sequencing for identifying species within the genus Aspergillus. To this end, we sequenced the internal transcribed spacer 1 (ITS1), β-tubulin (BenA), and calmodulin (CaM) gene encoding sequences as DNA markers from eight reference Aspergillus strains with known identities using 300-bp sequencing on the Illumina MiSeq platform, and compared them with the BLASTn outputs. The identifications with sequences longer than 250 bp were accurate at the section rank, with some ambiguities observed at the species rank, mostly due to cross-detection of sibling species. Additionally, an in silico analysis was performed to predict the identification accuracy for all species in the genus Aspergillus, where 107, 210, and 187 species were predicted to be identifiable down to the species rank based on ITS1, BenA, and CaM, respectively. Finally, air filter samples were analysed to quantify the relative abundances of Aspergillus species in outdoor air. The results were reproducible across biological duplicates both at the species and section ranks, but not strongly correlated between ITS1 and BenA, suggesting that Aspergillus detection can be taxonomically biased depending on the selection of the DNA markers and/or primers.

  12. Real-Time and High-Accuracy Arctangent Computation Using CORDIC and Fast Magnitude Estimation

    Directory of Open Access Journals (Sweden)

    Luca Pilato

    2017-03-01

    Full Text Available This paper presents an improved VLSI (Very Large Scale Integration) architecture for real-time and high-accuracy computation of trigonometric functions with fixed-point arithmetic, particularly the arctangent, using CORDIC (Coordinate Rotation Digital Computer) and fast magnitude estimation. The standard CORDIC implementation suffers from a loss of accuracy when the magnitude of the input vector becomes small. Using a fast magnitude estimator before running the standard algorithm, a pre-processing magnification is implemented, shifting the input coordinates by a proper factor. The entire architecture does not use a multiplier, it uses only shift and add primitives as the original CORDIC, and it does not change the data path precision of the CORDIC core. A bit-true case study is presented showing a reduction of the maximum phase error from 414 LSB (angle error of 0.6355 rad) to 4 LSB (angle error of 0.0061 rad), with small overheads of complexity and speed. Implementation of the new architecture in 0.18 µm CMOS technology allows for real-time and low-power processing of CORDIC and arctangent, which are key functions in many embedded DSP systems. The proposed macrocell has been verified by integration in a system-on-chip, called SENSASIP (Sensor Application Specific Instruction-set Processor), for position sensor signal processing in automotive measurement applications.
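
    The idea of pre-magnifying small inputs before running vectoring-mode CORDIC can be sketched in software as follows. This illustrates the principle only, not the paper's VLSI design; the word length, iteration count and the alpha-max-plus-beta-min magnitude estimator are assumed choices.

```python
import math

NBITS = 16                      # fixed-point fractional bits (assumption)
ITER = 16                       # CORDIC iterations (assumption)
ATAN_TABLE = [int(round(math.atan(2.0**-i) * (1 << NBITS))) for i in range(ITER)]

def fast_magnitude(x, y):
    """Alpha-max-plus-beta-min magnitude estimate: |v| ~ max + min/2."""
    ax, ay = abs(x), abs(y)
    return max(ax, ay) + (min(ax, ay) >> 1)

def cordic_atan(x, y, word_bits=24):
    """Vectoring-mode CORDIC arctangent (valid for x > 0) with pre-magnification.

    Small vectors are shifted left until their estimated magnitude occupies the
    upper part of the word, which limits the accuracy loss of the plain algorithm.
    """
    mag = fast_magnitude(x, y)
    shift = 0
    while mag and (mag << 1) < (1 << (word_bits - 2)):   # pre-processing magnification
        mag <<= 1
        shift += 1
    x <<= shift
    y <<= shift

    angle = 0
    for i in range(ITER):                                # shift-and-add rotations only
        if y > 0:
            x, y = x + (y >> i), y - (x >> i)
            angle += ATAN_TABLE[i]
        else:
            x, y = x - (y >> i), y + (x >> i)
            angle -= ATAN_TABLE[i]
    return angle / (1 << NBITS)                          # result in radians

# Compare against math.atan2 for a small-magnitude input vector.
x_in, y_in = 3, 2
print(cordic_atan(x_in, y_in), math.atan2(y_in, x_in))
```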

  13. High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be

    CERN Multimedia

    2002-01-01

    State-of-the-art, three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...

  14. High-accuracy same-beam VLBI observations using Shanghai and Urumqi telescopes

    Institute of Scientific and Technical Information of China (English)

    KIKUCHI Fuyuhiko; KAMATA Shun'ichi; MATSUMOTO Koji; HANADA Hideo

    2009-01-01

    The same-beam VLBI observations of Rstar and Vstar, which were two small satellites of the Japanese lunar mission SELENE, were successfully performed by using the Shanghai and Urumqi 25-m telescopes. When the separation angle between Rstar and Vstar was less than 0.1 deg, the differential phase delay of the X-band signals between Rstar and Vstar on the Shanghai-Urumqi baseline was obtained with a very small error of 0.15 mm rms, which was reduced by 1-2 orders of magnitude compared with the former VLBI results. When the separation angle was less than 0.56 deg, the differential phase delay of the S-band signals was also obtained with a very small error of several mm rms. The orbit determination for Rstar and Vstar was performed, and the accuracy was improved to a level of several meters by using VLBI and Doppler data. The high-accuracy same-beam differential VLBI technique is very useful in orbit determination for a spacecraft, and will be used in orbit determination for the Mars missions China's Yinghuo-1 and Russia's Phobos-Grunt.

  15. High-accuracy same-beam VLBI observations using Shanghai and Urumqi telescopes

    Institute of Scientific and Technical Information of China (English)

    LIU QingHui; PING JingSong; FAN QingYuan; XIA Bo; AN Tao; QIAN ZhiHan; YANG WenJun; ZHANG Hua; WANG Zhen; WANG Na; SHI Xian; KIKUCHI Fuyuhiko; HUANG Qian; KAMATA Shun'ichi; MATSUMOTO Koji; HANADA Hideo; HONG XiaoYu; YU AiLi

    2009-01-01

    The same-beam VLBI observations of Rstar and Vstar, which were two small satellites of the Japanese lunar mission, SELENE, were successfully performed by using Shanghai and Urumqi 25-m telescopes. When the separation angle between Rstar and Vstar was less than 0.1 deg, the differential phase delay of the X-band signals between Rstar and Vstar on the Shanghai-Urumqi baseline was obtained with a very small error of 0.15 mm rms, which was reduced by 1-2 orders of magnitude compared with the former VLBI results. When the separation angle was less than 0.56 deg, the differential phase delay of the S-band signals was also obtained with a very small error of several mm rms. The orbit determination for Rstar and Vstar was performed, and the accuracy was improved to a level of several meters by using VLBI and Doppler data. The high-accuracy same-beam differential VLBI technique is very useful in orbit determination for a spacecraft, and will be used in orbit determination for the Mars missions China's Yinghuo-1 and Russia's Phobos-Grunt.

  16. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    Science.gov (United States)

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  17. Swing arm profilometer: high accuracy testing for large reaction-bonded silicon carbide optics with a capacitive probe

    Science.gov (United States)

    Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun

    2017-08-01

    A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of the SAP by using a capacitive probe and implementing calibrations. The experimental result, compared with an interferometer test, shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input for high-precision grinding.

  18. [Diagnostic accuracy of the immersion high-frequency B-scan ultrasonography in chemical injured eyes].

    Science.gov (United States)

    Yang, Qinghua; Chen, Bing; Wang, Liqiang; Li, Zhaohui; Huang, Yifei

    2014-08-01

    To investigate the diagnostic accuracy of immersion high-frequency B-scan ultrasonography, a noninvasive preoperative diagnostic method, in observing the anterior segment of chemically injured eyes. This was a retrospective study. Sixty-three patients with ocular chemical injury (63 eyes), who underwent keratoplasty or artificial cornea transplantation in the PLA General Hospital from May 2011 to May 2013, were included. All the injured eyes were examined by both ultrasound biomicroscopy (UBM) and immersion high-frequency B-scan ultrasonography. The images were analyzed and the results were compared with the intraoperative findings; observation of the lens was the main parameter. All 63 patients were examined with UBM and immersion high-frequency B-scan ultrasonography before surgery. The findings for the cornea, anterior chamber angle and iris from UBM were consistent with those from immersion high-frequency B-scan ultrasonography. As for the lens, of the 32 eyes in which the lens was not detected by UBM, immersion high-frequency B-scan ultrasonography failed to detect the lens in only 16 eyes, while it showed a normal lens in 3 eyes and lens opacification in 13 eyes (1 eye with a pyknotic lens). Of the 17 eyes in which the lens was found normal by UBM, only 14 eyes had a normal lens, while the lenses of the remaining 3 eyes were found intumescent by immersion high-frequency B-scan ultrasonography. In the 6 eyes in which the lens findings were equivocal on UBM, 2 lenses were pyknotic and 4 were intumescent or clouded on immersion high-frequency B-scan ultrasonography. The findings of immersion high-frequency B-scan ultrasonography were highly consistent with the intraoperative findings. The lens can be observed accurately by immersion high-frequency B-scan ultrasonography in chemically injured eyes.

  19. High-accuracy infra-red thermography method using reflective marker arrays

    Science.gov (United States)

    Kirollos, Benjamin; Povey, Thomas

    2017-09-01

    In this paper, we describe a new method for high-accuracy infra-red (IR) thermography measurements in situations with significant spatial variation in reflected radiation from the surroundings, or significant spatial variation in surface emissivity due to viewing angle non-uniformity across the field of view. The method employs a reflective marker array (RMA) on the target surface—typically, high emissivity circular dots—and an integrated image analysis algorithm designed to require minimal human input. The new technique has two particular advantages which make it suited to high-accuracy measurements in demanding environments: (i) it allows the reflected radiation component to be calculated directly, in situ, and as a function of position, overcoming a key problem in measurement environments with non-uniform and unsteady stray radiation from the surroundings; (ii) using image analysis of the marker array (via apparent aspect ratio of the circular reflective markers), the local viewing angle of the target surface can be estimated, allowing corrections for angular variation of local emissivity to be performed without prior knowledge of the geometry. A third advantage of the technique is that it allows for simple focus-stacking algorithms due to increased image entropy. The reflective marker array method is demonstrated for an isothermal, hemispherical object exposed to an external IR source arranged to give a significant non-uniform reflected radiation term. This is an example of a challenging environment, both because of the significant non-uniform reflected radiation term, and also the significant variation in target emissivity due to surface angle variation. We demonstrate that the new RMA IR technique leads to significantly lower error in evaluated surface temperature than conventional IR techniques. The method is applicable to any complex radiative environment.
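
    The viewing-angle estimate in advantage (ii) relies on the fact that a circular marker seen off-normal appears, to a weak-perspective approximation, as an ellipse whose minor-to-major axis ratio equals the cosine of the viewing angle. A minimal sketch under that assumption, with a synthetic marker rather than real image data:

```python
import numpy as np

def viewing_angle_from_marker(points):
    """Estimate the surface viewing angle from a circular marker's image.

    `points` are (x, y) pixel coordinates on the imaged marker boundary.
    Under weak perspective, a circle viewed off-normal projects to an ellipse
    whose minor/major axis ratio equals cos(theta), where theta is the angle
    between the viewing direction and the surface normal.
    """
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)
    # Principal axes of the boundary points give the ellipse axis lengths.
    cov = pts.T @ pts / len(pts)
    evals = np.sort(np.linalg.eigvalsh(cov))
    aspect = np.sqrt(evals[0] / evals[1])       # minor/major axis ratio
    return np.degrees(np.arccos(np.clip(aspect, 0.0, 1.0)))

# Synthetic check: a unit circle tilted by 40 degrees about the y-axis.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[np.cos(t) * np.cos(np.radians(40)), np.sin(t)]
print(round(viewing_angle_from_marker(circle), 1))      # ~40.0
```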

  20. Design and calibration of a high-sensitivity and high-accuracy polarimeter based on liquid crystal variable retarders

    Science.gov (United States)

    Guo, Jing; Ren, De-Qing; Liu, Cheng-Chao; Zhu, Yong-Tian; Dou, Jiang-Pei; Zhang, Xi; Beck, Christian

    2017-01-01

    Polarimetry plays an important role in the measurement of solar magnetic fields. We developed a high-sensitivity and high-accuracy polarimeter (HHP) based on nematic liquid crystal variable retarders (LCVRs), which has a compact setup and no mechanical moving parts. The system design and calibration methods are discussed in detail. The azimuth error of the transmission axis of the polarizer as well as the fast axes of the two LCVRs and the quarter-wave plate were determined using dedicated procedures. Linearly and circularly polarized light were employed to evaluate the performance of the HHP. The experimental results indicate that a polarimetric sensitivity of better than 5.7 × 10⁻³ can be achieved by using a single short-exposure image, while an accuracy on the order of 10⁻⁵ can be reached by using a large number of short-exposure images. This makes the HHP a high-performance system that can be used with a ground-based solar telescope for high-precision solar magnetic field investigations.

  1. Vehicle Detection and Classification from High Resolution Satellite Images

    Science.gov (United States)

    Abraham, L.; Sasikumar, M.

    2014-11-01

    In the past decades satellite imagery has been used successfully for weather forecasting and for geographical and geological applications. Low-resolution satellite images are sufficient for these sorts of applications. But the technological developments in the field of satellite imaging provide high-resolution sensors which expand its field of application. Thus High Resolution Satellite Imagery (HRSI) proved to be a suitable alternative to aerial photogrammetric data, providing a new data source for object detection. Since traffic rates in developing countries are increasing enormously, vehicle detection from satellite data will be a better choice for automating such systems. In this work, a novel technique for vehicle detection from images obtained from high-resolution sensors is proposed. Though we are using high-resolution images, vehicles are seen only as tiny spots, difficult to distinguish from the background. Nevertheless, we are able to obtain a detection rate of not less than 0.9. Thereafter we classify the detected vehicles into cars and trucks and count them.

  2. High accuracy Primary Reference gas Mixtures for high-impact greenhouse gases

    Science.gov (United States)

    Nieuwenkamp, Gerard; Zalewska, Ewelina; Pearce-Hill, Ruth; Brewer, Paul; Resner, Kate; Mace, Tatiana; Tarhan, Tanil; Zellweger, Christophe; Mohn, Joachim

    2017-04-01

    Climate change, due to increased man-made emissions of greenhouse gases, poses one of the greatest risks to society worldwide. High-impact greenhouse gases (CO2, CH4 and N2O) and indirect drivers of global warming (e.g. CO) are measured by the global monitoring stations for greenhouse gases, operated and organized by the World Meteorological Organization (WMO). Reference gases for the calibration of analyzers have to meet a very challenging low level of measurement uncertainty to comply with the Data Quality Objectives (DQOs) set by the WMO. Within the framework of the European Metrology Research Programme (EMRP), a project to improve the metrology for high-impact greenhouse gases was granted (HIGHGAS, June 2014-May 2017). As a result of the HIGHGAS project, primary reference gas mixtures in cylinders for ambient levels of CO2, CH4, N2O and CO in air have been prepared with unprecedentedly low uncertainties, typically 3-10 times lower than previously achieved by the NMIs. To accomplish these low uncertainties in the reference standards, a number of preparation and analysis steps have been studied and improved. The purity analysis of the parent gases had to be performed with lower detection limits than previously achievable. For example, to achieve an uncertainty of 2·10⁻⁹ mol/mol (absolute) on the amount fraction for N2O, the detection limit for the N2O analysis in the parent gases has to be in the sub-nmol/mol domain. Results of an OPO-CRDS analyzer set-up in the 5 µm wavelength domain, with a 200·10⁻¹² mol/mol detection limit for N2O, will be presented. The adsorption effects of greenhouse gas components at cylinder surfaces are critical, and have been studied for different cylinder passivation techniques. Results of a two-year stability study will be presented. The fitness for purpose of the reference materials was studied with respect to possible variation in isotopic composition between the reference material and the sample. Measurement results for a suite of CO2 in air

  3. Very Low Power, Low Voltage, High Accuracy, and High Performance Current Mirror

    Institute of Scientific and Technical Information of China (English)

    Hassan Faraji Baghtash; Khalil Monfaredi; Ahmad Ayatollahi

    2011-01-01

    A novel low-power and low-voltage current mirror with a very low current copy error is presented and the principle of its operation is discussed. In this circuit, a gain-boosting regulated cascode scheme is used to improve the output resistance, while an inverter is used as the amplifier. Simulation results with HSPICE in TSMC 0.18 μm CMOS technology are given, which verify the high performance of the proposed structure. Simulations show an input resistance of 0.014 Ω and an output resistance of 3 GΩ. The current copy error is as low as 0.002%, together with input (minimum input voltage vin,min ~ 0.24 V) and output (minimum output voltage vout,min ~ 0.16 V) compliances, while working with a 1 V power supply and a 50 μA input current. The current copy error is near zero at an input current of 27 μA. The circuit consumes only 76 μW and introduces a very low output offset current of 50 pA.

  4. ADFE METHOD WITH HIGH ACCURACY FOR NONLINEAR PARABOLIC INTEGRO-DIFFERENTIAL SYSTEM WITH NONLINEAR BOUNDARY CONDITIONS

    Institute of Scientific and Technical Information of China (English)

    崔霞

    2002-01-01

    An alternating direction finite element (ADFE) scheme for a d-dimensional nonlinear system of parabolic integro-differential equations is studied. By using a local approximation based on patches of finite elements to treat the capacity term q_i(u), decomposition of the coefficient matrix is realized; by using alternating directions, the multi-dimensional problem is reduced to a family of single-space-variable problems and the computational work is simplified; by using the finite element method, high accuracy in the space variables is kept; by using inductive hypothesis reasoning, the difficulty coming from the nonlinearity of the coefficients and boundary conditions is treated; by introducing the Ritz-Volterra projection, the difficulty coming from the memory term is solved. Finally, by using various techniques for a priori estimates for differential equations, the unique resolvability and convergence properties of both the FE and ADFE schemes are rigorously demonstrated, and optimal H¹- and L²-norm space estimates and an O((Δt)²) estimate for the time variable are obtained.

  5. SLSTR: a high accuracy dual scan temperature radiometer for sea and land surface monitoring from space

    Science.gov (United States)

    Coppo, P.; Ricciarelli, B.; Brandani, F.; Delderfield, J.; Ferlet, M.; Mutlow, C.; Munro, G.; Nightingale, T.; Smith, D.; Bianchi, S.; Nicol, P.; Kirschstein, S.; Hennig, T.; Engel, W.; Frerick, J.; Nieke, J.

    2010-10-01

    SLSTR is a high accuracy infrared radiometer which will be embarked on the low-Earth-orbit Sentinel-3 operational GMES mission. SLSTR is an improved version of the previous AATSR and ATSR-1/2 instruments, which flew on the Envisat and ERS-1/2 ESA missions respectively. SLSTR will provide data continuity with respect to these previous missions, but with a substantial improvement due to its wider swaths (750 km in dual view and 1400 km in single view), which should permit global coverage of SST and LST measurements (at 1 km spatial resolution in the IR channels) with daily revisit time, useful for climatological and meteorological applications. Two more SWIR channels and a higher spatial resolution in the VIS/SWIR channels (0.5 km) are also implemented for better cloud/aerosol screening. Two further channels for global-scale fire monitoring are acquired at the same time as the other nominal channels.

  6. Study on Calibration System for Electronic Transformers Based on High-Accuracy PCI Card

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhang

    2013-03-01

    Full Text Available With the preliminary application of Electronic Transformers (ET) based on IEC 61850 standards in the power grid, the calibration of tested transformers has attracted extensive research attention. This study proposes a novel Calibration System of ET (CSET) based on a high-accuracy PCI card. Data acquisition from the ET and the Standard Transformer (ST) is performed via optic Ethernet and a PCI-4462 data acquisition card, respectively. Meanwhile, synchronized sampling between the ET and the ST is achieved using the optic/electronic pulse signal of a PCI synchronization card. The signal processing and human interface are realized in LabVIEW software. The system proposed in this study is feasible for calibrating Electronic Voltage/Current Transformers (EVT/ECT) of different voltage classes. System tests show that the precision of the system can reach 0.2°.

  7. Study on Calibration System for Electronic Transformers Based on High-Accuracy PCI Card

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhang

    2013-05-01

    Full Text Available With the preliminary application of Electronic Transformers (ET) based on IEC 61850 standards in the power grid, the calibration of tested transformers has attracted extensive research attention. This study proposes a novel Calibration System of ET (CSET) based on a high-accuracy PCI card. Data acquisition from the ET and the Standard Transformer (ST) is performed via optic Ethernet and a PCI-4462 data acquisition card, respectively. Meanwhile, synchronized sampling between the ET and the ST is achieved using the optic/electronic pulse signal of a PCI synchronization card. The signal processing and human interface are realized in LabVIEW software. The system proposed in this study is feasible for calibrating Electronic Voltage/Current Transformers (EVT/ECT) of different voltage classes. System tests show that the precision of the system can reach 0.2°.

  8. High-Accuracy Programmable Timing Generator with Wide-Range Tuning Capability

    Directory of Open Access Journals (Sweden)

    Ting-Li Chu

    2013-01-01

    Full Text Available In this paper, a high-accuracy programmable timing generator with wide-range tuning capability is proposed. With the aid of a dual delay-locked loop (DLL), both the coarse- and fine-tuning mechanisms operate in a precise closed-loop scheme to lessen the effects of ambient variations. The timing generator can provide sub-gate resolution and instantaneous switching capability. The circuit is implemented and simulated in TSMC 0.18 μm 1P6M technology. The test chip occupies an area of 1.9 mm2. The reference clock cycle can be divided into 128 bins by interpolation to obtain 14 ps resolution at a clock rate of 550 MHz. The INL and DNL are within −0.21~+0.78 and −0.27~+0.43 LSB, respectively.

  9. Well-posedness of the difference schemes of the high order of accuracy for elliptic equations

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available It is well known that the differential equation $-u''(t) + Au(t) = f(t)$ $(-\infty < t < \infty)$ … We consider the high order of accuracy two-step difference schemes generated by an exact difference scheme or by Taylor's decomposition on three points for the approximate solutions of this differential equation. The well-posedness of these difference schemes in the difference analogue of the smooth functions is obtained. The exact almost coercive inequality for solutions in $C(\tau, E)$ of these difference schemes is established.
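
    For orientation, the simplest member of this family (a standard textbook two-step scheme on the uniform grid $t_k = k\tau$, not one of the high order schemes constructed in the paper) reads

    $$-\frac{u_{k+1} - 2u_k + u_{k-1}}{\tau^{2}} + A u_k = f(t_k), \qquad k \in \mathbb{Z},$$

    and is second-order accurate for smooth solutions; the higher-order schemes of the paper are obtained from an exact difference scheme or from Taylor's decomposition on three points.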

  10. High-accuracy measurement of the magnetic moment anomaly of the electron bound in hydrogenlike carbon.

    Science.gov (United States)

    Häffner, H; Beier, T; Hermanspahn, N; Kluge, H J; Quint, W; Stahl, S; Verdú, J; Werth, G

    2000-12-18

    We present a new experimental value for the magnetic moment of the electron bound in hydrogenlike carbon (12C5+): g(exp) = 2.001 041 596 (5). This is the most precise determination of an atomic g(J) factor so far. The experiment was carried out on a single 12C5+ ion stored in a Penning trap. The high accuracy was made possible by spatially separating the induction of spin flips and the analysis of the spin direction. The current theoretical value amounts to g(th) = 2.001 041 591 (7). Together experiment and theory test the bound-state QED contributions to the g(J) factor of a bound electron to a precision of 1%.

  11. High Accuracy Speed-fed Grating Angular Acceleration Measurement System Based on FPGA

    Directory of Open Access Journals (Sweden)

    Hao Zhao

    2012-09-01

    Full Text Available Shaft angular acceleration is one of the most important parameters of rotary machines, and the error in the measured angular acceleration increases as the shaft speeds up. To address this problem, a new high-accuracy angular acceleration measurement system is presented. The measurement principle is to self-adjust the period of the speed-sampling signal in proportion to the shaft speed-up. The measurement system combines an FPGA and an SCM: the shaft speed is obtained by the SCM timer responding to interrupts from the FPGA and is then used to set the frequency-divider parameter in the FPGA, so that the speed-sampling period tracks the speed-up proportionally. Experiments show that this measurement system overcomes the error that otherwise appears when the system speeds up.

  12. High Accuracy Reference Network (HARN), Published in 2000, 1:600 (1in=50ft) scale, Brown County, WI.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This High Accuracy Reference Network (HARN) dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Field Survey/GPS information as of 2000....

  13. High-accuracy approximation of high-rank derivatives: isotropic finite differences based on lattice-Boltzmann stencils.

    Science.gov (United States)

    Mattila, Keijo Kalervo; Hegele Júnior, Luiz Adolfo; Philippi, Paulo Cesar

    2014-01-01

    We propose isotropic finite differences for high-accuracy approximation of high-rank derivatives. These finite differences are based on direct application of lattice-Boltzmann stencils. The presented finite-difference expressions are valid in any dimension, particularly in two and three dimensions, and any lattice-Boltzmann stencil isotropic enough can be utilized. A theoretical basis for the proposed utilization of lattice-Boltzmann stencils in the approximation of high-rank derivatives is established. In particular, the isotropy and accuracy properties of the proposed approximations are derived directly from this basis. Furthermore, in this formal development, we extend the theory of Hermite polynomial tensors in the case of discrete spaces and present expressions for the discrete inner products between monomials and Hermite polynomial tensors. In addition, we prove an equivalency between two approaches for constructing lattice-Boltzmann stencils. For the numerical verification of the presented finite differences, we introduce 5th-, 6th-, and 8th-order two-dimensional lattice-Boltzmann stencils.
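
    A concrete instance of such a stencil-based finite difference is the familiar isotropic 9-point Laplacian that follows from the D2Q9 weights; the sketch below applies it to a periodic test function with a known Laplacian. The test setup is my own, not the paper's.

```python
import numpy as np

def laplacian_d2q9(f, h=1.0):
    """Isotropic 9-point Laplacian built from the D2Q9 lattice-Boltzmann
    stencil: (1/(6 h^2)) * [4*(axis neighbours) + (diagonal neighbours) - 20*f].
    Equivalent to (2/(cs^2 h^2)) * sum_i w_i (f(x + c_i h) - f(x)) with the
    D2Q9 weights w = (4/9, 1/9, 1/36) and cs^2 = 1/3; periodic boundaries here.
    """
    axis = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1))
    diag = (np.roll(np.roll(f, 1, 0), 1, 1) + np.roll(np.roll(f, 1, 0), -1, 1) +
            np.roll(np.roll(f, -1, 0), 1, 1) + np.roll(np.roll(f, -1, 0), -1, 1))
    return (4.0 * axis + diag - 20.0 * f) / (6.0 * h * h)

# Accuracy check on a periodic test function with a known Laplacian.
n, L = 128, 2 * np.pi
h = L / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(X) * np.cos(2 * Y)
exact = -5.0 * f                         # Laplacian of sin(x)cos(2y)
err = np.max(np.abs(laplacian_d2q9(f, h) - exact))
print(f"max error: {err:.2e}")           # second-order accurate, isotropic stencil
```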

  14. Study of high-altitude radar altimeter model accuracy and SITAN performance using HAAFT data

    Energy Technology Data Exchange (ETDEWEB)

    Shieves, T.C.; Callahan, M.W.

    1979-07-01

    Radar altimetry data, inertial navigation data, and scoring data were collected under the HAAFT program by Martin Marietta Corporation for the United States Air Force over several areas in the western United States at altitudes ranging from 3 to 20 km. The study reported here uses the HAAFT data in conjunction with Defense Mapping Agency (DMA) topographic data to evaluate the accuracy of a high-altitude pulsed-radar altimeter model and the resulting performance of the terrain-aided guidance concept SITAN. Previous SITAN flight tests at low altitudes (less than 1500 m AGL) have demonstrated 6-20 m CEP. The high-altitude flight test data analyzed herein show a SITAN CEP of 120 m. The radar altimeter model required to achieve this performance includes the effects of the internal track loop, AGC loop, antenna beamwidth, and the terrain radar cross section, and provided a factor of 6 improvement over simple nadir ground clearance for rough terrain. It is postulated that high-altitude CEP could be reduced to 50 m or less if an altimeter were designed specifically for high-altitude terrain sensing.

  15. Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA

    Directory of Open Access Journals (Sweden)

    Ming-gang Du

    2009-01-01

    Full Text Available Motivation. Independent Component Analysis (ICA) maximizes the statistical independence of the representational components of a training gene expression profile (GEP) ensemble, but it cannot distinguish relations between different factors, or different modes, and it is not directly applicable to high-order GEP data mining. In order to generalize ICA, we introduce Multilinear-ICA and apply it to tumor classification using high-order GEP. Firstly, we introduce the basic concepts and operations of tensors and describe the Support Vector Machine (SVM) classifier and Multilinear-ICA. Secondly, the highest-scoring genes of the original high-order GEP are selected using t-statistics and tabulated as tensors. Thirdly, the tensors are processed by Multilinear-ICA. Finally, the SVM is used to classify the tumor subtypes. Results. To show the validity of the proposed method, we apply it to tumor classification using high-order GEP. Though we only use three datasets, the experimental results show that the method is effective and feasible. Through this survey, we hope to gain some insight into the problem of high-order GEP tumor classification, in aid of further developing more effective tumor classification algorithms.
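
    The gene-ranking and classification steps can be sketched in isolation as follows; scikit-learn and synthetic two-class expression data are assumed, and the multilinear-ICA decomposition is omitted, so this illustrates only the t-statistic selection and SVM steps rather than the full pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Sketch of two steps of the pipeline: t-statistic gene ranking followed by an
# SVM classifier, on synthetic two-class expression data.
rng = np.random.default_rng(0)
n_samples, n_genes, n_informative = 60, 2000, 30
X = rng.standard_normal((n_samples, n_genes))
y = np.repeat([0, 1], n_samples // 2)
X[y == 1, :n_informative] += 1.5              # a few differentially expressed genes

def t_scores(X, y):
    """Absolute two-sample t-statistic per gene (unequal-variance form)."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(0) - b.mean(0)
    den = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs(num / den)

top = np.argsort(t_scores(X, y))[::-1][:50]   # keep the 50 highest-scoring genes
# (for an unbiased estimate the selection should be nested inside the CV folds)
acc = cross_val_score(SVC(kernel="linear", C=1.0), X[:, top], y, cv=5)
print("CV accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))
```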

  16. TECHNOLOGICAL PROVISION OF ACCURACY AND QUALITY PARAMETERS OF INTRICATE PROFILE PARTS AT HIGH-SPEED MULTI-COORDINATE MACHINING

    Directory of Open Access Journals (Sweden)

    V. K. Sheleg

    2009-01-01

    Full Text Available The paper considers requirements to CAM-systems for provision of high-speed multi-coordinate milling, principles of generation and recommendations on trajectory programming for high-speed machining, influence of vibration and balancing of the technological system on parameters of  the machining accuracy, characteristics of a cutting tool, types of tool coatings that is rather actual for improvement of accuracy and quality of intricate profile parts.

  17. Change Detection Accuracy and Image Properties: A Study Using Simulated Data

    Directory of Open Access Journals (Sweden)

    Abdullah Almutairi

    2010-06-01

    Full Text Available Simulated data were used to investigate the relationships between image properties and change detection accuracy in a systematic manner. The image properties examined were class separability, radiometric normalization and image spectral band-to-band correlation. The change detection methods evaluated were post-classification comparison, direct classification of multidate imagery, image differencing, principal component analysis, and change vector analysis. The simulated data experiments showed that the relative accuracy of the change detection methods varied with changes in image properties, thus confirming the hypothesis that caution should be used in generalizing from studies that use only a single image pair. In most cases, direct classification and post-classification comparison were the least sensitive to changes in the image properties of class separability, radiometric normalization error and band correlation. Furthermore, these methods generally produced the highest accuracy, or were amongst those with a high accuracy. PCA accuracy was highly variable; the use of four principal components consistently resulted in substantially decreased classification accuracy relative to using six components, or classification using the original six bands. The accuracy of image differencing also varied greatly in the experiments. Of the three methods that require radiometric normalization, image differencing was the method most affected by radiometric error, relative to change vector and classification methods, for classes that have moderate and low separability. For classes that are highly separable, image differencing was relatively unaffected by radiometric normalization error. CVA was found to be the most accurate method for classes with low separability and all but the largest radiometric errors. CVA accuracy tended to be the least affected by changes in the degree of band correlation in situations where the class means were moderately dispersed, or
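
    Of the compared methods, change vector analysis has the simplest core: threshold the per-pixel magnitude of the multitemporal spectral difference vector. A minimal sketch on synthetic two-band imagery follows; the band count, threshold and data are assumptions, not the study's settings.

```python
import numpy as np

def change_vector_analysis(img_t1, img_t2, threshold):
    """Flag changed pixels where the magnitude of the spectral change vector
    (Euclidean distance across bands between the two dates) exceeds a threshold.
    Inputs are (rows, cols, bands) arrays, radiometrically normalized to each other.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=-1))
    return magnitude > threshold, magnitude

# Synthetic example: 2-band images where a square patch changes between dates.
rng = np.random.default_rng(0)
t1 = rng.normal(100, 5, size=(64, 64, 2))
t2 = t1 + rng.normal(0, 5, size=(64, 64, 2))      # no-change noise
t2[20:40, 20:40, :] += 40                         # changed area
mask, mag = change_vector_analysis(t1, t2, threshold=25.0)
print("changed fraction:", round(mask.mean(), 3)) # roughly the 20x20 patch (~0.10)
```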

  18. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, to reach the same accuracy, supervised classification methods such as Support Vector Machines, Object Oriented Classification, etc. must be provided with much more labeled data.
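
    A minimal version of the idea, one Gaussian mixture per class fitted on the few labeled samples and then refined with unlabeled data through a self-training step, might look as follows; scikit-learn, the synthetic data and the single refit are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of GMM-based semi-supervised classification: one mixture per class is
# fitted on a few labeled samples, unlabeled samples are assigned to the class
# with the highest mixture likelihood, and the mixtures are refitted on the
# enlarged (pseudo-labeled) sets. Data and hyper-parameters are synthetic.
rng = np.random.default_rng(0)
n_per_class = 500
X = np.vstack([rng.normal([0, 0], 1.0, size=(n_per_class, 2)),
               rng.normal([3, 3], 1.0, size=(n_per_class, 2))])
y = np.repeat([0, 1], n_per_class)

# Only 10 labeled samples per class; the rest are treated as unlabeled.
labeled = np.concatenate([rng.choice(np.where(y == c)[0], size=10, replace=False)
                          for c in (0, 1)])
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

def fit_class_gmms(X, y, n_components=2):
    return {c: GaussianMixture(n_components, covariance_type="full",
                               random_state=0).fit(X[y == c])
            for c in np.unique(y)}

def predict(gmms, X):
    scores = np.column_stack([gmms[c].score_samples(X) for c in sorted(gmms)])
    return np.array(sorted(gmms))[np.argmax(scores, axis=1)]

gmms = fit_class_gmms(X[labeled], y[labeled])
pseudo = predict(gmms, X[unlabeled])                      # self-training step
gmms = fit_class_gmms(np.vstack([X[labeled], X[unlabeled]]),
                      np.concatenate([y[labeled], pseudo]))
print("accuracy:", round((predict(gmms, X) == y).mean(), 3))
```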

  19. High-Frequency Acoustic Sediment Classification in Shallow Water

    CERN Document Server

    Bentrem, F W; Kalcic, M T; Duncan, M E; Bentrem, Frank W.; Sample, John; Kalcic, Maria T.; Duncan, Michael E.

    2002-01-01

    A geoacoustic inversion technique for high-frequency (12 kHz) multibeam sonar data is presented as a means to classify the seafloor sediment in shallow water (40-300 m). The inversion makes use of backscattered data at a variety of grazing angles to estimate mean grain size. The need for sediment type information and the large amounts of multibeam data being collected with the Naval Oceanographic Office's Simrad EM 121A systems have fostered the development of algorithms to process the EM 121A acoustic backscatter into maps of sediment type. The APL-UW (Applied Physics Laboratory at the University of Washington) backscattering model is used with simulated annealing to invert for six geoacoustic parameters. For the inversion, three of the parameters are constrained according to empirical correlations with mean grain size, which is introduced as an unconstrained parameter. The four unconstrained (free) parameters are mean grain size, sediment volume interaction, and two seafloor roughness parameters. Acoustic sediment cla...

  20. Integrative fitting of absorption line profiles with high accuracy, robustness, and speed

    Science.gov (United States)

    Skrotzki, Julian; Habig, Jan Christoph; Ebert, Volker

    2014-08-01

    The principle of the integrative evaluation of absorption line profiles relies on the numeric integration of absorption line signals to retrieve absorber concentrations, e.g., of trace gases. Thus, it is a fast and robust technique. However, previous implementations of the integrative evaluation principle showed shortcomings in terms of accuracy and the lack of a fit quality indicator. This has motivated the development of an advanced integrative (AI) fitting algorithm. The AI fitting algorithm retains the advantages of previous integrative implementations—robustness and speed—and is able to achieve high accuracy by introduction of a novel iterative fitting process. A comparison of the AI fitting algorithm with the widely used Levenberg-Marquardt (LM) fitting algorithm indicates that the AI algorithm has advantages in terms of robustness due to its independence from appropriately chosen start values for the initialization of the fitting process. In addition, the AI fitting algorithm shows speed advantages typically resulting in a factor of three to four shorter computational times on a standard personal computer. The LM algorithm on the other hand retains advantages in terms of a much higher flexibility, as the AI fitting algorithm is restricted to the evaluation of single absorption lines with precomputed line width. Comparing both fitting algorithms for the specific application of in situ laser hygrometry at 1,370 nm using direct tunable diode laser absorption spectroscopy (TDLAS) suggests that the accuracy of the AI algorithm is equivalent to that of the LM algorithm. For example, a signal-to-noise ratio of 80 and better typically yields a deviation of TDLAS hygrometry at the aerosol and cloud chamber aerosol interactions and dynamics in the atmosphere (AIDA)—a unique large-scale facility to study atmospheric processes. The robustness of the AI fitting algorithm has been validated for typical AIDA conditions encompassing strong transmission fluctuations
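
    The integrative principle can be contrasted with a Levenberg-Marquardt fit in a few lines: the line area, proportional to the absorber concentration, is obtained by numerically integrating the baseline-corrected absorbance, with no start values required. The sketch below uses a synthetic Gaussian line and scipy's curve_fit for the LM comparison; the line shape, noise level and baseline treatment are assumptions, not those of the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

# Synthetic absorption line: Gaussian profile on a weak linear baseline.
rng = np.random.default_rng(0)
nu = np.linspace(-2.0, 2.0, 400)                        # relative wavenumber axis
area_true, width = 0.30, 0.25
line = area_true / (width * np.sqrt(2 * np.pi)) * np.exp(-nu**2 / (2 * width**2))
absorbance = line + 0.02 * nu + 0.01                     # add the baseline
absorbance += 0.004 * rng.standard_normal(nu.size)       # noise

# Integrative evaluation: estimate the baseline from the line wings,
# subtract it, then integrate numerically. Fast, robust, start-value free.
wings = np.abs(nu) > 1.2
baseline = np.polyval(np.polyfit(nu[wings], absorbance[wings], 1), nu)
area_integrative = trapezoid(absorbance - baseline, nu)

# Levenberg-Marquardt fit of the full model for comparison.
def model(nu, area, w, b0, b1):
    return area / (w * np.sqrt(2 * np.pi)) * np.exp(-nu**2 / (2 * w**2)) + b0 + b1 * nu

popt, _ = curve_fit(model, nu, absorbance, p0=[0.1, 0.2, 0.0, 0.0])
print(f"true {area_true:.3f}  integrative {area_integrative:.3f}  LM {popt[0]:.3f}")
```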

  1. Improved photomask accuracy with a high-productivity DUV laser pattern generator

    Science.gov (United States)

    Öström, Thomas; Måhlén, Jonas; Karawajczyk, Andrzej; Rosling, Mats; Carlqvist, Per; Askebjer, Per; Karlin, Tord; Sallander, Jesper; Österberg, Anders

    2006-10-01

    A strategy for sub-100 nm technology nodes is to maximize the use of high-speed deep-UV laser pattern generators, reserving e-beam tools for the most critical photomask layers. With a 248 nm excimer laser and 0.82 NA projection optics, the Sigma7500 increases the application space of laser pattern generators. A programmable spatial light modulator (SLM) is imaged with partially coherent optics to compose the photomask pattern. Image profiles are enhanced with phase shifting in the pattern generator, and features below 200 nm are reliably printed. The Sigma7500 extends the SLM-based architecture with improvements to CD uniformity and placement accuracy, resulting from an error budget-based methodology. Among these improvements is a stiffer focus stage design with digital servos, resulting in improved focus stability. Tighter climate controls and improved dose control reduce drift during mask patterning. As a result, global composite CD uniformity below 5 nm (3σ) has been demonstrated, with placement accuracy below 10 nm (3σ) across the mask. Self-calibration methods are used to optimize and monitor system performance, reducing the need to print test plates. The SLM calibration camera views programmed test patterns, making it possible to evaluate image metrics such as CD uniformity and line edge roughness. The camera is also used to characterize image placement over the optical field. A feature called ProcessEqualizer TM has been developed to correct long-range CD errors arising from process effects on production photomasks. Mask data is sized in real time to compensate for pattern-dependent errors related to local pattern density, as well as for systematic pattern-independent errors such as radial CD signatures. Corrections are made in the pixel domain in the advanced adjustments processor, which also performs global biasing, stamp distortion compensation, and corner enhancement. In the Sigma7500, the mask pattern is imaged with full edge addressability in each

  2. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    Science.gov (United States)

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable for two reasons. First, the elastic net estimator is biased in selecting genes. Second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and to encourage grouping effects simultaneously. The real data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification.
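
    AAElastic itself is not available in standard libraries, but the underlying elastic-net-penalised logistic regression that it builds on can be sketched with scikit-learn, where l1_ratio balances sparsity (gene selection) against the grouping effect. The toy data dimensions and hyperparameters are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_samples, n_genes = 60, 500               # toy "high-dimensional" expression matrix
        X = rng.normal(size=(n_samples, n_genes))
        beta = np.zeros(n_genes); beta[:5] = 2.0   # only 5 genes are truly informative
        y = (X @ beta + rng.normal(size=n_samples) > 0).astype(int)

        X = StandardScaler().fit_transform(X)
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.5, C=0.5, max_iter=5000)
        clf.fit(X, y)

        selected = np.flatnonzero(clf.coef_[0])    # genes with non-zero coefficients
        print(f"{selected.size} genes selected; first few: {selected[:10]}")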

  3. High-level fusion of depth and intensity for pedestrian classification

    NARCIS (Netherlands)

    Rohrbach, M.; Enzweiler, M.; Gavrila, D.M.

    2009-01-01

    This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local

  4. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Full Text Available Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor that encodes color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT); we call the result the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  5. A new device for liver cancer biomarker detection with high accuracy

    Directory of Open Access Journals (Sweden)

    Shuaipeng Wang

    2015-06-01

    Full Text Available A novel cantilever array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced variation of the spring constant k can be dramatically reduced in comparison with that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained by the fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.

  6. High Accuracy Decoding of Dynamical Motion from a Large Retinal Population.

    Directory of Open Access Journals (Sweden)

    Olivier Marre

    2015-07-01

    Full Text Available Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar's position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina's population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar's position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
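
    The linear decoding step can be sketched as a regression from binned population spike counts to stimulus position; ridge regression is used here as a stand-in for the paper's linear decoder, and the synthetic diffusive trajectory, tuning curves and cell count are illustrative assumptions rather than the recorded data.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_bins, n_cells = 2000, 120

        # Synthetic diffusive bar trajectory and Poisson responses of tuned cells.
        position = np.cumsum(rng.normal(0, 0.05, n_bins))
        centers = rng.uniform(position.min(), position.max(), n_cells)
        rates = 5.0 * np.exp(-0.5 * ((position[:, None] - centers[None, :]) / 0.5) ** 2)
        counts = rng.poisson(rates)

        # Linear decoder: position estimated as a weighted sum of spike counts.
        X_tr, X_te, y_tr, y_te = train_test_split(counts, position, test_size=0.25, random_state=0)
        decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
        err = np.sqrt(np.mean((decoder.predict(X_te) - y_te) ** 2))
        print(f"RMS decoding error: {err:.3f} (in bar-position units)")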

  7. Classification of microseismic events in high stress zone

    Institute of Scientific and Technical Information of China (English)

    CAO An-ye; DOU Lin-ming; YAN Ru-ling; JIANG Heng; LU Cai-ping; DU Tao-tao; LU Zhen-yu

    2009-01-01

    For the purpose of gaining a better understanding of the failure mechanisms of rock fracturing in mines, equivalent point source models of tensile, shear and explosive seismic events were established, and the relationships between the far-field seismic displacements of the waves and the corresponding equivalent forces were analyzed. Based on the results of microseismic monitoring carried out during the mining of the 9202 working face under the upper remnant coal pillar in Sanhejian Mine, the waveform features of the seismic events associated with different failure modes were further analyzed. The results show that the signals corresponding to different failure mechanisms have different radiation patterns of the seismic displacements and different waveform characteristics, such as dominant frequency, energy released, and the ratio of S- to P-wave energy. In addition, the rock bursts that occurred in the high-stress zone are mainly the result of strong shear fracturing during the mining process. The results of this study have significantly improved the understanding of the characteristics of failures associated with underground mining, and will greatly benefit the prevention and control of rock burst hazards in burst-prone mines.

  8. A New Framework of the Unsupervised Classification for High-Resolution Remote Sensing Image

    Directory of Open Access Journals (Sweden)

    Zhiyong Lv

    2012-11-01

    Full Text Available Classification plays a significant role in change detection when monitoring the evolution of the Earth's surface. This paper proposes a novel object-oriented framework for the unsupervised classification of high-resolution remote sensing images based on Jenks' optimization. The fractal net evolution approach is employed as the image segmentation technique, the spectral feature of each image object is extracted, and an algorithm based on Jenks' optimization is adopted as the classifier. Two experiments with different image platforms are conducted to evaluate the performance of the proposed framework and to compare it with traditional unsupervised classification algorithms such as the iterative self-organizing data analysis technique algorithm (ISODATA) and k-means clustering. The proposed approach is found to be feasible and valid.
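
    Jenks' optimization (natural breaks) places class boundaries on a one-dimensional feature so that within-class variance is minimised. A compact approximation, assuming NumPy and scikit-learn, is one-dimensional k-means on the mean spectral value of each segment; the exact Jenks dynamic-programming solution and the segment features used here are not the paper's, the data below are synthetic stand-ins.

        import numpy as np
        from sklearn.cluster import KMeans

        def natural_breaks(values, n_classes):
            """Approximate Jenks natural breaks via 1-D k-means (minimises within-class variance)."""
            values = np.asarray(values, dtype=float).reshape(-1, 1)
            km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(values)
            # Relabel clusters so class indices increase with feature value.
            order = np.argsort(km.cluster_centers_.ravel())
            remap = np.empty_like(order)
            remap[order] = np.arange(n_classes)
            return remap[km.labels_]

        # Mean brightness of image objects produced by segmentation (synthetic).
        rng = np.random.default_rng(0)
        object_means = np.concatenate([rng.normal(40, 5, 100), rng.normal(90, 8, 100),
                                       rng.normal(160, 10, 100)])
        classes = natural_breaks(object_means, n_classes=3)
        print(np.bincount(classes))  # roughly 100 objects per class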

  9. Land Cover and Crop Type Classification along the Season Based on Biophysical Variables Retrieved from Multi-Sensor High-Resolution Time Series

    Directory of Open Access Journals (Sweden)

    François Waldner

    2015-08-01

    Full Text Available With the ever-increasing number of satellites and the availability of data free of charge, the integration of multi-sensor images in coherent time series offers new opportunities for land cover and crop type classification. This article investigates the potential of structural biophysical variables as common parameters to consistently combine multi-sensor time series and to exploit them for land/crop cover classification. Artificial neural networks were trained based on a radiative transfer model in order to retrieve high resolution LAI, FAPAR and FCOVER from Landsat-8 and SPOT-4. The correlation coefficients between field measurements and the retrieved biophysical variables were 0.83, 0.85 and 0.79 for LAI, FAPAR and FCOVER, respectively. The retrieved biophysical variables’ time series displayed consistent average temporal trajectories, even though the class variability and signal-to-noise ratio increased compared to NDVI. Six random forest classifiers were trained and applied along the season with different inputs: spectral bands, NDVI, as well as FAPAR, LAI and FCOVER, separately and jointly. Classifications with structural biophysical variables reached end-of-season overall accuracies ranging from 73%–76% when used alone and 77% when used jointly. This corresponds to 90% and 95% of the accuracy level achieved with the spectral bands and NDVI. FCOVER appears to be the most promising biophysical variable for classification. When assuming that the cropland extent is known, crop type classification reaches 89% with spectral information, 87% with the NDVI and 81%–84% with biophysical variables.
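
    The classification step can be sketched with scikit-learn: each sample carries a per-date stack of LAI, FAPAR and FCOVER values and a random forest is trained on the joint feature vector. The synthetic trajectories, class count and hyperparameters below are assumptions, not the study's retrievals.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_samples, n_dates = 600, 10
        classes = rng.integers(0, 4, n_samples)              # 4 hypothetical crop/land-cover classes

        # Synthetic LAI, FAPAR, FCOVER time series, shifted per class.
        t = np.linspace(0, 1, n_dates)
        base = np.sin(np.pi * t)[None, :]
        lai = 3.0 * base * (1 + 0.20 * classes[:, None]) + rng.normal(0, 0.20, (n_samples, n_dates))
        fapar = 0.9 * base * (1 + 0.10 * classes[:, None]) + rng.normal(0, 0.05, (n_samples, n_dates))
        fcover = 0.8 * base * (1 + 0.15 * classes[:, None]) + rng.normal(0, 0.05, (n_samples, n_dates))

        X = np.hstack([lai, fapar, fcover])                  # joint biophysical feature vector
        rf = RandomForestClassifier(n_estimators=300, random_state=0)
        print("CV overall accuracy:", cross_val_score(rf, X, classes, cv=5).mean())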

  10. High-resolution CT of nontuberculous mycobacterium infection in adult CF patients: diagnostic accuracy

    Energy Technology Data Exchange (ETDEWEB)

    McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D. [St. Vincent's University Hospital, Department of Radiology, Dublin (Ireland); DeJong, Pim A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Loeve, Martine; Tiddens, Harm A.W.M. [Erasmus MC-Sophia Children's Hospital, Department of Radiology, Department of Pediatric Pulmonology and Allergology, Rotterdam (Netherlands); McKone, Edward; Gallagher, Charles G. [St. Vincent's University Hospital, Department of Respiratory Medicine and National Referral Centre for Adult Cystic Fibrosis, Dublin (Ireland)

    2012-12-15

    To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium (NTM) infection in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age-, gender- and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM(+) and NTM(-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operator curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)

  11. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

    Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' superior olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  12. High accuracy jog CD control on OPC pattern by advanced laser writer Sigma7500

    Science.gov (United States)

    Chin, Tomas; Wu, Wen-Bin; Shih, Chiang-Lin

    2008-10-01

    With the progress of mask writer technology, 50 kV electron-beam writers consistently deliver better pattern fidelity and critical dimension (CD) control than traditional laser raster-scan writers, because the laser spot size is limited by the longer wavelength of the laser relative to the electron beam. As far as Optical Proximity Correction (OPC) pattern fidelity is concerned, critical masks with OPC currently have to be written with a variable-shaped-beam (VSB) electron-beam writer. However, over-aggressive OPC fragmentation induces an abrupt explosion of data volume, longer writing time, higher mask cost and even mask quality degradation [1]. The Micronic Sigma7500 laser writer introduces a novel imaging system combining partially coherent light and a DUV spatial light modulator (SLM) to generate a high-quality pattern image [2]. The benefit of a raster-scan laser writer is high throughput with consistent writing time regardless of pattern geometry, complexity and data size. However, pattern CD accuracy still needs improvement. This study evaluates the jog CD control capability of the Sigma7500 on typical OPC line-and-space test patterns with different orientations of 0°, 90°, 45° and 135°. In addition, mask CD uniformity and OPC jog height linearity are also demonstrated.

  13. High-Accuracy Ring Laser Gyroscopes: Earth Rotation Rate and Relativistic Effects

    Science.gov (United States)

    Beverini, N.; Di Virgilio, A.; Belfi, J.; Ortolan, A.; Schreiber, K. U.; Gebauer, A.; Klügel, T.

    2016-06-01

    The Gross Ring G is a square ring laser gyroscope, built as a monolithic Zerodur structure with 4 m length on all sides. It has demonstrated that a large ring laser provides a sensitivity high enough to measure the rotational rate of the Earth with a high precision ΔΩE. The GINGER project intends to take this level of sensitivity further and to improve the accuracy and the long-term stability. A monolithic structure similar to the G ring laser is not available for GINGER. Therefore the preliminary goal is the demonstration of the feasibility of a larger gyroscope structure, where the mechanical stability is obtained through an active control of the geometry. A prototype moderate-size gyroscope (GP-2) has been set up in Pisa in order to test this active control of the ring geometry, while a second structure (GINGERino) has been installed inside the Gran Sasso underground laboratory in order to investigate the properties of a deep underground laboratory in view of an installation of a future GINGER apparatus. Preliminary data on these two latter instruments are presented.

  14. High accuracy and transferability of a neural network potential through charge equilibration for calcium fluoride

    Science.gov (United States)

    Faraji, Somayeh; Ghasemi, S. Alireza; Rostami, Samare; Rasoulkhani, Robabe; Schaefer, Bastian; Goedecker, Stefan; Amsler, Maximilian

    2017-03-01

    We investigate the accuracy and transferability of a recently developed high-dimensional neural network (NN) method for calcium fluoride, fitted to a database of ab initio density functional theory (DFT) calculations based on the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. We call the method the charge equilibration via neural network technique (CENT). Although the fitting database contains only clusters (i.e., nonperiodic structures), the NN scheme accurately describes a variety of bulk properties. In contrast to other available empirical methods, the CENT potential has a much simpler functional form; nevertheless, it correctly reproduces the PBE energetics of various crystalline phases both at ambient and at high pressure. Surface energies and structures as well as dynamical properties derived from phonon calculations are also in good agreement with PBE results. Overall, the difference between the values obtained with the CENT potential and the PBE reference values is less than or equal to the difference between the values of the local density approximation (LDA) and Born-Mayer-Huggins (BMH) methods and those calculated with the PBE exchange-correlation functional.

  15. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device.
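
    The general idea can be sketched with SciPy: a periodic cubic spline fitted to a coarse sine table replaces a very large lookup table, and its deviation from the ideal sinusoid on a dense phase grid approximates the converter error. The 64-segment table and the error metrics printed here are illustrative assumptions, not the paper's fixed-point configuration.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Coarse phase-to-amplitude table: 64 segments over one period.
        knots = np.linspace(0.0, 2 * np.pi, 65)
        table = np.sin(knots)
        table[-1] = table[0]              # enforce exact periodicity for the spline
        spline = CubicSpline(knots, table, bc_type="periodic")

        # Evaluate the piecewise-cubic converter on a dense phase grid, as a DDS would.
        phase = np.linspace(0.0, 2 * np.pi, 1 << 16, endpoint=False)
        err = spline(phase) - np.sin(phase)
        print(f"max abs error: {np.max(np.abs(err)):.2e}")
        print(f"rms error:     {np.sqrt(np.mean(err ** 2)):.2e}")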

  16. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Jong-Seok Kim

    2016-01-01

    Full Text Available The zero-potential scanning circuit is widely used as a read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error which originates from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array; therefore, it severely increases the power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver which uses only one operational amplifier to drive all rows of a sensor array with high accuracy. The measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.

  17. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High-Accuracy Infrared Radiometers on Ships of Opportunity

    Science.gov (United States)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  18. Accuracy of the field triage protocol in selecting severely injured patients after high energy trauma.

    Science.gov (United States)

    van Laarhoven, J J E M; Lansink, K W W; van Heijl, M; Lichtveld, R A; Leenen, L P H

    2014-05-01

    For optimal treatment of trauma patients it is of great importance to identify patients who are at risk of severe injuries. The Dutch field triage protocol for trauma patients, the LPA (National Protocol of Ambulance Services), is designed to get the right patient, in the right time, to the right hospital. The purpose of this study was to determine the diagnostic accuracy of and compliance with this triage protocol. Triage criteria were categorised into physiological condition (P), mechanism of trauma (M) and injury type (I). A retrospective analysis of prospectively collected data of all high-energy trauma patients from 2008 to 2011 in the region Central Netherlands was performed. Diagnostic parameters (sensitivity, specificity, negative predictive value, positive predictive value) of the field triage protocol for selecting severely injured patients were calculated, including rates of under- and overtriage. Undertriage was defined as the proportion of severely injured patients (Injury Severity Score (ISS) ≥ 16) who were transported to a level two or three trauma care centre. Overtriage was defined as the proportion of non-severely injured patients (ISS < 16) who were transported to a level one trauma centre. The sensitivity and specificity of the protocol were 89.1% (95% confidence interval (CI) 84.4-92.6) and 60.5% (95% CI 57.9-63.1), respectively. The overall rate of undertriage was 10.9% (95% CI 7.4-15.7) and the overall rate of overtriage was 39.5% (95% CI 36.9-42.1). These rates were 16.5% and 37.7%, respectively, for patients with M+I-P-. Compliance with the triage protocol for patients with M+I-P- was 78.7%. Furthermore, compliance in patients with either a positive I+ or positive P+ was 91.2%. The overall rate of undertriage (10.8%) was mainly influenced by a high rate of undertriage in the group of patients with only a positive mechanism criterion, therefore showing low diagnostic accuracy in selecting severely injured patients. As a consequence these patients with severe injury go undetected using the current triage protocol. As it has been shown that severely injured

  19. Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment

    Science.gov (United States)

    Skovgaard Andersen, Mikkel; Gergely, Áron; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-01-01

    The transition zone between land and water is difficult to map with conventional geophysical systems due to shallow water depths and often challenging environmental conditions. The emerging technology of airborne topobathymetric light detection and ranging (lidar) is capable of providing both topographic and bathymetric elevation information, using only a single green laser, resulting in seamless coverage of the land-water transition zone. However, there is no transparent and reproducible method for processing green topobathymetric lidar data into a digital elevation model (DEM). The general processing steps involve data filtering, water surface detection and refraction correction. Specifically, the procedure of water surface detection and modelling, solely using green laser lidar data, has not previously been described in detail for tidal environments. The aim of this study was to fill this gap by developing a step-by-step procedure for making a digital water surface model (DWSM) using the green laser lidar data. The detailed description of the processing procedure makes it reliable, user-friendly and repeatable. A DEM was obtained from the processed topobathymetric lidar data collected in spring 2014 from the Knudedyb tidal inlet system in the Danish Wadden Sea. The vertical accuracy of the lidar data is determined to be ±8 cm at a 95% confidence level, and the horizontal accuracy (mean error) is determined to be ±10 cm. The lidar technique is found capable of detecting features smaller than 1 m². The derived high-resolution DEM was applied for detection and classification of geomorphometric and morphological features within the natural environment of the study area. Initially, the bathymetric position index (BPI) and the slope of the DEM were used to make a continuous classification of the geomorphometry. Subsequently, stage (or elevation in relation to tidal range) and a combination of statistical neighbourhood
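
    A minimal sketch of the geomorphometric step, assuming NumPy and SciPy: the BPI is the difference between a cell's elevation and the mean of its neighbourhood, and the slope follows from the local elevation gradient. The window size, cell size, random DEM and the simple classification rule are illustrative assumptions, not the study's parameters.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def bpi(dem, window=25):
            """Bathymetric position index: cell elevation minus neighbourhood mean."""
            return dem - uniform_filter(dem, size=window, mode="nearest")

        def slope_deg(dem, cell_size=1.0):
            """Slope in degrees from the local elevation gradient."""
            dzdy, dzdx = np.gradient(dem, cell_size)
            return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

        # Synthetic 1 m grid standing in for the topobathymetric lidar DEM.
        rng = np.random.default_rng(0)
        y, x = np.mgrid[0:200, 0:200]
        dem = -2.0 + 0.01 * x + 0.5 * np.sin(x / 15.0) + rng.normal(0, 0.02, (200, 200))

        b, s = bpi(dem), slope_deg(dem)
        crest = (b > 0.1) & (s < 5)        # e.g. a simple rule for gently sloping highs
        print("cells classified as crest-like:", int(crest.sum()))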

  20. Classification of High Resolution C-Band PolSAR Data on Polarimetric and Texture Features

    Science.gov (United States)

    Zhao, Lei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Li, Lan

    2014-11-01

    PolSAR image classification is an important technique in the remote sensing area. For high-resolution PolSAR images, polarimetric and texture features are equally important for classification. Texture features are mainly extracted through the Gray Level Co-occurrence Matrix (GLCM) method, but this method has some deficiencies. First, the GLCM method can only work on gray-scale images; second, the number of texture features extracted by the GLCM method is generally up to dozens, or even hundreds. Too many features may carry large redundancy and will increase the complexity of classification. Therefore, this paper introduces a new texture feature factor, RK, derived from a non-Gaussian statistical model of the PolSAR image. Using domestic airborne C-band PolSAR image data, we performed classification combining the polarimetric and texture characteristics. The results show that the new texture feature factor RK can overcome the above drawbacks and can achieve the same performance as the GLCM method.
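
    The GLCM baseline discussed above can be sketched with scikit-image (the functions are named graycomatrix/graycoprops in recent releases and greycomatrix/greycoprops in older ones); the random 8-bit patch below is a stand-in for one quantised PolSAR intensity channel, and the distance/angle choices are illustrative assumptions.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # quantised intensity patch

        # Co-occurrence matrix for 4 orientations at distance 1, then scalar texture features.
        glcm = graycomatrix(patch, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=32, symmetric=True, normed=True)
        features = {prop: graycoprops(glcm, prop).mean()             # average over orientations
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)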

  1. Objective climate classification as a framework for assessing projected climate change in High Mountain Asia

    Science.gov (United States)

    Forsythe, Nathan; Fowler, Hayley; Pritchard, David; Blenkinsop, Stephen

    2016-04-01

    This study builds upon foundational work by Forsythe et al (2015, doi: 10.5194/esd-6-311-2015), which used principal component analysis (PCA) and k-means clustering to derive objective present-climate classifications over High Mountain Asia and adjacent regions (60E to 100E, 20N to 40N) based on global meteorological reanalyses' estimates of the drivers of water resources availability and variability (precipitation, surface shortwave radiation, daily mean near-surface air temperature and its diurnal range). This study refines Forsythe et al (2015) by testing the potential for spatially disaggregating coarse global reanalyses (and climate model outputs) using iterative classification and regression processing to achieve a 5-km (0.05 decimal degree) horizontal resolution, in order to better capture the severe topographic range and gradients of the HMA domain. This spatial refinement should allow for better intercomparability of the resultant classifications derived from datasets with different native resolutions. This intercomparability is critical because the second stage of this study assesses climate change projections from a range of regional climate model experiments - the UK Hadley Centre RQUMP 25-km South Asia perturbed physics ensemble, the CORDEX South Asia domain and (pending dataset availability) the NextData EC-Earth 15-km high-resolution HMA domain - using the derived objective classifications as a framework for aggregation. By establishing sub-regional units of relative homogeneity, the objective classification approach allows a twofold assessment of projected future climate scenarios, i.e. change can be quantified not only as the perturbation of key variables (e.g. precipitation, temperature, etc.) but also in terms of the spatial descriptors (areal extent, surface elevation range and mean, latitudinal and longitudinal bounds) of the identified climate zones. It is expected that this novel approach, and in particular the very high target spatial resolution, will yield important insights into the
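
    A hedged sketch of the objective-classification core (not the authors' exact pipeline), assuming scikit-learn: standardise the per-grid-cell climate drivers, reduce them with PCA, and cluster with k-means to obtain climate zones. The grid size, synthetic variable distributions, retained components and cluster count are all assumptions for illustration.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_cells = 5000                         # grid cells in the domain (illustrative)
        # Columns: precipitation, shortwave radiation, mean temperature, diurnal temperature range.
        climate = np.column_stack([rng.gamma(2.0, 1.5, n_cells),
                                   rng.normal(200, 30, n_cells),
                                   rng.normal(5, 8, n_cells),
                                   rng.normal(10, 3, n_cells)])

        Z = StandardScaler().fit_transform(climate)
        scores = PCA(n_components=3).fit_transform(Z)          # retain leading components
        zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
        print("cells per climate zone:", np.bincount(zones))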

  2. Segmentation and classification of high resolution imagery for mapping individual species in a closed canopy, deciduous forest

    Institute of Scientific and Technical Information of China (English)

    Timothy A. Warner; James B. McGraw; Rick Landenberger

    2006-01-01

    In this paper we investigate the use of a shadow-based delineation program for identifying segments in imagery of a closed canopy, deciduous forest, in West Virginia, USA, as a way to reduce the noise associated with per-pixel classification in forested environments. Shadows typically cluster along the boundaries of trees and therefore can be used to provide a network of nodes for the delineation of segments. A minimum cost path algorithm, where cost is defined as the cumulative sum of brightness values traversed along the connecting route, was used to connect shadow clumps. To test this approach, a series of classifications was undertaken using a multispectral digital aerial image of a six hectare test site and a minimum cost path segmentation. Three species were mapped: oaks, red maple and yellow poplar. The accuracy of an aspatial maximum likelihood classification (termed PERPIXEL classification) was 68.5%, compared to 74.0% for classification using the mean vector of the segments identified with the minimum cost path algorithm (MEAN_SEG), and 78% when the most common class present in the segment is assigned to the entire segment (POSTCLASS_SEG). By comparison, multispectral classification of the multispectral data using the field-mapped polygons of individual trees as segments, produced an accuracy of 82.3% when the mean vector of the polygon was used for classification (MEAN_TREE), and 85.7% when the most common class was assigned to the entire polygon (POSTCLASS_TREE). A moving window-based post-classification majority filter (POSTCLASS_MAJ5BY5) produced an intermediate accuracy value, 73.8%. The minimum cost path segmentation algorithm was found to correctly delineate approximately 28% of the trees. The remaining trees were either segmented, aggregated, or a combination of both segmented and aggregated. Varying the threshold that was used to discriminate shadows appeared to have little effect on the number of correctly delineated trees, or on the overall

  3. Comparative analysis of the processing accuracy of high strength metal sheets by AWJ, laser and plasma

    Science.gov (United States)

    Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.

    2016-08-01

    Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Quantification of the dimensional accuracy was achieved by measuring the deviations of some geometric parameters of the part (two lengths and two radii). It was found that, in the case of Ramor 400 steel, at the jet inlet the deviations of the part radii are quite small for all three analysed processes; for the linear dimensions, instead, the deviations are small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared to those obtained at the jet inlet, for both materials as well as for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small in the case of AWJ and laser cutting but larger in the case of plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only in the case of laser processing, the other two processes leading to very large deviations.

  4. Accuracy of Intraocular Lens Power Calculation Formulas for Highly Myopic Eyes

    Science.gov (United States)

    Zhang, Yichi; Liang, Xiao Ying; Liu, Shu; Lee, Jacky W. Y.; Bhaskar, Srinivasan; Lam, Dennis S. C.

    2016-01-01

    Purpose. To evaluate and compare the accuracy of different intraocular lens (IOL) power calculation formulas for eyes with an axial length (AL) greater than 26.00 mm. Methods. This study reviewed 407 eyes of 219 patients with AL longer than 26.0 mm. The refractive prediction errors of IOL power calculation formulas (SRK/T, Haigis, Holladay, Hoffer Q, and Barrett Universal II) using User Group for Laser Interference Biometry (ULIB) constants were evaluated and compared. Results. One hundred seventy-one eyes were enrolled. The Barrett Universal II formula had the lowest mean absolute error (MAE), the SRK/T and Haigis formulas had similar MAEs, and the statistically highest MAEs were seen with the Holladay and Hoffer Q formulas. The interquartile range of the Barrett Universal II formula was also the lowest among all the formulas. The Barrett Universal II formula yielded the highest percentage of eyes within ±1.0 D and ±0.5 D of the target refraction in this study (97.24% and 79.56%, resp.). Conclusions. The Barrett Universal II formula produced the lowest predictive error and the least variable predictive error compared with the SRK/T, Haigis, Holladay, and Hoffer Q formulas. For highly myopic eyes, the Barrett Universal II formula may be a more suitable choice. PMID:27119018

  5. Accuracy of Intraocular Lens Power Calculation Formulas for Highly Myopic Eyes

    Directory of Open Access Journals (Sweden)

    Yichi Zhang

    2016-01-01

    Full Text Available Purpose. To evaluate and compare the accuracy of different intraocular lens (IOL) power calculation formulas for eyes with an axial length (AL) greater than 26.00 mm. Methods. This study reviewed 407 eyes of 219 patients with AL longer than 26.0 mm. The refractive prediction errors of IOL power calculation formulas (SRK/T, Haigis, Holladay, Hoffer Q, and Barrett Universal II) using User Group for Laser Interference Biometry (ULIB) constants were evaluated and compared. Results. One hundred seventy-one eyes were enrolled. The Barrett Universal II formula had the lowest mean absolute error (MAE), the SRK/T and Haigis formulas had similar MAEs, and the statistically highest MAEs were seen with the Holladay and Hoffer Q formulas. The interquartile range of the Barrett Universal II formula was also the lowest among all the formulas. The Barrett Universal II formula yielded the highest percentage of eyes within ±1.0 D and ±0.5 D of the target refraction in this study (97.24% and 79.56%, resp.). Conclusions. The Barrett Universal II formula produced the lowest predictive error and the least variable predictive error compared with the SRK/T, Haigis, Holladay, and Hoffer Q formulas. For highly myopic eyes, the Barrett Universal II formula may be a more suitable choice.

  6. Rapid, high-accuracy detection of strabismus and amblyopia using the pediatric vision scanner.

    Science.gov (United States)

    Loudon, Sjoukje E; Rook, Caitlin A; Nassif, Deborah S; Piskun, Nadya V; Hunter, David G

    2011-07-07

    Purpose. The Pediatric Vision Scanner (PVS) detects strabismus by identifying ocular fixation in both eyes simultaneously. This study was undertaken to assess the ability of the PVS to identify patients with amblyopia or strabismus, particularly anisometropic amblyopia with no measurable strabismus. Methods. The PVS test, administered from 40 cm and requiring 2.5 seconds of attention, generated a binocularity score (BIN, 0%-100%). We tested 154 patients and 48 controls between the ages of 2 and 18 years. BIN scores of amblyopic children and controls were measured, and 21 children received sequential PVS measurements to detect any changes in BIN resulting from amblyopia treatment. Results. With the pass/refer threshold set at BIN 60%, sensitivity and specificity were 96% for the detection of amblyopia or strabismus. Assuming a 5% prevalence of amblyopia or strabismus, the inferred positive and negative predictive values of the PVS were 56% and 100%, respectively. Fixation accuracy was significantly reduced in amblyopic eyes. In anisometropic amblyopia patients treated successfully, the BIN improved to 100%. Conclusions. The PVS identified children with amblyopia or strabismus with high sensitivity and specificity, while successful treatment restored normal BIN scores in amblyopic patients without strabismus. The results support the hypothesis that the PVS detects strabismus and amblyopia directly. Future strategies for screening by nonspecialists may thus be based on diagnostic detection of amblyopia and strabismus rather than the estimation of risk factors, allowing for rapid, accurate identification of children with amblyopia early in life when it is most amenable to treatment.

  7. A high accuracy broadband measurement system for time resolved complex bioimpedance measurements.

    Science.gov (United States)

    Kaufmann, S; Malhotra, A; Ardelt, G; Ryschka, M

    2014-06-01

    Bioimpedance measurements are useful tools in biomedical engineering and life science. Bioimpedance is the electrical impedance of living tissue and can be used in the analysis of various physiological parameters. Bioimpedance is commonly measured by injecting a small well known alternating current via surface electrodes into an object under test and measuring the resultant surface voltages. It is non-invasive, painless and has no known hazards. This work presents a field programmable gate array based high accuracy broadband bioimpedance measurement system for time resolved bioimpedance measurements. The system is able to measure magnitude and phase of complex impedances under test in a frequency range of about 10-500 kHz with excitation currents from 10 µA to 5 mA. The overall measurement uncertainties stay below 1% for the impedance magnitude and below 0.5° for the phase in most measurement ranges. Furthermore, the described system has a sample rate of up to 3840 impedance spectra per second. The performance of the bioimpedance measurement system is demonstrated with a resistor based system calibration and with measurements on biological samples.
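
    The basic magnitude/phase extraction such a system performs can be sketched with NumPy as quadrature (lock-in style) demodulation of the measured voltage over an integer number of excitation periods. The sample rate, excitation current, simulated impedance and noise level below are assumptions, not the instrument's specifications.

        import numpy as np

        fs, f0 = 5_000_000, 100_000            # sample rate and excitation frequency (Hz), assumed
        i_amp = 1e-3                           # 1 mA excitation current
        n = fs // f0 * 50                      # an integer number (50) of excitation periods
        t = np.arange(n) / fs

        # Simulated tissue impedance: 500 ohm magnitude, -12 degree phase, plus noise.
        z_mag, z_phase = 500.0, np.deg2rad(-12.0)
        v = (z_mag * i_amp * np.sin(2 * np.pi * f0 * t + z_phase)
             + np.random.default_rng(0).normal(0, 1e-4, n))

        # Quadrature demodulation against in-phase and 90-degree-shifted references.
        i_comp = np.mean(v * np.sin(2 * np.pi * f0 * t))      # in-phase component
        q_comp = np.mean(v * np.cos(2 * np.pi * f0 * t))      # quadrature component
        mag = 2 * np.hypot(i_comp, q_comp) / i_amp
        phase = np.degrees(np.arctan2(q_comp, i_comp))
        print(f"|Z| = {mag:.1f} ohm, phase = {phase:.2f} deg")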

  8. GRACE Data-based High Accuracy Global Static Earth's Gravity Field Model

    Directory of Open Access Journals (Sweden)

    CHEN Qiujie

    2016-04-01

    Full Text Available Recovering a highly accurate static Earth gravity field model from GRACE satellite data is one of the hot topics in geodesy. Since the linearization errors of the dynamic approach quickly increase when the satellite arc length is extended, we established a modified dynamic approach for processing GRACE orbit and range-rate measurements in this paper, which treats the orbit observations of the twin GRACE satellites as approximate values for linearization. Using GRACE data spanning the period Jan. 2003 to Dec. 2010, containing satellite attitudes, orbits, range-rates, and non-conservative forces, we developed two global static gravity field models. One is the unconstrained solution called Tongji-Dyn01s, complete to degree and order 180; the other is the Tongji-Dyn01k model computed by using a Kaula constraint. Comparisons between our models and the latest GRACE-only models (including AIUB-GRACE03, GGM05S, ITSG-Grace2014k and Tongji-GRACE01) published by different international groups, and external validations with marine gravity anomalies from the DTU13 product and height anomalies from GPS/levelling data, were performed in this study. The results demonstrate that the Tongji-Dyn01s has the same accuracy level as the latest GRACE-only models, while the Tongji-Dyn01k model is closer to EIGEN6C2 than the other GRACE-only models as a whole.

  9. High Accuracy Extraction of Respiratory Sinus Arrhythmia with Statistical Processing using Normal Distribution

    Science.gov (United States)

    Numata, Takashi; Ogawa, Yutaro; Yoshida, Lui; Kotani, Kiyoshi; Jimbo, Yasuhiko

    The autonomic nervous system is important in maintaining homeostasis by mediating the opposing effects of sympathetic and parasympathetic nervous activity on organs. Although it is known that the amplitude of RSA (Respiratory Sinus Arrhythmia) is an index of parasympathetic nervous activity, it is difficult to estimate that activity in real time in everyday situations. This is partly caused by body motions and extrasystoles. In addition, automatic recognition of the R-wave on electrocardiograms is required for real-time analysis of RSA amplitude, and false recognition of the R-wave remains an unresolved problem. In this paper, we propose a method to evaluate the amplitude of RSA accurately using statistical processing with probabilistic models. We then estimate parasympathetic nervous activity during body motion and isometric exercise to examine the validity of the method. Using the proposed method, we demonstrate that the amplitude of RSA can be extracted even in the presence of falsely recognized R-waves. In addition, an appropriate threshold for the estimate is one or five percent, because with a threshold of ten percent the waveforms of RSA amplitude do not follow the abrupt changes in parasympathetic nervous activity evoked by isometric exercise. Furthermore, the method using the normal distribution is found to be more appropriate for the statistical processing than that using the chi-square distribution. Therefore, we expect that the proposed method can evaluate parasympathetic nervous activity with high accuracy in everyday situations.
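
    A minimal sketch of the statistical idea (not the authors' exact implementation), assuming NumPy and SciPy: R-R intervals falling outside the two-sided confidence band of a normal distribution fitted to the recent beats are treated as falsely recognized R-waves and discarded before the RSA amplitude is estimated. The interval values and the 5% threshold are illustrative choices.

        import numpy as np
        from scipy import stats

        def reject_false_rr(rr_ms, alpha=0.05):
            """Keep R-R intervals inside the two-sided (1 - alpha) band of a fitted normal."""
            mu, sd = np.mean(rr_ms), np.std(rr_ms, ddof=1)
            lo, hi = stats.norm.interval(1 - alpha, loc=mu, scale=sd)
            keep = (rr_ms >= lo) & (rr_ms <= hi)
            return rr_ms[keep], keep

        rng = np.random.default_rng(0)
        rr = rng.normal(850, 40, 120)                 # plausible resting R-R intervals (ms)
        rr[[10, 55]] = [430, 1700]                    # artefacts from a spurious / missed R-wave
        clean, mask = reject_false_rr(rr, alpha=0.05)
        print(f"rejected {np.count_nonzero(~mask)} of {rr.size} intervals")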

  10. Raman spectroscopic determination of the molecular constants of the hydrogen isotopologues with high accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Krasch, Bennet; Mirz, Sebastian; Groessle, Robin [Karlsruhe Institute of Technology KIT (Germany). Institute for Technical Physics (ITEP), Tritium Laboratory Karlsruhe (TLK); Collaboration: KATRIN-Collaboration

    2016-07-01

    The interest in thermodynamic properties of gases, such as the chemical equilibrium, is faced with the challenge of time-consuming and technically extensive experimental setups. One possible solution is the derivation of these properties from the molecular constants. The rotational and vibrational movement of diatomic molecules, such as the hydrogen isotopologues, is described by the concept of the rotational anharmonic oscillator. The molecular constants are the free parameters of this concept. The molecular constants themselves can be determined by measuring the line positions of rotational and/or rovibrational transitions, e.g. with Raman spectroscopy, as has been done for hydrogen for several years. In this contribution, a Raman method was developed to measure the molecular constants of the hydrogen isotopologues with high accuracy in order to obtain reliable results. Not only was the method developed, but a complete measurement uncertainty budget was also set up. The uncertainty budget contains all possible sources of uncertainty from the measurement period and the analysis process, as well as the contribution of each individual uncertainty. The method and the uncertainty budget were tested exemplarily on deuterium.

  11. Fast-type high-accuracy universal polarimeter using charge-coupled device spectrometer

    Directory of Open Access Journals (Sweden)

    Akifumi Takanabe

    2017-02-01

    Full Text Available A fast, high-accuracy universal polarimeter was developed using a charge-coupled device (CCD) spectrometer (CCD-HAUP), to carry out simultaneous optical anisotropic (linear birefringence, LB; linear dichroism, LD) and chiroptical (circular birefringence, CB; circular dichroism, CD) measurements on single crystals without any pretreatment, in the visible region between 400–680 nm. The principle of the HAUP method is to measure the intensities of emergent light passing through a polarizer, a crystal sample, and then an analyzer, as the azimuth angles of the polarizer and analyzer are independently altered. The CCD-HAUP has the unique feature that white transmitted light intensity can be measured using a CCD spectrometer, compared with the generalized HAUP (G-HAUP) system in which monochromatic transmitted light is measured using a photomultiplier. The CCD-HAUP measurements across the entire wavelength region are completed within the G-HAUP measurement time for a single wavelength. The CCD-HAUP drastically reduces the measurement time for a dataset to only 1.5 h, from the 24 h required for the G-HAUP system. LB, LD, CB, and CD measurements of single crystals of α-quartz and enantiomeric photomechanical salicylidenephenylethylamines before, during, and after ultraviolet light irradiation show results comparable to those obtained using the G-HAUP system. The newly developed system is very effective for samples susceptible to degradation induced by external stimuli, such as light and heat.

  12. Statistical downscaling of precipitation using local regression and high accuracy surface modeling method

    Science.gov (United States)

    Zhao, Na; Yue, Tianxiang; Zhou, Xun; Zhao, Mingwei; Liu, Yu; Du, Zhengping; Zhang, Lili

    2017-07-01

    Downscaling precipitation is required in local scale climate impact studies. In this paper, a statistical downscaling scheme was presented with a combination of geographically weighted regression (GWR) model and a recently developed method, high accuracy surface modeling method (HASM). This proposed method was compared with another downscaling method using the Coupled Model Intercomparison Project Phase 5 (CMIP5) database and ground-based data from 732 stations across China for the period 1976-2005. The residual which was produced by GWR was modified by comparing different interpolators including HASM, Kriging, inverse distance weighted method (IDW), and Spline. The spatial downscaling from 1° to 1-km grids for period 1976-2005 and future scenarios was achieved by using the proposed downscaling method. The prediction accuracy was assessed at two separate validation sites throughout China and Jiangxi Province on both annual and seasonal scales, with the root mean square error (RMSE), mean relative error (MRE), and mean absolute error (MAE). The results indicate that the developed model in this study outperforms the method that builds transfer function using the gauge values. There is a large improvement in the results when using a residual correction with meteorological station observations. In comparison with other three classical interpolators, HASM shows better performance in modifying the residual produced by local regression method. The success of the developed technique lies in the effective use of the datasets and the modification process of the residual by using HASM. The results from the future climate scenarios show that precipitation exhibits overall increasing trend from T1 (2011-2040) to T2 (2041-2070) and T2 to T3 (2071-2100) in RCP2.6, RCP4.5, and RCP8.5 emission scenarios. The most significant increase occurs in RCP8.5 from T2 to T3, while the lowest increase is found in RCP2.6 from T2 to T3, increased by 47.11 and 2.12 mm, respectively.
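
    The two-step logic (regression transfer function plus residual correction) can be sketched with NumPy under stated assumptions: HASM is not publicly packaged, so the residual surface is rebuilt here with simple inverse-distance weighting as a stand-in, and ordinary least squares on elevation stands in for the geographically weighted regression; the station data and fine grid are synthetic.

        import numpy as np

        def idw(xy_obs, values, xy_target, power=2.0):
            """Inverse-distance-weighted interpolation (stand-in for HASM/Kriging)."""
            d = np.linalg.norm(xy_target[:, None, :] - xy_obs[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-6) ** power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(0)
        # Station data: coordinates, elevation, observed precipitation (synthetic).
        xy = rng.uniform(0, 100, (200, 2))
        elev = rng.uniform(0, 3000, 200)
        precip = 800 + 0.3 * elev + rng.normal(0, 60, 200)

        # Step 1: regression transfer function precipitation ~ elevation (OLS stand-in for GWR).
        A = np.column_stack([np.ones_like(elev), elev])
        coef, *_ = np.linalg.lstsq(A, precip, rcond=None)
        residual = precip - A @ coef

        # Step 2: apply the transfer function on the fine grid and add the interpolated residual.
        gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
        grid_elev = rng.uniform(0, 3000, grid_xy.shape[0])          # fine-resolution DEM stand-in
        downscaled = coef[0] + coef[1] * grid_elev + idw(xy, residual, grid_xy)
        print(downscaled.reshape(gx.shape).round(1)[:2, :5])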

  13. Statistical downscaling of precipitation using local regression and high accuracy surface modeling method

    Science.gov (United States)

    Zhao, Na; Yue, Tianxiang; Zhou, Xun; Zhao, Mingwei; Liu, Yu; Du, Zhengping; Zhang, Lili

    2016-03-01

    Downscaling precipitation is required in local scale climate impact studies. In this paper, a statistical downscaling scheme was presented with a combination of geographically weighted regression (GWR) model and a recently developed method, high accuracy surface modeling method (HASM). This proposed method was compared with another downscaling method using the Coupled Model Intercomparison Project Phase 5 (CMIP5) database and ground-based data from 732 stations across China for the period 1976-2005. The residual which was produced by GWR was modified by comparing different interpolators including HASM, Kriging, inverse distance weighted method (IDW), and Spline. The spatial downscaling from 1° to 1-km grids for period 1976-2005 and future scenarios was achieved by using the proposed downscaling method. The prediction accuracy was assessed at two separate validation sites throughout China and Jiangxi Province on both annual and seasonal scales, with the root mean square error (RMSE), mean relative error (MRE), and mean absolute error (MAE). The results indicate that the developed model in this study outperforms the method that builds transfer function using the gauge values. There is a large improvement in the results when using a residual correction with meteorological station observations. In comparison with other three classical interpolators, HASM shows better performance in modifying the residual produced by local regression method. The success of the developed technique lies in the effective use of the datasets and the modification process of the residual by using HASM. The results from the future climate scenarios show that precipitation exhibits overall increasing trend from T1 (2011-2040) to T2 (2041-2070) and T2 to T3 (2071-2100) in RCP2.6, RCP4.5, and RCP8.5 emission scenarios. The most significant increase occurs in RCP8.5 from T2 to T3, while the lowest increase is found in RCP2.6 from T2 to T3, increased by 47.11 and 2.12 mm, respectively.

  14. Geometric Accuracy Investigations of SEVIRI High Resolution Visible (HRV) Level 1.5 Imagery

    National Research Council Canada - National Science Library

    Sultan Kocaman Aksakal

    2013-01-01

    .... In a joint project between the Swiss GCOS Office and ETH Zurich, geometric accuracy and temporal stability of 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland...

  15. The regulatory benefits of high levels of affect perception accuracy: a process analysis of reactions to stressors in daily life.

    Science.gov (United States)

    Robinson, Michael D; Moeller, Sara K; Buchholz, Maria M; Boyd, Ryan L; Troop-Gordon, Wendy

    2012-08-01

    Individuals attuned to affective signals from the environment may possess an advantage in the emotion-regulation realm. In two studies (total n = 151), individual differences in affective perception accuracy were assessed in an objective, performance-based manner. Subsequently, the same individuals completed daily diary protocols in which daily stressor levels were reported as well as problematic states shown to be stress-reactive in previous studies. In both studies, individual differences in affect perception accuracy interacted with daily stressor levels to predict the problematic outcomes. Daily stressors precipitated problematic reactions--whether depressive feelings (study 1) or somatic symptoms (study 2)--at low levels of affect perception accuracy, but did not do so at high levels of affect perception accuracy. The findings support a regulatory view of such perceptual abilities. Implications for understanding emotion regulation processes, emotional intelligence, and individual differences in reactivity are discussed.

  16. [Accuracy of liquid-based cytology in diagnosis of high-grade squamous cervical intraepithelial neoplasia].

    Science.gov (United States)

    Li, Min; Mei, Ping; Luo, Dong-lan; Wang, Xiao-bing; Liu, Yan-hui

    2012-04-01

    To investigate factors affecting the diagnostic accuracy of cervical liquid-based cytology for high-grade squamous intraepithelial lesion (HSIL). A retrospective evaluation of cytological and histological slides was performed in 415 patients who had cytological HSIL between 2007 and 2010. Among 42 209 cases screened by ThinPrep liquid-based cytology, 415 cases (1.0%) of HSIL were eventually identified. The mean age of HSIL patients was 41.6 years, and 30-49 years were the most common age group. Among 415 cases, 325 patients had available histological diagnosis as follows: 23 (7.1%) negative, 22 (6.8%) CIN1/HPV, 223 (68.6%) CIN2/CIN3, and 57 (17.5%) squamous cell carcinoma (SCC). The positive predictive values of HSIL to predict CIN2 (or higher grade of dysplasia) and CIN1 were 86.2% (280/325) and 92.9% (302/325), respectively. Inadequate biopsy, reactive glandular cells, islet atrophy, chemo/radiotherapy and others were responsible for the cytologically false-positive diagnosis. Fifty-seven (17.5%) cases of HSIL had a histological diagnosis of SCC. The possible causes of misdiagnosis were social factors, under-recognized cytological features of poorly-differentiated SCC and absence of typical diagnostic features in cytology slides. Cytology of HSIL has a high positive predictive value for the presence of CIN2/CIN3 and SCC. Cytologists and gynecologists should be aware of the diagnostic pitfalls that may lead to the discrepancy between cytology and histology.

  17. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack

    2013-09-18

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example
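
    The recursive panel idea described above can be sketched compactly in numpy: the panel is split in half, the left half is factorized recursively, a triangular solve produces the corresponding block of U, the trailing block is updated, and the right half is processed in turn. This is a minimal single-threaded sketch of that recursion only; the tile layout, lockless synchronization and QUARK runtime scheduling that give the paper its performance are deliberately left out.

```python
import numpy as np

def recursive_lu(A):
    """Recursive LU factorization with partial pivoting (getrf-style packing:
    unit-lower L below the diagonal, U on and above it)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)

    def factor(i0, j0, m, k):
        # factorize the m-by-k panel whose top-left corner is (i0, j0)
        if k == 1:
            p = i0 + int(np.argmax(np.abs(A[i0:i0 + m, j0])))
            if p != i0:                              # swap full rows, record pivot
                A[[i0, p], :] = A[[p, i0], :]
                piv[[i0, p]] = piv[[p, i0]]
            A[i0 + 1:i0 + m, j0] /= A[i0, j0]
            return
        k1 = k // 2
        factor(i0, j0, m, k1)                        # left half of the panel
        L11 = np.tril(A[i0:i0 + k1, j0:j0 + k1], -1) + np.eye(k1)
        A[i0:i0 + k1, j0 + k1:j0 + k] = np.linalg.solve(      # U12 = L11^{-1} A12
            L11, A[i0:i0 + k1, j0 + k1:j0 + k])
        A[i0 + k1:i0 + m, j0 + k1:j0 + k] -= (                # trailing update
            A[i0 + k1:i0 + m, j0:j0 + k1] @ A[i0:i0 + k1, j0 + k1:j0 + k])
        factor(i0 + k1, j0 + k1, m - k1, k - k1)     # right half of the panel

    factor(0, 0, n, n)
    return piv, A

# piv, LU = recursive_lu(A)
# L, U = np.tril(LU, -1) + np.eye(len(LU)), np.triu(LU)
# np.allclose(L @ U, A[piv])   # True for a well-conditioned square A
```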

  18. Direct Georeferencing : a New Standard in Photogrammetry for High Accuracy Mapping

    Science.gov (United States)

    Rizaldy, A.; Firdaus, W.

    2012-07-01

    Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. In theory, this method requires neither ground control points (GCPs) nor aerial triangulation (AT) to transform aerial photography into ground coordinates. Compared with the older method, it has three main advantages at the same accuracy: faster data processing, a simpler workflow and a less expensive project. Direct georeferencing uses two devices, GPS and IMU: the GPS records the camera coordinates (X, Y, Z) and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the Exterior Orientation (EO) parameters, which are required for the next steps of a photogrammetric project, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of the method was tested on a topographic mapping project in Medan, Indonesia, using the Vexcel UltraCam X large-format digital camera and an IGI AeroControl GPS/IMU. Nineteen independent check points (ICPs) were used to determine the accuracy. The horizontal accuracy is 0.356 m and the vertical accuracy is 0.483 m. Data with this accuracy can be used for a 1:2,500 map scale project.

  19. Efficacy of the Kyoto Classification of Gastritis in Identifying Patients at High Risk for Gastric Cancer.

    Science.gov (United States)

    Sugimoto, Mitsushige; Ban, Hiromitsu; Ichikawa, Hitomi; Sahara, Shu; Otsuka, Taketo; Inatomi, Osamu; Bamba, Shigeki; Furuta, Takahisa; Andoh, Akira

    2017-01-01

    Objective The Kyoto gastritis classification categorizes the endoscopic characteristics of Helicobacter pylori (H. pylori) infection-associated gastritis and identifies patterns associated with a high risk of gastric cancer. We investigated its efficacy, comparing scores in patients with H. pylori-associated gastritis and with gastric cancer. Methods A total of 1,200 patients with H. pylori-positive gastritis alone (n=932), early-stage H. pylori-positive gastric cancer (n=189), and successfully treated H. pylori-negative cancer (n=79) were endoscopically graded according to the Kyoto gastritis classification for atrophy, intestinal metaplasia, fold hypertrophy, nodularity, and diffuse redness. Results The prevalence of O-II/O-III-type atrophy according to the Kimura-Takemoto classification in the early-stage H. pylori-positive gastric cancer and successfully treated H. pylori-negative cancer groups was 45.1%, significantly higher than in subjects with gastritis alone (12.7%). The Kyoto gastritis scores for atrophy and intestinal metaplasia in the H. pylori-positive cancer group were also significantly higher than in subjects with gastritis alone. Conclusion The Kyoto gastritis classification may thus be useful for detecting these patients.

  20. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    Science.gov (United States)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.
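
    The object-based workflow evaluated in the study follows a familiar pattern: segment the image into objects, compute per-object features, and train a supervised classifier on labelled objects. The sketch below shows that pattern only, using scikit-image Felzenszwalb segmentation, a deliberately tiny feature set and a random forest; it is not the authors' pipeline, and the segmentation parameters and variable names (image, damage_labels, train_idx) are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from sklearn.ensemble import RandomForestClassifier

def object_features(image, segments):
    """Per-segment mean colour and segment size -- a minimal object feature set."""
    feats = []
    for label in np.unique(segments):
        mask = segments == label
        feats.append(np.r_[image[mask].mean(axis=0), mask.sum()])
    return np.array(feats)

# image: H x W x 3 post-event aerial tile
# damage_labels: damage class of each training object (aligned with train_idx)
# segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
# X = object_features(image, segments)
# clf = RandomForestClassifier(n_estimators=200).fit(X[train_idx], damage_labels)
# per_object_class = clf.predict(X)
```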

  1. Jet Flavor Classification in High-Energy Physics with Deep Neural Networks

    CERN Document Server

    Guest, Daniel; Baldi, Pierre; Hsu, Shih-Chieh; Urban, Gregor; Whiteson, Daniel

    2016-01-01

    Classification of jets as originating from light-flavor or heavy-flavor quarks is an important task for inferring the nature of particles produced in high-energy collisions. The large and variable dimensionality of the data provided by the tracking detectors makes this task difficult. The current state-of-the-art tools require expert data-reduction to convert the data into a fixed low-dimensional form that can be effectively managed by shallow classifiers. We study the application of deep networks to this task, attempting classification at several levels of data, starting from a raw list of tracks. We find that the highest-level lowest-dimensionality expert information sacrifices information needed for classification, that the performance of current state-of-the-art taggers can be matched or slightly exceeded by deep-network-based taggers using only track and vertex information, that classification using only lowest-level highest-dimensionality tracking information remains a difficult task for deep networks, ...

  2. Application of Object Based Classification and High Resolution Satellite Imagery for Savanna Ecosystem Analysis

    Directory of Open Access Journals (Sweden)

    Jane Southworth

    2010-12-01

    Full Text Available Savanna ecosystems are an important component of dryland regions and yet are exceedingly difficult to study using satellite imagery. Savannas are composed of varying amounts of trees, shrubs and grasses, and traditional classification schemes or vegetation indices typically cannot differentiate across class types. This research utilizes object-based classification (OBC) for a region in Namibia, using IKONOS imagery, to help differentiate tree canopies, and therefore woodland savanna, from shrub or grasslands. The methodology involved the identification and isolation of tree canopies within the imagery and the creation of tree polygon layers, which had an overall accuracy of 84%. In addition, the results were scaled up to a corresponding Landsat image of the same region, and the OBC results were compared to the corresponding pixel values of NDVI. The results were not compelling, indicating once more the problems of these traditional image analysis techniques for savanna ecosystems. Overall, the use of OBC holds great promise for this ecosystem and could be utilized more frequently in studies of vegetation structure.

  3. HIGH ACCURACY FINITE VOLUME ELEMENT METHOD FOR TWO-POINT BOUNDARY VALUE PROBLEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    王同科

    2002-01-01

    In this paper, a high accuracy finite volume element method is presented for the two-point boundary value problem of second order ordinary differential equations, which differs from the high order generalized difference methods. It is proved that the method has an optimal order error estimate O(h^3) in the H^1 norm. Finally, two examples show that the method is effective.
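
    Written out, the optimal-order estimate quoted above takes the usual form shown below (a schematic statement; the constant C and the smoothness required of the exact solution u are not specified in the record):

```latex
\| u - u_h \|_{H^1} \le C\, h^{3}
```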

  4. Accuracy of prediction of percentage lean meat and authorization of carcass measurement instruments: adverse effects of incorrect sampling of carcasses in pig classification.

    NARCIS (Netherlands)

    Engel, B.; Buist, W.G.; Walstra, P.; Olsen, E.; Daumas, G.

    2003-01-01

    Classification of pig carcasses in the European Community is based on the lean meat percentage of the carcass. The lean meat percentage is predicted from instrumental carcass measurements, such as fat and muscle depth measurements, obtained in the slaughter-line. The prediction formula employed is

  5. Diagnosing multibacillary leprosy: A comparative evaluation of diagnostic accuracy of slit-skin smear, bacterial index of granuloma and WHO operational classification

    Directory of Open Access Journals (Sweden)

    Bhushan Premanshu

    2008-01-01

    Full Text Available Background: In view of the relatively poor performance of skin smears, WHO adopted a purely clinical operational classification; however, the poor specificity of the operational classification leads to overdiagnosis and unwarranted overtreatment, while its poor sensitivity leads to underdiagnosis of multibacillary (MB) cases with inadequate treatment. Bacilli are more frequently and abundantly demonstrated in tissue sections. Aims and Methods: We compared the WHO classification, slit-skin smears (SSS) and demonstration of bacilli in biopsies (bacterial index of granuloma, or BIG) with regard to their efficacy in correctly identifying multibacillary cases. The tests were done on 141 patients and were evaluated for their ability to diagnose true MB leprosy using detailed statistical analysis. Results: A total of 76 patients were truly MB, with either positive smears, BIG positivity or a typical histology of BB, BL or LL. Among these 76 true-MB patients, the WHO operational classification correctly identified multibacillary status in 56 (73.68%) and SSS in 43 (56.58%), while BIG correctly identified 65 (85.53%) true-MB cases. Conclusion: BIG was the most sensitive and effective of the three methods, especially in paucilesional patients. We suggest adding estimation of the bacterial index of granuloma to the diagnostic workup of paucilesional patients.

  6. Numerical simulation for accuracy of velocity analysis in small-scale high-resolution marine multichannel seismic technology

    Science.gov (United States)

    Luo, Di; Cai, Feng; Wu, Zhiqiang

    2017-06-01

    When used with large-energy sparkers, small-scale high-resolution marine multichannel seismic detection technology offers high resolution, high detection precision, a wide range of application, and great flexibility. Positive results have been achieved in submarine geological research, particularly in the investigation of marine gas hydrates. However, with a shorter spread length less traveltime-difference information is available for velocity analysis, leading to poorer focusing of the velocity spectrum energy groups and lower velocity analysis accuracy. It is therefore currently debatable whether the velocity analysis accuracy of short-spread multichannel seismic detection technology can meet the requirements of practical application in natural gas hydrate exploration. In this study, the bottom boundary of gas hydrates (the Bottom Simulating Reflector, BSR) is used in numerical simulations to assess the velocity analysis accuracy of such technology. Results show that a higher dominant frequency and a smaller sampling interval not only improve the seismic resolution but also compensate for the shortcomings of the short spread, thereby improving the accuracy of the velocity analysis. In conclusion, the velocity analysis accuracy of this small-scale, high-resolution, multichannel seismic detection technology meets the requirements of natural gas hydrate exploration.

  7. Towards Building Reliable, High-Accuracy Solar Irradiance Database For Arid Climates

    Science.gov (United States)

    Munawwar, S.; Ghedira, H.

    2012-12-01

    Middle East's growing interest in renewable energy has led to increased activity in solar technology development with the recent commissioning of several utility-scale solar power projects and many other commercial installations across the Arabian Peninsula. The region, lying in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2, is also one of the most promising candidates for solar energy deployment. However, it is not the availability of resource, but its characterization and reasonably accurate assessment that determines the application potential. Solar irradiance, magnitude and variability inclusive, is the key input in assessing the economic feasibility of a solar system. The accuracy of such data is of critical importance for realistic on-site performance estimates. This contribution aims to identify the key stages in developing a robust solar database for desert climate by focusing on the challenges that an arid environment presents to parameterization of solar irradiance attenuating factors. Adjustments are proposed based on the currently available resource assessment tools to produce high quality data for assessing bankability. Establishing and maintaining ground solar irradiance measurements is an expensive affair and fairly limited in time (recently operational) and space (fewer sites) in the Gulf region. Developers within solar technology industry, therefore, rely on solar radiation models and satellite-derived data for prompt resource assessment needs. It is imperative that such estimation tools are as accurate as possible. While purely empirical models have been widely researched and validated in the Arabian Peninsula's solar modeling history, they are known to be intrinsically site-specific. A primal step to modeling is an in-depth understanding of the region's climate, identifying the key players attenuating radiation and their appropriate characterization to determine solar irradiance. Physical approach

  8. Multipath sparse coding for scene classification in very high resolution satellite imagery

    Science.gov (United States)

    Fan, Jiayuan; Tan, Hui Li; Lu, Shijian

    2015-10-01

    With the rapid development of various satellite sensors, automatic and advanced scene classification techniques are urgently needed to process the huge amount of satellite image data. Recently, a few research works have started to apply sparse coding for feature learning in aerial scene classification. However, these previous works use single-layer sparse coding in their systems, and their performance depends heavily on multiple low-level features, such as the scale-invariant feature transform (SIFT) and saliency. Motivated by the importance of feature learning through multiple layers, we propose a new unsupervised feature learning approach for scene classification on very high resolution satellite imagery. The proposed unsupervised feature learning utilizes a multipath sparse coding architecture in order to capture multiple aspects of discriminative structures within complex satellite scene images. In addition, dense low-level features are extracted from the raw satellite data by using image patches of varying size at different layers, so the approach is not limited to particularly designed feature descriptors, in contrast to the other related works. The proposed technique has been evaluated on two challenging high-resolution datasets, including the UC Merced dataset containing 21 different aerial scene categories with a 1-foot resolution and the Singapore dataset containing 5 land-use categories with a 0.5-m spatial resolution. Experimental results show that it outperforms the state of the art that uses single-layer sparse coding. The major contributions of this proposed technique include (1) a new unsupervised feature learning approach to generate feature representations for very high-resolution satellite imagery, (2) the first multipath sparse coding used for scene classification in very high-resolution satellite imagery, and (3) a simple low-level feature descriptor instead of many particularly designed low-level descriptors
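
    A single path of such a sparse-coding pipeline can be sketched with scikit-learn: dense patches of one size are extracted, a dictionary is learned, the sparse codes are pooled into an image descriptor, and descriptors from several patch sizes are concatenated. This is a heavily simplified single-layer stand-in for the paper's multipath, multi-layer architecture; the patch sizes, atom count and pooling choice are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def sparse_code_path(gray_image, patch_size, n_atoms=128, n_patches=2000):
    """One path: patches of a given size -> learned dictionary -> pooled codes."""
    patches = extract_patches_2d(gray_image, (patch_size, patch_size),
                                 max_patches=n_patches, random_state=0)
    patches = patches.reshape(len(patches), -1).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)          # remove local mean
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(patches).transform(patches)
    return np.abs(codes).max(axis=0)                         # max-pool over patches

# descriptor = np.concatenate([sparse_code_path(img, s) for s in (8, 16, 32)])
# the concatenated multi-scale descriptor would then feed a linear classifier
```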

  9. Finite-element solution of the coupled-channel Schrödinger equation using high-order accuracy approximations

    Science.gov (United States)

    Abrashkevich, A. G.; Abrashkevich, D. G.; Kaschiev, M. S.; Puzynin, I. V.

    1995-01-01

    The finite element method (FEM) is applied to solve the bound state (Sturm-Liouville) problem for systems of ordinary linear second-order differential equations. The convergence, accuracy and the range of applicability of the high-order FEM approximations (up to tenth order) are studied systematically on the basis of numerical experiments for a wide set of quantum-mechanical problems. Both analytical and tabular forms of specifying the coefficients of the differential equations are considered. The Dirichlet and Neumann boundary conditions are discussed. It is shown that the use of high-order accuracy FEM approximations considerably increases the accuracy of the FE solutions while substantially reducing the computational requirements. The results of the FEM calculations for various quantum-mechanical problems dealing with different types of potentials used in atomic and molecular calculations (including the hydrogen atom in a homogeneous magnetic field) are shown to be well converged and highly accurate.

  10. CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY

    DEFF Research Database (Denmark)

    Elenius Madsen, Daniel

    2012-01-01

    Title: CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY Objectives: Orthognathic surgery involves surgical manipulation of jaw and face skeletal structure. A subgroup of patients undergoing orthognathic surgery suffers from excessive ...... to their bleeding risk. This valuable knowledge will be useful with regard to optimization of patient safety, staff composition and transfusion preparations. This pilot study included only 41 patients, and further studies are needed to consolidate the observations done....

  11. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
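
    The first scenario described above (treating activations of a late layer of a pre-trained CNN as the image feature and training a simple classifier on top) can be sketched as follows. The model choice (a torchvision ResNet-50 with ImageNet weights) and the linear SVM are illustrative assumptions, not the networks evaluated in the paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# pre-trained CNN with its ImageNet classification head removed
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def cnn_feature(pil_image):
    """2048-d activation vector used as the scene descriptor."""
    return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0).numpy()

# X = np.stack([cnn_feature(img) for img in train_images])
# clf = LinearSVC().fit(X, train_labels)   # simple linear classifier on CNN features
```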

  12. Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.

    Directory of Open Access Journals (Sweden)

    Christopher Y Park

    Full Text Available A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate as determined by genomic data and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes to 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics

  13. Autotaxin activity has a high accuracy to diagnose intrahepatic cholestasis of pregnancy.

    Science.gov (United States)

    Kremer, Andreas E; Bolier, Ruth; Dixon, Peter H; Geenes, Victoria; Chambers, Jenny; Tolenaars, Dagmar; Ris-Stalpers, Carrie; Kaess, Bernhard M; Rust, Christian; van der Post, Joris A; Williamson, Catherine; Beuers, Ulrich; Oude Elferink, Ronald P J

    2015-04-01

    Intrahepatic cholestasis of pregnancy (ICP) is defined by pruritus, elevated total fasting serum bile salts (TBS) and transaminases, and an increased risk of adverse fetal outcome. An accurate diagnostic marker is needed. Increased serum autotaxin correlates with cholestasis-associated pruritus. We aimed at unraveling the diagnostic accuracy of autotaxin in ICP. Serum samples and placental tissue were collected from 44 women with uncomplicated pregnancies and 105 with pruritus and/or elevated serum transaminases. Autotaxin serum levels were quantified enzymatically and by Western blotting, and autotaxin gene expression by quantitative PCR. Serum autotaxin was increased in ICP (mean ± SD: 43.5 ± 18.2 nmol ml(-1)min(-1), n=55) compared with other pruritic disorders of pregnancy (16.8 ± 6.7 nmol ml(-1)min(-1), n=33), pre-eclampsia complicated by HELLP-syndrome (16.8 ± 8.9 nmol ml(-1)min(-1), n=17), and pregnant controls (19.6 ± 5.7 nmol ml(-1)min(-1), n=44). Longitudinal analysis during pregnancy revealed a marked rise in serum autotaxin with onset of ICP-related pruritus. Serum autotaxin was increased in women taking oral contraceptives. Increased serum autotaxin during ICP was not associated with increased autotaxin mRNA in placenta. With a cut-off value of 27.0 nmol ml(-1)min(-1), autotaxin had an excellent sensitivity and specificity in distinguishing ICP from other pruritic disorders or pre-eclampsia/HELLP-syndrome. Serum autotaxin displayed no circadian rhythm and was not influenced by food intake. Increased serum autotaxin activity represents a highly sensitive, specific and robust diagnostic marker of ICP, distinguishing ICP from other pruritic disorders of pregnancy and pregnancy-related liver diseases. Pregnancy and oral contraception increase serum autotaxin to a much lesser extent than ICP. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
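
    The reported cut-off of 27.0 nmol ml(-1) min(-1) turns autotaxin activity into a simple threshold test; sensitivity and specificity at such a cut-off are computed as below. This is a generic sketch with hypothetical input arrays, not the authors' analysis code.

```python
import numpy as np

def threshold_test_performance(values, has_condition, cutoff=27.0):
    """Sensitivity and specificity of a single-threshold diagnostic test."""
    pred = np.asarray(values, dtype=float) >= cutoff
    truth = np.asarray(has_condition, dtype=bool)
    tp, fn = np.sum(pred & truth), np.sum(~pred & truth)
    tn, fp = np.sum(~pred & ~truth), np.sum(pred & ~truth)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

# threshold_test_performance(serum_autotaxin_activity, diagnosis == "ICP")
```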

  14. High accuracy solution of bi-directional wave propagation in continuum mechanics

    Science.gov (United States)

    Mulloth, Akhil; Sawant, Nilesh; Haider, Ijlal; Sharma, Nidhi; Sengupta, Tapan K.

    2015-10-01

    The numerical solution of partial differential equations is strongly affected by numerical errors, which are caused mainly by the deviation of the numerical dispersion relation from the physical dispersion relation. To quantify and control such errors and obtain high-accuracy solutions, we consider a class of problems which involve the second derivative of the unknowns with respect to time. Here, we analyse numerical metrics such as the numerical group velocity, numerical phase speed and the numerical amplification factor for different methods of solving the model bi-directional wave equation (BDWE). Such equations can be solved directly, for example by the Runge-Kutta-Nyström (RKN) method. Alternatively, the governing equation can be converted to a set of first-order-in-time equations and then integrated in time using the four-stage fourth-order Runge-Kutta (RK4) method. The spatial discretisations considered are the classical second- and fourth-order central difference schemes, along with Lele's central compact scheme for evaluating second derivatives. In another version, we have used Lele's scheme for evaluating first derivatives twice to obtain the second derivative. As the BDWE represents non-dissipative, non-dispersive dynamics, we also consider the canonical problem of the linearised rotating shallow water equation (LRSWE) in a new formulation involving a second-order derivative in time, which represents dispersive waves along with a stationary mode. Computations of the LRSWE with the RK4 and RKN methods for temporal discretisation and Lele's compact schemes for spatial discretisation are compared with computations performed with the RK4 method for time discretisation and the staggered compact scheme (SCS) for spatial discretisation, treating it as a set of three equations as reported in Rajpoot et al. (2012) [1].
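
    The "set of first-order-in-time equations" route mentioned above is easy to make concrete: with v = u_t, the bi-directional wave equation u_tt = c^2 u_xx becomes a first-order system that can be advanced with classical RK4. The sketch below uses plain second-order central differences and periodic boundaries; the compact (Lele) and staggered compact schemes compared in the paper are not reproduced, and the grid and time step in the usage comment are illustrative.

```python
import numpy as np

def solve_bdwe(u0, v0, c, dx, dt, nsteps):
    """u_tt = c^2 u_xx as the system (u_t, v_t) = (v, c^2 u_xx), advanced with
    classical RK4; u_xx by second-order central differences, periodic domain."""
    def rhs(state):
        u, v = state
        u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        return np.array([v, c**2 * u_xx])
    state = np.array([u0, v0], dtype=float)
    for _ in range(nsteps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

# x = np.linspace(0.0, 1.0, 200, endpoint=False)
# u = solve_bdwe(np.exp(-200 * (x - 0.5) ** 2), np.zeros_like(x), c=1.0,
#                dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]), nsteps=400)
```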

  15. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly

  16. High Resolution Ice Surface of the Ross Ice Shelf: Accuracy and Links to Basal Processes

    Science.gov (United States)

    Starke, S. E.

    2015-12-01

    We use airborne laser altimetry data from IcePod and IceBridge to map the surface across the Ross Ice Shelf in Antarctica. Laser altimetry and radar data are analyzed from the IcePod 2014 and 2015 field campaigns as well as IceBridge 2013. IcePod is a multi-sensor suite that includes ice-penetrating radars, a swath scanning laser, visible and IR cameras, and GPS, mounted on an LC-130. Using shallow-ice radar data from both IcePod and IceBridge, we identify the base of the ice shelf. Across the shelf we observe distinct areas of high reflectivity in the radar data, suggesting basal crevassing. In some regions, the basal reflector is not well defined. Laser altimetry profiles correlate surface morphology with features at the base, including basal crevasses and marine ice formed by freezing onto the base of the ice shelf. Building Digital Elevation Models (DEMs) from the laser altimetry data, we investigate the relationship between the surface expressions of these ice shelf dynamics, including thickness changes, potential sites of marine ice at the base and basal morphology, in regions where a well-defined basal reflector does not exist in the radar profiles. We present the accuracy of the IcePod laser altimetry dataset using ground control points and GPS grids from Greenland and Antarctica as well as photogrammetric DEMs. Our laser altimetry analysis resolves sub-meter surface features which, combined with coincident radar, provide a link between basal processes and their surface expressions.

  17. The research of digital circuit system for high accuracy CCD of portable Raman spectrometer

    Science.gov (United States)

    Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin

    2013-08-01

    Raman spectroscopy is widely used because it can identify various types of molecular structures and materials. The portable Raman spectrometer has become a major direction of spectrometer development owing to its convenience for handheld operation and real-time detection, in contrast to traditional Raman spectrometers, which are heavy and bulky. There is, however, still a gap in measurement sensitivity between portable and traditional devices. A portable Raman spectrometer combined with Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) technology can enhance the Raman signal by several orders of magnitude, offering both measurement sensitivity and mobility. This paper proposes a design and implementation of the driver and digital circuit for a high-accuracy CCD sensor, which is the core part of a portable spectrometer. The main goal of the design is to reduce the dark-current generation rate and increase signal sensitivity during long integration times and in weak-signal environments. We use a back-thinned CCD image sensor from Hamamatsu with high sensitivity, low noise and a large dynamic range. In order to maximize the CCD sensor's performance while simultaneously minimizing the overall size of the device to meet the project targets, we carefully designed a peripheral circuit for the CCD sensor. The design is mainly composed of a multi-voltage circuit, a timing-sequence generation circuit, a driving circuit and an A/D conversion part. As the most important power supply circuit, the multi-voltage circuit with 12 independent voltages is built around a reference power supply IC, with each voltage set to its specified value by an amplifier forming a low-pass filter, which provides highly stable and accurate voltages with low noise. Moreover, to make the design easy to debug, a CPLD is used to generate the timing-sequence signals. The A/D converter chip consists of a correlated

  18. Classification of refractory ceramic fibres and repercussions on high temperature insulation; La classification des fibres ceramiques refractaires et ses consequences sur l'isolation haute temperature

    Energy Technology Data Exchange (ETDEWEB)

    Class, Ph. [Thermal Ceramics de France (France)

    2000-09-01

    1999 saw the implementation of the European directive 97/69/CE on the classification of man-made vitreous (silicate) fibres. This major change and its scope may be difficult to grasp but are of great importance to all industries using such materials. The author, an expert in Health, Safety and Environment at Thermal Ceramics Europe and a representative of the European Ceramic Fibre Industry Association (ECFIA), explains this classification in detail, emphasizing the repercussions it will have on refractory ceramic fibres and high-temperature insulating wools. (author)

  19. HIGH-ACCURACY BAND TO BAND REGISTRATION METHOD FOR MULTI-SPECTRAL IMAGES OF HJ-1A/B

    Institute of Scientific and Technical Information of China (English)

    Lu Hao; Liu Tuanjie; Zhao Haiqing

    2012-01-01

    Band-to-band registration accuracy is an important parameter of multispectral data. A novel band-to-band registration approach with high precision is proposed for the multispectral images of HJ-1A/B. Firstly, the main causes of misregistration are analyzed and a high-order polynomial model is proposed. Secondly, a phase fringe filtering technique is applied to the Phase Correlation Method based on Singular Value Decomposition (SVD-PCM) to reduce the noise in the phase difference matrix. Then, experiments are carried out to build nonlinear registration models; the green and red bands are aligned to the blue band with an accuracy of 0.1 pixels, and the near-infrared band with an accuracy of 0.2 pixels.
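
    The core of a phase-correlation registration step is the normalized cross-power spectrum of the two bands. The sketch below shows only that plain, integer-pixel version; the SVD-based sub-pixel estimation and the phase fringe filtering that give SVD-PCM its 0.1-pixel accuracy are not reproduced here.

```python
import numpy as np

def phase_correlation_shift(ref_band, moving_band):
    """Integer-pixel (row, col) offset between two co-sized bands, estimated
    from the peak of the inverse FFT of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref_band), np.fft.fft2(moving_band)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12                 # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    for axis, size in enumerate(corr.shape):       # unwrap to signed offsets
        if shift[axis] > size // 2:
            shift[axis] -= size
    return shift

# dy, dx = phase_correlation_shift(blue_band, green_band)
```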

  20. High-Accuracy HLA Type Inference from Whole-Genome Sequencing Data Using Population Reference Graphs.

    Directory of Open Access Journals (Sweden)

    Alexander T Dilthey

    2016-10-01

    Full Text Available Genetic variation at the Human Leucocyte Antigen (HLA) genes is associated with many autoimmune and infectious disease phenotypes, is an important element of the immunological distinction between self and non-self, and shapes immune epitope repertoires. Determining the allelic state of the HLA genes (HLA typing) as a by-product of standard whole-genome sequencing data would therefore be highly desirable and enable the immunogenetic characterization of samples in currently ongoing population sequencing projects. Extensive hyperpolymorphism and sequence similarity between the HLA genes, however, pose problems for accurate read mapping and make HLA type inference from whole-genome sequencing data a challenging problem. We describe how to address these challenges in a Population Reference Graph (PRG) framework. First, we construct a PRG for 46 (mostly HLA) genes and pseudogenes, their genomic context and their characterized sequence variants, integrating a database of over 10,000 known allele sequences. Second, we present a sequence-to-PRG paired-end read mapping algorithm that enables accurate read mapping for the HLA genes. Third, we infer the most likely pair of underlying alleles at G group resolution from the IMGT/HLA database at each locus, employing a simple likelihood framework. We show that HLA*PRG, our algorithm, outperforms existing methods by a wide margin. We evaluate HLA*PRG on six classical class I and class II HLA genes (HLA-A, -B, -C, -DQA1, -DQB1, -DRB1) and on a set of 14 samples (3 samples with 2 x 100bp, 11 samples with 2 x 250bp Illumina HiSeq data). Of 158 alleles tested, we correctly infer 157 alleles (99.4%). We also identify and re-type two erroneous alleles in the original validation data. We conclude that HLA*PRG for the first time achieves accuracies comparable to gold-standard reference methods from standard whole-genome sequencing data, though high computational demands (currently ~30-250 CPU hours per sample) remain a

  1. High-Accuracy HLA Type Inference from Whole-Genome Sequencing Data Using Population Reference Graphs.

    Science.gov (United States)

    Dilthey, Alexander T; Gourraud, Pierre-Antoine; Mentzer, Alexander J; Cereb, Nezih; Iqbal, Zamin; McVean, Gil

    2016-10-01

    Genetic variation at the Human Leucocyte Antigen (HLA) genes is associated with many autoimmune and infectious disease phenotypes, is an important element of the immunological distinction between self and non-self, and shapes immune epitope repertoires. Determining the allelic state of the HLA genes (HLA typing) as a by-product of standard whole-genome sequencing data would therefore be highly desirable and enable the immunogenetic characterization of samples in currently ongoing population sequencing projects. Extensive hyperpolymorphism and sequence similarity between the HLA genes, however, pose problems for accurate read mapping and make HLA type inference from whole-genome sequencing data a challenging problem. We describe how to address these challenges in a Population Reference Graph (PRG) framework. First, we construct a PRG for 46 (mostly HLA) genes and pseudogenes, their genomic context and their characterized sequence variants, integrating a database of over 10,000 known allele sequences. Second, we present a sequence-to-PRG paired-end read mapping algorithm that enables accurate read mapping for the HLA genes. Third, we infer the most likely pair of underlying alleles at G group resolution from the IMGT/HLA database at each locus, employing a simple likelihood framework. We show that HLA*PRG, our algorithm, outperforms existing methods by a wide margin. We evaluate HLA*PRG on six classical class I and class II HLA genes (HLA-A, -B, -C, -DQA1, -DQB1, -DRB1) and on a set of 14 samples (3 samples with 2 x 100bp, 11 samples with 2 x 250bp Illumina HiSeq data). Of 158 alleles tested, we correctly infer 157 alleles (99.4%). We also identify and re-type two erroneous alleles in the original validation data. We conclude that HLA*PRG for the first time achieves accuracies comparable to gold-standard reference methods from standard whole-genome sequencing data, though high computational demands (currently ~30-250 CPU hours per sample) remain a significant
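
    The "simple likelihood framework" for choosing the allele pair at a locus can be illustrated independently of the graph machinery: given per-read log-likelihoods against each candidate allele, score every unordered pair by treating each read as drawn from either allele with probability 1/2 and keep the best-scoring pair. This toy sketch assumes such a read-by-allele log-likelihood matrix is already available; producing it via PRG read mapping is the substantial part of HLA*PRG.

```python
import itertools
import numpy as np

def best_allele_pair(read_loglik):
    """Most likely unordered allele pair from a (reads x alleles) matrix of
    per-read log-likelihoods, assuming reads come from either allele of the
    pair with equal probability."""
    n_reads, n_alleles = read_loglik.shape
    best_pair, best_ll = None, -np.inf
    for i, j in itertools.combinations_with_replacement(range(n_alleles), 2):
        # sum over reads of log( 0.5*exp(L_i) + 0.5*exp(L_j) )
        ll = np.logaddexp(read_loglik[:, i], read_loglik[:, j]).sum() \
             - n_reads * np.log(2.0)
        if ll > best_ll:
            best_pair, best_ll = (i, j), ll
    return best_pair, best_ll
```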

  2. Accuracy analysis of continuous deformation monitoring using BeiDou Navigation Satellite System at middle and high latitudes in China

    Science.gov (United States)

    Jiang, Weiping; Xi, Ruijie; Chen, Hua; Xiao, Yugang

    2017-02-01

    Now that the BeiDou Navigation Satellite System (BDS) is operational across the whole Asia-Pacific region, a new GNSS system with a different satellite orbit structure will become available for deformation monitoring in the future. GNSS deformation monitoring data are normally processed at a regular interval to form displacement time series for deformation analysis, where the interval can be neither too long from the timeliness perspective nor too short from the perspective of the precision of the determined displacements. In this paper, two experimental platforms were designed, one at mid-latitude and another at higher latitude in China. BDS data processing software was also developed to investigate the accuracy of continuous deformation monitoring using the current in-orbit BDS satellites. Data over 20 days at both platforms were obtained and processed every 2, 4 and 6 h to generate three displacement time series for comparison. The results show that, with the current in-orbit BDS satellites, in the mid-latitude area it is easy to achieve an accuracy of 1 mm in the horizontal component and 2-3 mm in the vertical component; the accuracy can be further improved to approximately 1 mm in both horizontal and vertical directions when combined BDS/GPS measurements are employed. At higher latitude, however, the results are not as good as expected due to poor satellite geometry; even the 6-h solutions can only achieve an accuracy of 4-6 mm and 6-10 mm in the horizontal and vertical components, respectively, which implies that the current BDS may not be applicable to very high-precision deformation monitoring at high latitude. With the integration of BDS and GPS observations, however, a 4-h session can achieve an accuracy of 2 mm in the horizontal component and 4 mm in the vertical component, which would be an optimal choice for high-accuracy structural deformation monitoring at high latitude.

  3. Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.

    Science.gov (United States)

    Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G

    2012-11-13

    The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.

  4. Analysis of the plasmodium falciparum proteome by high-accuracy mass spectrometry

    DEFF Research Database (Denmark)

    Lasonder, Edwin; Ishihama, Yasushi; Andersen, Jens S;

    2002-01-01

    -accuracy (average deviation less than 0.02 Da at 1,000 Da) mass spectrometric proteome analysis of selected stages of the human malaria parasite Plasmodium falciparum. The analysis revealed 1,289 proteins of which 714 proteins were identified in asexual blood stages, 931 in gametocytes and 645 in gametes. The last...

  5. Literature survey of high-impact journals revealed reporting weaknesses in abstracts of diagnostic accuracy studies

    NARCIS (Netherlands)

    Korevaar, Daniël A; Cohen, Jérémie F; Hooft, Lotty; Bossuyt, Patrick M M

    2015-01-01

    OBJECTIVES: Informative journal abstracts are crucial for the identification and initial appraisal of studies. We aimed to evaluate the informativeness of abstracts of diagnostic accuracy studies. STUDY DESIGN AND SETTING: PubMed was searched for reports of studies that had evaluated the diagnostic

  6. The effect of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players.

    Science.gov (United States)

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high-intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVA's revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on performance in tennis

  7. Two and three-dimensional computed tomography for the classification and management of distal humeral fractures - Evaluation of reliability and diagnostic accuracy

    NARCIS (Netherlands)

    J. Doornberg; A. Lindenhovius; P. Kloen; C.N. van Dijk; D. Zurakowski; D. Ring

    2006-01-01

    Background: Complex fractures of the distal part of the humerus can be difficult to characterize on plain radiographs and two-dimensional computed tomography scans. We tested the hypothesis that three-dimensional reconstructions of computed tomography scans improve the reliability and accuracy of fr

  8. Identification of Children with Language Impairment: Investigating the Classification Accuracy of the MacArthur-Bates Communicative Development Inventories, Level III

    Science.gov (United States)

    Skarakis-Doyle, Elizabeth; Campbell, Wenonah; Dempsey, Lynn

    2009-01-01

    Purpose: This study tested the accuracy with which the MacArthur-Bates Communicative Development Inventories, Level III (CDI-III), a parent report measure of language ability, discriminated children with language impairment from those developing language typically. Method: Parents of 58 children, 49 with typically developing language (age 30 to 42…

  9. A Review of Land and Stream Classifications in Support of Developing a National Ordinary High Water Mark (OHWM) Classification

    Science.gov (United States)

    2014-08-01

    LRRs and MLRAs, developed for regionalization of the Wetland Delineation Manual. The Alaska Region and the Hawaii and Pacific Islands Region are not...province level, landscapes are divided into regions of similar hydrologic, erosional, and tectonic processes. This level describes how climate, geologic...potential application of the land–water classification approaches reviewed below is to determine the scale at which OHWM indicators show significant

  10. High-accuracy real-time automatic thresholding for centroid tracker

    Science.gov (United States)

    Zhang, Ye; Wang, Yanjie

    2006-01-01

    Many video image trackers today use the centroid as the tracking point. In practice, a target's centroid is computed from a binary image to reduce processing time, so thresholding the gray-level image into a binary image is a decisive step in centroid tracking. How to choose suitable thresholds in clutter is still an unsolved problem. This paper introduces a high-accuracy real-time automatic thresholding method for centroid trackers that works well for a variety of target types in clutter. The core of the method is to extract the full information contained in the histogram, such as the number of peaks and their heights, positions and other properties. From this histogram analysis we obtain several key pairs of peaks that cover the target and the surrounding background, and apply Otsu's method to them to derive intensity thresholds. From these thresholds we obtain the binary image and compute the centroid from it. To track the target, the paper also suggests adding an eyeshot window: just as our eyes focus on a target and do not lose it unless it leaves our field of view, this impression helps to extract the target in clutter, track it, and wait for its reappearance when it has been occluded. To obtain the impression, the paper offers an idea derived from the Snakes method, which helps to obtain a rough estimate of the target size so that the size of the object in the current frame can be compared with that in the previous frame. If the change is small, the object is considered to have been tracked well. Otherwise, if the change is larger than usual, the inflections in the histogram are analyzed to find out what has happened to the object. In general, the task is to turn this analysis into code that lets the tracker determine a suitable threshold. The paper shows the steps in detail, and also discusses a hardware architecture that can meet the speed requirement.
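
    The building blocks named in the abstract, an intensity threshold derived with Otsu's method and a centroid computed from the resulting binary image, are standard and easy to state; the histogram peak-pair selection and the eyeshot window that constitute the paper's actual contribution are not reproduced in this sketch.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on a 256-bin histogram: maximize between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

def centroid(gray, threshold):
    """Centroid (x, y) of the binary image obtained at the given threshold."""
    ys, xs = np.nonzero(gray > threshold)
    return xs.mean(), ys.mean()

# t = otsu_threshold(frame); cx, cy = centroid(frame, t)
```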

  11. High-accuracy, high-resolution gravity profiles from 2 years of the Geosat Exact Repeat Mission

    Science.gov (United States)

    Sandwell, David T.; Mcadoo, David C.

    1990-01-01

    Satellite altimeter data from the first 44 repeat cycles (2 years) of the Geosat Exact Repeat Mission (ERM) were averaged to improve accuracy, resolution and coverage of the marine gravity field. Individual 17-day repeat cycles were first edited and differentiated, resulting in the along-track vertical deflection (i.e., gravity disturbance). To increase the signal-to-noise ratio, 44 of these cycles were then averaged to form a single highly accurate vertical deflection profile. The largest contribution to the vertical deflection error is short-wavelength altimeter noise and longer-wavelength oceanographic variability; the combined noise level is typically 6 microrad. Both types of noise are reduced by averaging many repeat cycles. Over most ocean areas the uncertainty of the average profile is less than 1 microrad, which corresponds to 1 mgal of along-track gravity disturbance. However, in areas of seasonal ice coverage, its uncertainty can exceed 5 microrad. To assess the resolution of individual and average Geosat gravity profiles, the cross-spectral analysis technique was applied to repeat profiles. Individual Geosat repeat cycles are coherent (greater than 0.5) for wavelengths greater than about 30 km and become increasingly incoherent at shorter wavelengths.

  12. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    Science.gov (United States)

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  13. Maximizing the Diversity of Ensemble Random Forests for Tree Genera Classification Using High Density LiDAR Data

    Directory of Open Access Journals (Sweden)

    Connie Ko

    2016-08-01

    Full Text Available Recent research into improving the effectiveness of forest inventory management using airborne LiDAR data has focused on developing advanced theories in data analytics. Furthermore, supervised learning as a predictive model for classifying tree genera (and species, where possible) has been gaining popularity in order to minimize this labor-intensive task. However, bottlenecks remain that hinder the immediate adoption of supervised learning methods. With supervised classification, training samples are required for learning the parameters that govern the performance of a classifier, yet the selection of training data is often subjective and the quality of such samples is critically important. For LiDAR scanning in forest environments, the quantification of data quality is somewhat abstract, normally referring to some metric related to the completeness of individual tree crowns; however, this is not an issue that has received much attention in the literature. Intuitively, the choice of training samples of varying quality will affect classification accuracy. In this paper a Diversity Index (DI) is proposed that characterizes the diversity of data quality (Qi) among selected training samples required for constructing a classification model of tree genera. The training sample is diversified in terms of data quality as opposed to the number of samples per class. The diversified training sample allows the classifier to better learn the positive and negative instances and, therefore, has a higher classification accuracy in discriminating the “unknown” class samples from the “known” samples. Our algorithm is implemented within the Random Forests base classifiers with six geometric features derived from LiDAR data. The training sample contains three tree genera (pine, poplar, and maple) and the validation samples contain four labels (pine, poplar, maple, and “unknown”). Classification accuracy improved from 72.8%, when training samples were
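
    The idea of diversifying training samples by data quality, rather than by count per class, can be mimicked with a simple selection rule: for each genus, take crowns spread evenly across the range of a per-sample quality score and feed them to a random forest. The quality score Q below is only a stand-in for the paper's Diversity Index, whose exact definition is not given in the abstract, and all variable names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def quality_diversified_indices(quality, labels, n_per_class):
    """For each class, pick samples spread evenly over the sorted quality range."""
    chosen = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        ordered = idx[np.argsort(quality[idx])]
        picks = np.linspace(0, len(ordered) - 1, n_per_class).astype(int)
        chosen.extend(ordered[np.unique(picks)])
    return np.array(chosen)

# Q: per-crown quality score; X: six geometric features per crown; genus: labels
# train_idx = quality_diversified_indices(Q, genus, n_per_class=50)
# clf = RandomForestClassifier(n_estimators=500).fit(X[train_idx], genus[train_idx])
```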

  14. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil, performed using 60-80 mL room air rectal distention 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. CRM involvement prediction and N staging were also evaluated; the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
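
    The staging figures above are standard confusion-matrix statistics; the short Python sketch below shows how accuracy, sensitivity, specificity, PPV, NPV and a weighted kappa can be computed, using made-up counts and labels rather than the study's data.

        from sklearn.metrics import cohen_kappa_score

        def diagnostic_metrics(tp, fp, fn, tn):
            """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 table."""
            total = tp + fp + fn + tn
            return {
                "accuracy": (tp + tn) / total,
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Illustrative counts only (not the study's data): MRI vs histology for one T stage.
        print(diagnostic_metrics(tp=25, fp=2, fn=4, tn=42))

        # Agreement between MRI and histological T stages via a weighted kappa.
        mri   = ["T2", "T3", "T3", "T4", "T2", "T3", "T1", "T3"]
        histo = ["T2", "T3", "T2", "T4", "T2", "T3", "T1", "T4"]
        print("weighted kappa:", cohen_kappa_score(mri, histo, weights="linear"))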

  15. Real-time displacement measurement with large range and high accuracy using sinusoidal phase modulating laser diode interferometer

    Institute of Scientific and Technical Information of China (English)

    Guotian He; Xiangzhao Wang; Aijun Zeng; Feng Tang; Bingjie Huang

    2007-01-01

    To resolve the conflict between large measurement range and high accuracy in existing real-time displacement measurement laser diode (LD) interferometers, a novel real-time displacement measurement LD interferometric method is proposed and its measurement principle is analyzed. By use of a new phase demodulation algorithm and a new phase compensation algorithm based on real-time phase unwrapping, the measurement accuracy is improved and the measurement range is enlarged to a few wavelengths. In experiments, the peak-to-peak amplitude of the speaker vibration was 2361.7 nm, and the repeatability was 2.56 nm. The measurement time was less than 26 μs.

  16. Performance of the majority voting rule in solving the density classification problem in high dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Soto, Jose Manuel [Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Calzada Solidaridad entronque Paseo a la Bufa, Zacatecas, Zac. (Mexico); Fuks, Henryk, E-mail: jmgomezgoo@gmail.com, E-mail: hfuks@brocku.ca [Department of Mathematics, Brock University, St. Catharines, ON (Canada)

    2011-11-04

    The density classification problem (DCP) is one of the most widely studied problems in the theory of cellular automata. After it was shown that the DCP cannot be solved perfectly, the research in this area has been focused on finding better rules that could solve the DCP approximately. In this paper, we argue that the majority voting rule in high dimensions can achieve high performance in solving the DCP, and that its performance increases with dimension. We support this conjecture with arguments based on the mean-field approximation and direct computer simulations. (paper)
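
    For readers unfamiliar with the rule, the following Python sketch runs a synchronous majority vote on a small 2D torus, a low-dimensional stand-in for the high-dimensional lattices studied in the paper; the grid size, neighbourhood and initial density are arbitrary choices made only for illustration.

        import numpy as np
        from scipy.ndimage import convolve

        def majority_step(grid):
            """One synchronous update of the majority rule on a 2D torus
            (von Neumann neighbourhood of radius 1, cell included: 5 votes)."""
            kernel = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]])
            votes = convolve(grid, kernel, mode="wrap")
            return (votes >= 3).astype(np.int8)   # majority of 5 votes

        def classify_density(grid, max_steps=200):
            """Iterate the rule until a fixed point and report the final fraction of 1s."""
            for _ in range(max_steps):
                new = majority_step(grid)
                if np.array_equal(new, grid):
                    break
                grid = new
            return grid.mean()

        rng = np.random.default_rng(1)
        initial = (rng.random((64, 64)) < 0.55).astype(np.int8)   # initial density 0.55
        print("initial density 0.55 ->", classify_density(initial))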

  17. A survey on classification of maintenance fund for high rise residential building in Klang Valley

    Science.gov (United States)

    Wahab, Siti Rashidah Hanum Abd; Ani, Adi Irfan Che; Sairi, Ahmad; Tawil, Norngainy Mohd; Razak, Muhd Zulhanif Abd

    2016-08-01

    High-rise residential building is a type of housing that has multi-dwelling units built on the same land. This type of housing has become more popular each year in urban areas due to the increasing cost of land. Unfortunately, several issues have arisen in managing high-rise residential buildings, especially concerning the maintenance fund. Thus, the distribution of the maintenance fund needs to be clarified in order to make it well organised. The purpose of this paper is to identify the classification of maintenance fund distribution at high-rise residential buildings. The survey was done on 170 high-rise residential schemes using a stratified random sampling technique. The scope of this research is within the Klang Valley area, which is rapidly being developed with high-rise residential buildings. As a result, five classifications of maintenance fund were identified in managing high-rise residential building schemes, namely: management fund for administration and utilities, maintenance fund for exclusive facilities, maintenance fund for basic facilities, maintenance fund for support facilities and management sinking fund.

  18. The effects of high-frequency oscillations in hippocampal electrical activities on the classification of epileptiform events using artificial neural networks

    Science.gov (United States)

    Chiu, Alan W. L.; Jahromi, Shokrollah S.; Khosravani, Houman; Carlen, Peter L.; Bardakjian, Berj L.

    2006-03-01

    The existence of hippocampal high-frequency electrical activities (greater than 100 Hz) during the progression of seizure episodes in both human and animal experimental models of epilepsy has been well documented (Bragin A, Engel J, Wilson C L, Fried I and Buzsáki G 1999 Hippocampus 9 137-42; Khosravani H, Pinnegar C R, Mitchell J R, Bardakjian B L, Federico P and Carlen P L 2005 Epilepsia 46 1-10). However, this information has not been studied between successive seizure episodes or utilized in the application of seizure classification. In this study, we examine the dynamical changes of an in vitro low Mg2+ rat hippocampal slice model of epilepsy at different frequency bands using wavelet transforms and artificial neural networks. By dividing the time-frequency spectrum of each seizure-like event (SLE) into frequency bins, we can analyze their burst-to-burst variations within individual SLEs as well as between successive SLE episodes. Wavelet energy and wavelet entropy are estimated for intracellular and extracellular electrical recordings using sufficiently high sampling rates (10 kHz). We demonstrate that the activities of high-frequency oscillations in the 100-400 Hz range increase as the slice approaches SLE onsets and in later episodes of SLEs. Utilizing the time-dependent relationship between different frequency bands, we can achieve frequency-dependent state classification. We demonstrate that activities in the frequency range 100-400 Hz are critical for the accurate classification of the different states of electrographic seizure-like episodes (containing interictal, preictal and ictal states) in brain slices undergoing recurrent spontaneous SLEs. While preictal activities can be classified with an average accuracy of 77.4 ± 6.7% utilizing the frequency spectrum in the range 0-400 Hz, we can also achieve a similar level of accuracy by using a nonlinear relationship between 100-400 Hz and <4 Hz frequency bands only.
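
    A minimal sketch of the kind of band-wise feature extraction described above is given below; it uses the PyWavelets package and a synthetic 10 kHz signal, and the wavelet family, decomposition depth and signal content are assumptions rather than the authors' settings.

        import numpy as np
        import pywt

        fs = 10_000                      # 10 kHz sampling rate, as in the recordings
        t = np.arange(0, 2.0, 1 / fs)
        # Synthetic field potential: slow activity plus a burst of 100-400 Hz oscillations.
        signal = np.sin(2 * np.pi * 3 * t) + 0.4 * np.sin(2 * np.pi * 250 * t) * (t > 1.0)

        # Discrete wavelet decomposition; wavedec returns [cA6, cD6, ..., cD1],
        # where detail level k spans roughly fs/2^(k+1) .. fs/2^k Hz.
        coeffs = pywt.wavedec(signal, "db4", level=6)

        energies = np.array([np.sum(c ** 2) for c in coeffs])
        rel = energies / energies.sum()                    # relative wavelet energy per band
        entropy = -np.sum(rel * np.log(rel + 1e-12))       # wavelet entropy

        for k, e in enumerate(rel):
            print(f"band {k}: relative energy {e:.3f}")
        print("wavelet entropy:", round(entropy, 3))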

  19. Preprocessing for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz

    2016-09-01

    The performance of binary classification of breast cancer suffers from a high imbalance between classes. In this article we present a preprocessing module designed to counteract this discrepancy in the training examples. The preprocessing module is based on standardization, the Synthetic Minority Oversampling Technique and undersampling. We show how each algorithm influences classification accuracy. Results indicate that the described module improves the overall Area Under Curve by up to 10% on the tested dataset. Furthermore, we propose other methods of dealing with imbalanced datasets in breast cancer classification.
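
    A preprocessing chain of this kind can be sketched with scikit-learn and imbalanced-learn as follows; the dataset, classifier and resampling ratios are placeholders, not the configuration used in the article.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from imblearn.over_sampling import SMOTE
        from imblearn.under_sampling import RandomUnderSampler
        from imblearn.pipeline import Pipeline

        # Synthetic imbalanced two-class problem standing in for the thermogram features.
        X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Preprocessing module: standardization, SMOTE oversampling, then undersampling.
        pipe = Pipeline([
            ("scale", StandardScaler()),
            ("smote", SMOTE(sampling_strategy=0.5, random_state=0)),
            ("under", RandomUnderSampler(sampling_strategy=0.8, random_state=0)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        pipe.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))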

  20. Classification and Clinical Management of Variants of Uncertain Significance in High Penetrance Cancer Predisposition Genes.

    Science.gov (United States)

    Moghadasi, Setareh; Eccles, Diana M; Devilee, Peter; Vreeswijk, Maaike P G; van Asperen, Christi J

    2016-04-01

    In 2008, the International Agency for Research on Cancer (IARC) proposed a system for classifying sequence variants in highly penetrant breast and colon cancer susceptibility genes, linked to clinical actions. This system uses a multifactorial likelihood model to calculate the posterior probability that an altered DNA sequence is pathogenic. Variants with a posterior probability between 5% and 94.9% (class 3) are categorized as variants of uncertain significance (VUS). This interval is wide and might include variants with a substantial difference in pathogenicity at either end of the spectrum. We think that carriers of class 3 variants would benefit from a fine-tuning of this classification. Classification of VUS to a category with a defined clinical significance is very important because for carriers of a pathogenic mutation full surveillance and risk-reducing surgery can reduce cancer incidence. Counselees who are not carriers of a pathogenic mutation can be discharged from intensive follow-up and avoid unnecessary risk-reducing surgery. By means of examples, we show how, in selected cases, additional data can lead to reclassification of some variants to a different class with different recommendations for surveillance and therapy. To improve the clinical utility of this classification system, we suggest a pragmatic adaptation to clinical practice.

  1. Large patch convolutional neural networks for the scene classification of high spatial resolution imagery

    Science.gov (United States)

    Zhong, Yanfei; Fei, Feng; Zhang, Liangpei

    2016-04-01

    The increase of the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher level semantics from the high spatial resolution remote-sensing (HSR-RS) images, which is often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Due to the small data volume of the available HSR-RS scene datasets, which is far smaller than that of natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). The large patch sampling is used to generate hundreds of possible scene patches for the feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which can greatly reduce the total parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance that is comparable to the state of the art on public HSR-RS scene datasets.
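
    The key architectural idea, replacing the fully connected classifier with a global average pooling layer over per-class feature maps, can be sketched in Keras as below; the layer sizes, patch size and class count are illustrative and do not reproduce the LPCNN configuration from the paper.

        from tensorflow import keras
        from tensorflow.keras import layers

        NUM_CLASSES = 21          # number of land-use scene classes (illustrative)
        PATCH = 64                # size of the large patches sampled from each scene

        # Small CNN whose classifier head is a global-average-pooling layer instead of
        # fully connected layers, which keeps the total parameter count low.
        model = keras.Sequential([
            keras.Input(shape=(PATCH, PATCH, 3)),
            layers.Conv2D(32, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(NUM_CLASSES, 3, padding="same", activation="relu"),
            layers.GlobalAveragePooling2D(),      # one averaged score map per class
            layers.Activation("softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()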

  2. High-accuracy extrapolated ab initio thermochemistry. II. Minor improvements to the protocol and a vital simplification

    Science.gov (United States)

    Bomble, Yannick J.; Vázquez, Juana; Kállay, Mihály; Michauk, Christine; Szalay, Péter G.; Császár, Attila G.; Gauss, Jürgen; Stanton, John F.

    2006-08-01

    The recently developed high-accuracy extrapolated ab initio thermochemistry method for theoretical thermochemistry, which is intimately related to other high-precision protocols such as the Weizmann-3 and focal-point approaches, is revisited. Some minor improvements in theoretical rigor are introduced which do not lead to any significant additional computational overhead, but are shown to have a negligible overall effect on the accuracy. In addition, the method is extended to completely treat electron correlation effects up to pentuple excitations. The use of an approximate treatment of quadruple and pentuple excitations is suggested; the former as a pragmatic approximation for standard cases and the latter when extremely high accuracy is required. For a test suite of molecules that have rather precisely known enthalpies of formation {as taken from the active thermochemical tables of Ruscic and co-workers [Lecture Notes in Computer Science, edited by M. Parashar (Springer, Berlin, 2002), Vol. 2536, pp. 25-38; J. Phys. Chem. A 108, 9979 (2004)]}, the largest deviations between theory and experiment are 0.52, -0.70, and 0.51 kJ mol⁻¹ for the latter three methods, respectively. Some perspective is provided on this level of accuracy, and sources of remaining systematic deficiencies in the approaches are discussed.

  3. Mineral contents determination and accuracy evaluation based on classification of petrographic images

    Institute of Scientific and Technical Information of China (English)

    叶润青; 牛瑞卿; 张良培; 易顺华

    2011-01-01

    Traditional methods of mineral content determination involve considerable human error and lack accuracy evaluation. A new approach is proposed for mineral content determination and accuracy evaluation based on image classification. The method first divides a petrographic image into different mineral classes using image classification algorithms, then obtains the mineral contents through pixel statistics, and finally evaluates the content accuracy using a confusion matrix (CM). According to the spectral and texture features of petrographic images, two approaches are proposed for mineral content determination. For petrographic images with simple texture and large colour distinction, direct classification is adopted; an experiment on granite hand-specimen photographs shows that supervised classifiers are more accurate than unsupervised ones, with the Maximum Likelihood Classifier (MLC) giving the highest accuracy of 94.25%. For petrographic images with complex mineral texture (such as interference colours and twins), an object-oriented Multi-resolution Segmentation (MS) algorithm is employed for image segmentation before mineral classification; an experiment on a muscovite monzogranite microscope image shows a content estimation accuracy of 94.85%.
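
    The pixel-statistics and confusion-matrix steps translate directly into code; the Python sketch below uses a randomly generated label image and hypothetical mineral classes purely to show the mechanics, not the paper's data.

        import numpy as np
        from sklearn.metrics import confusion_matrix, accuracy_score

        minerals = ["quartz", "feldspar", "biotite"]          # illustrative classes

        # A hypothetical classified thin-section image: each pixel holds a class index.
        rng = np.random.default_rng(2)
        classified = rng.choice(len(minerals), size=(200, 300), p=[0.4, 0.45, 0.15])

        # Mineral contents from pixel statistics of the classified image.
        counts = np.bincount(classified.ravel(), minlength=len(minerals))
        for name, c in zip(minerals, counts):
            print(f"{name}: {100 * c / classified.size:.1f}%")

        # Accuracy evaluation against reference labels at sampled pixels.
        pred = classified.ravel()[:500]
        ref = pred.copy()
        flip = rng.random(500) < 0.06          # perturb ~6% of the reference labels
        ref[flip] = rng.choice(len(minerals), size=flip.sum())
        print("confusion matrix:\n", confusion_matrix(ref, pred))
        print("overall accuracy:", accuracy_score(ref, pred))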

  4. SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation

    Directory of Open Access Journals (Sweden)

    Euiho Kim

    2017-09-01

    Full Text Available In the Federal Aviation Administration’s (FAA) performance based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability during a Global Navigation Satellite System (GNSS) outage. However, the main drawback of the DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance measuring accuracy of over 100 m. This paper introduces a method to improve DME distance measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed using Genetic Algorithms and is less susceptible to multipath effects, so that the ranging error is reduced by 36.0–77.3% compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.

  5. Friction compensation design based on state observer and adaptive law for high-accuracy positioning system

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Friction is one of the main factors that affect the positioning accuracy of motion system. Friction compensation based on friction model is usually adopted to eliminate the nonlinear effect of friction. This paper presents a proportional-plus-derivative (PD) feedback controller with a friction compensator based on LuGre friction model. We also design a state observer to observe the unknown state of LuGre friction model, and adopt a parameter adaptive law and off-line approximation to estimate the parameters of LuGre friction model. Comparative experiments are carried out among our proposed controller, PD controller with friction compensation based on classical friction model, and PD controller without friction compensation. Experimental results demonstrate that our proposed controller can achieve better performance, especially higher positioning accuracy.

  6. Ways to help Chinese Students in Senior High School improve language accuracy in writing

    Institute of Scientific and Technical Information of China (English)

    潘惠红

    2015-01-01

    Introduction In Chinese ELT (English language teaching), as in other countries, both fluency and accuracy are considered important either in the teaching or assessment of writing. In this respect, the last decade has seen reforms in the College Entrance Examination in Guangdong Province. With two writing tasks being set as assessment, task one requires students to summarise Chinese language information into five English sentences while the

  7. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ^2 + h^4). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution and a priori estimates in the maximum norm are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.

  8. High-accuracy current measurement with low-cost shunts by means of dynamic error correction

    OpenAIRE

    Weßkamp, Patrick; Melbert, Joachim

    2016-01-01

    Measurement of electrical current is often performed by using shunt resistors. Thermal effects due to self-heating and ambient temperature variation limit the achievable accuracy, especially if low-cost shunt resistors with increased temperature coefficients are utilized. In this work, a compensation method is presented which takes static and dynamic temperature drift effects into account and provides a significant reduction of measurement error. A thermal model of the shunt...

  9. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables) which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.

  10. High accuracy integrated global positioning system/inertial navigation system LDRD: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Owen, T.E.; Meindl, M.A.; Fellerhoff, J.R.

    1997-03-01

    This report contains the results of a Sandia National Laboratories Directed Research and Development (LDRD) program to investigate the integration of Global Positioning System (GPS) and inertial navigation system (INS) technologies toward the goal of optimizing the navigational accuracy of the combined GPS/INS system. The approach undertaken is to integrate the data from an INS, which has long term drifts, but excellent short term accuracy, with GPS carrier phase signal information, which is accurate to the sub-centimeter level, but requires continuous tracking of the GPS signals. The goal is to maintain a sub-meter accurate navigation solution while the vehicle is in motion by using the GPS measurements to estimate the INS navigation errors and then using the refined INS data to aid the GPS carrier phase cycle slip detection and correction and bridge dropouts in the GPS data. The work was expanded to look at GPS-based attitude determination, using multiple GPS receivers and antennas on a single platform, as a possible navigation aid. Efforts included not only the development of data processing algorithms and software, but also the collection and analysis of GPS and INS flight data aboard a Twin Otter aircraft. Finally, the application of improved navigation system accuracy to synthetic aperture radar (SAR) target location is examined.

  11. THE EFFECT OF MODERATE AND HIGH-INTENSITY FATIGUE ON GROUNDSTROKE ACCURACY IN EXPERT AND NON-EXPERT TENNIS PLAYERS

    Directory of Open Access Journals (Sweden)

    Mark Lyons

    2013-06-01

    Full Text Available Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on

  12. Classification accuracy of the Millon Clinical Multiaxial Inventory-III modifier indices in the detection of malingering in traumatic brain injury.

    Science.gov (United States)

    Aguerrevere, Luis E; Greve, Kevin W; Bianchini, Kevin J; Ord, Jonathan S

    2011-06-01

    The present study used criterion groups validation to determine the ability of the Millon Clinical Multiaxial Inventory-III (MCMI-III) modifier indices to detect malingering in traumatic brain injury (TBI). Patients with TBI who met criteria for malingered neurocognitive dysfunction (MND) were compared to those who showed no indications of malingering. Data were collected from 108 TBI patients referred for neuropsychological evaluation. Base rate (BR) scores were used for MCMI-III modifier indices: Disclosure, Desirability, and Debasement. Malingering classification was based on the Slick, Sherman, and Iverson (1999) criteria for MND. TBI patients were placed in one of three groups: MND (n = 55), not-MND (n = 26), or Indeterminate (n = 26). The not-MND group had lower modifier index scores than the MND group. At scores associated with a 4% false-positive (FP) error rate, sensitivity was 47% for Disclosure, 51% for Desirability, and 55% for Debasement. Examination of joint classification analysis demonstrated 54% sensitivity at cutoffs associated with 0% FP error rate. Results suggested that scores from all MCMI-III modifier indices are useful for identifying intentional symptom exaggeration in TBI. Debasement was the most sensitive of the three indices. Clinical implications are discussed.

  13. A Novel System for the Surgical Staging of Primary High-grade Osteosarcoma: The Birmingham Classification.

    Science.gov (United States)

    Jeys, Lee M; Thorne, Chris J; Parry, Michael; Gaston, Czar Louie L; Sumathi, Vaiyapuri P; Grimer, J Robert

    2017-03-01

    Chemotherapy response and surgical margins have been shown to be associated with the risk of local recurrence in patients with osteosarcoma. However, existing surgical staging systems fail to reflect the response to chemotherapy or define an appropriate safe metric distance from the tumor that will allow complete excision and closely predict the chance of disease recurrence. We therefore sought to review a group of patients with primary high-grade osteosarcoma treated with neoadjuvant chemotherapy and surgical resection and analyzed margins and chemotherapy response in terms of local recurrence. (1) What predictor or combination of predictors available to the clinician can be assessed that more reliably predicts the likelihood of local recurrence? (2) Can we determine a better predictor of local recurrence-free survival than the currently applied system of surgical margins? (3) Can we determine a better predictor of overall survival than the currently applied system of surgical margins? This retrospective study included all patients with high-grade conventional osteosarcomas without metastasis at diagnosis treated at one center between 1997 and 2012 with preoperative chemotherapy followed by resection or amputation of the primary tumor who were younger than age 50 years with minimum 24-month followup for those still alive. A total of 389 participants matched the inclusion criteria. Univariate log-rank test and multivariate Cox analyses were undertaken to identify predictors of local recurrence-free survival (LRFS). The Birmingham classification was devised on the basis of two stems: the response to chemotherapy (good response = ≥ 90% necrosis; poor response = < 90% necrosis) and the surgical margin. A margin ≤ 2 mm (hazard ratio [HR], 9.9; 95% CI, 1.2-82; p = 0.03 versus radical margin HR, 1) and a poor response to neoadjuvant chemotherapy (HR, 3.8; 95% CI, 1.7-8.4; p = 0.001 versus good response HR, 1) were independent risk factors for local recurrence (LR). The best predictor of LR, however, was a combination of margins ≤ 2 mm and

  14. Inversion of High Frequency Acoustic Data for Sediment Properties Needed for the Detection and Classification of UXOs

    Science.gov (United States)

    2015-05-26

    Final report for SERDP contract W912HQ-12-C-0049 (project MR): Inversion of High Frequency Acoustic Data for Sediment Properties Needed for the Detection and Classification of UXOs. The surviving abstract fragment notes only that, due to time limitations and to facilitate the deployments of both sonars, the 7125s were deployed using a pole mount for the inversion testing.

  15. Automatic Classification of High Resolution Satellite Imagery - a Case Study for Urban Areas in the Kingdom of Saudi Arabia

    Science.gov (United States)

    Maas, A.; Alrajhi, M.; Alobeid, A.; Heipke, C.

    2017-05-01

    Updating topographic geospatial databases is often performed based on current remotely sensed images. To automatically extract the object information (labels) from the images, supervised classifiers are being employed. Decisions to be taken in this process concern the definition of the classes which should be recognised, the features to describe each class and the training data necessary in the learning part of classification. With a view to large scale topographic databases for fast-developing urban areas in the Kingdom of Saudi Arabia we conducted a case study, which investigated the following two questions: (a) which set of features is best suited for the classification?; (b) what is the added value of height information, e.g. derived from stereo imagery? Using stereoscopic GeoEye and Ikonos satellite data we investigate these two questions based on our research on label tolerant classification using logistic regression and partly incorrect training data. We show that between five and ten features can be recommended to obtain a stable solution, that height information consistently yields an improved overall classification accuracy of about 5%, and that label noise can be successfully modelled and thus only marginally influences the classification results.

  16. Improving Crop Classification Techniques Using Optical Remote Sensing Imagery, High-Resolution Agriculture Resource Inventory Shapefiles and Decision Trees

    Science.gov (United States)

    Melnychuk, A. L.; Berg, A. A.; Sweeney, S.

    2010-12-01

    Recognition of anthropogenic effects of land use management practices on bodies of water is important for remediating and preventing eutrophication. In the case of Lake Simcoe, Ontario the main surrounding landuse is agriculture. To better manage the nutrient flow into the lake, knowledge of the management of the agricultural land is important. For this basin, a comprehensive agricultural resource inventory is required for assessment of policy and for input into water quality management and assessment tools. Supervised decision tree classification schemes, used in many previous applications, have yielded reliable classifications in agricultural land-use systems. However, when using these classification techniques the user is confronted with numerous data sources. In this study we use a large inventory of optical satellite image products (Landsat, AWiFS, SPOT and MODIS) and ancillary data sources (temporal MODIS-NDVI product signatures, digital elevation models and soil maps) at various spatial and temporal resolutions in a decision tree classification scheme. The sensitivity of the classification accuracy to various products is assessed to identify optimal data sources for classifying crop systems.

  17. Horizontal Positional Accuracy of Google Earth’s High-Resolution Imagery Archive

    Directory of Open Access Journals (Sweden)

    David Potere

    2008-12-01

    Full Text Available Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth’s landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world’s peri-urban areas.
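
    The RMSE and group-comparison computations reported above are straightforward; a small Python sketch with invented offset values (not the study's 436 control points) is shown below.

        import numpy as np
        from scipy import stats

        def rmse(errors_m):
            """Root-mean-squared positional error in metres."""
            errors_m = np.asarray(errors_m, dtype=float)
            return np.sqrt(np.mean(errors_m ** 2))

        # Illustrative offsets (metres) between Google Earth and Landsat GeoCover
        # control points in two country groups -- not the study's measurements.
        rng = np.random.default_rng(3)
        developed = np.abs(rng.normal(0, 24, size=200))
        developing = np.abs(rng.normal(0, 44, size=200))

        print("RMSE developed  :", round(rmse(developed), 1), "m")
        print("RMSE developing :", round(rmse(developing), 1), "m")

        # Two-sample t-test on error magnitudes, as used to compare the groups.
        t, p = stats.ttest_ind(developed, developing, equal_var=False)
        print("t-test p-value  :", p)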

  18. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    Science.gov (United States)

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  19. A High-accuracy Approach to Pronunciation Prediction for Out-of-vocabulary English Word

    Institute of Scientific and Technical Information of China (English)

    WANG Hao; CHEN Gui-lin; XU Liang-xian

    2005-01-01

    Letter-to-Sound conversion is one of the fundamental issues in text-to-speech synthesis. In this paper, we address an approach to automatic prediction of word pronunciation. This approach combines example-based learning and dynamic-programming searching to predict sub-word pronunciation. Word pronunciation is formed by concatenating sub-word pronunciations. We conducted comparative experiments over a large-scale English dictionary. Experimental results show that this approach can achieve an accuracy of 70.1%, which outperforms previously published results.

  20. High accuracy wavelength locking of a DFB laser using tunable polarization interference filter

    Institute of Scientific and Technical Information of China (English)

    Xiyao Chen(陈曦曜); Jianping Xie(谢建平); Tianpeng Zhao(赵天鹏); Hai Ming(明海); Anting Wang(王安廷); Wencai Huang(黄文财); Liang Lü(吕亮); Lixin Xu(许立新)

    2003-01-01

    A temperature-tunable polarization interference filter (PIF) made of YVO4 crystal has been presented and applied for wavelength locking of a distributed feedback (DFB) semiconductor laser in dense wavelength-division-multiplexing (DWDM) optical communication systems. This new design offers a flexible way to monitor and then lock an operating wavelength of DFB laser to any preselected point without dead spots. The results show that the laser wavelength can be locked with accuracy better than ±0.01 nm with much relaxed requirement on temperature stability of the filter.

  1. High-accuracy mass determination of unstable cesium and barium isotopes

    CERN Document Server

    Ames, F; Beck, D; Bollen, G; De Saint-Simon, M; Jertz, R; Kluge, H J; Kohl, A; König, M; Lunney, M D; Martel, I; Moore, R B; Otto, T; Patyk, Z; Raimbault-Hartmann, H; Rouleau, G; Savard, G; Schark, E; Schwarz, S; Schweikhard, L; Stolzenberg, H; Szerypo, J

    1999-01-01

    Direct mass measurements of short-lived Cs and Ba isotopes have been performed with the tandem Penning trap mass spectrometer ISOLTRAP installed at the on-line isotope separator ISOLDE at CERN. Typically, a mass resolving power of 600 000 and an accuracy of $\\delta \\mbox{m} \\approx 13$ keV have been obtained. The masses of $^{123,124,126}$Ba and $^{122m}$Cs were measured for the first time. A least-squares adjustment has been performed and the experimental masses are compared with theoretical ones, particularly in the frame of a macroscopic-microscopic model.

  2. High-accuracy mass determination of neutron-rich rubidium and strontium isotopes

    CERN Document Server

    Raimbault-Hartmann, H; Beck, D; Bollen, G; De Saint-Simon, M; Kluge, H J; König, M; Moore, R B; Schwarz, S; Savard, G; Szerypo, J

    2002-01-01

    The Penning-trap mass spectrometer ISOLTRAP, installed at the on-line isotope separator ISOLDE at CERN, has been used to measure atomic masses of $^{88,89,90m,91,92,93,94}$Rb and $^{91-95}$Sr. Using a resolving power of R $\approx$ 1 million, a mass accuracy of typically 10 keV was achieved for all nuclides. Discrepancies with older data are analyzed and discussed, leading to corrections to those data. Together with the present ISOLTRAP data, these corrected data have been used in the general mass adjustment.

  3. High-Accuracy Measurements of the Centre of Gravity of Avalanches in Proportional Chambers

    Science.gov (United States)

    Charpak, G.; Jeavons, A.; Sauli, F.; Stubbs, R.

    1973-09-24

    In a multiwire proportional chamber the avalanches occur close to the anode wires. The motion of the positive ions in the large electric fields at the vicinity of the wires induces fast-rising positive pulses on the surrounding electrodes. Different methods have been developed in order to determine the position of the centre of the avalanches. In the method we describe, the centre of gravity of the pulse distribution is measured directly. It seems to lead to an accuracy which is limited only by the stability of the spatial distribution of the avalanches generated by the process being measured.

  4. High-Accuracy Tracking Control of Robot Manipulators Using Time Delay Estimation and Terminal Sliding Mode

    Directory of Open Access Journals (Sweden)

    Maolin Jin

    2011-09-01

    Full Text Available A time delay estimation based general framework for trajectory tracking control of robot manipulators is presented. The controller consists of three elements: a time‐delay‐estimation element that cancels continuous nonlinearities of robot dynamics, an injecting element that endows desired error dynamics, and a correcting element that suppresses residual time delay estimation error caused by discontinuous nonlinearities. Terminal sliding mode is used for the correcting element to pursue fast convergence of the time delay estimation error. Implementation of proposed control is easy because calculation of robot dynamics including friction is not required. Experimental results verify high‐accuracy trajectory tracking of industrial robot manipulators.

  5. High-Accuracy Tracking Using Ultrawideband Signals for Enhanced Safety of Cyclists

    Directory of Open Access Journals (Sweden)

    Davide Dardari

    2017-01-01

    Full Text Available In this paper, an ultrawideband localization system to improve the cyclists’ safety is presented. The architectural solutions proposed consist of tags placed on bikes, whose positions have to be estimated, and anchors, acting as reference nodes, located at intersections and/or on vehicles. The peculiarities of the localization system in terms of accuracy and cost enable its adoption with enhanced risk assessment units situated on the infrastructure/vehicle, depending on the architecture chosen, as well as real-time warning to the road users. Experimental results reveal that the localization error, in both static and dynamic conditions, is below 50 cm in most of the cases.

  6. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    Science.gov (United States)

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.
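
    The reported gain from adding topographic data to visual reflectance can be illustrated with a toy supervised classification in Python; the classifier, feature distributions and class names below are assumptions made only to show the comparison, not the study's workflow.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic per-pixel samples: three reflectance bands plus elevation, for
        # three hypothetical cover classes (e.g. sand, marsh, water).
        rng = np.random.default_rng(4)
        n = 300
        refl = np.vstack([rng.normal(m, 1.0, size=(n, 3)) for m in (0.0, 0.6, 1.2)])
        elev = np.concatenate([rng.normal(m, 0.3, size=n) for m in (2.0, 0.8, 0.1)])
        y = np.repeat([0, 1, 2], n)

        clf = GradientBoostingClassifier(random_state=0)
        acc_refl = cross_val_score(clf, refl, y, cv=5).mean()
        acc_both = cross_val_score(clf, np.column_stack([refl, elev]), y, cv=5).mean()
        print(f"reflectance only       : {acc_refl:.3f}")
        print(f"reflectance + elevation: {acc_both:.3f}")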

  7. Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Gangchen Hua

    2017-01-01

    Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera’s intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. The proposed method can effectively correct the typical false-positive localizations of previous methods, thus achieving better recall ratios and better precision.

  8. High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".

    Science.gov (United States)

    Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo

    2004-03-01

    Three-dimensional digital modeling of Heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to Heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from the project planning to the solution of the various problems due to range camera calibration and to non-optically-cooperative materials. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. Such control has demonstrated how, in some situations, the ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure in order to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.

  9. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland

    Science.gov (United States)

    Lu, Bing; He, Yuhong

    2017-06-01

    Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. Maps of species distribution are subsequently used for a spatio-temporal change analysis. Results indicate that UAV-acquired imagery is an incomparable data source for studying fine-scale grassland species composition

  10. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, the conventional frequency-to-power mapping technique is developed by performing a two-stage frequency measurement cooperating with digital signal processing. In the experiment, a 10 GHz measurement range is guaranteed and the average uncertainty of the estimated microwave frequency is 5.4 MHz, which verifies that the measurement accuracy is significantly improved, achieving an unprecedented 10⁻³ relative error. This high-accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications.

  11. Brief Report: Face Configuration Accuracy and Processing Speed Among Adults with High-Functioning Autism Spectrum Disorders

    OpenAIRE

    Faja, Susan; Webb, Sara Jane; Merkle, Kristen; Aylward, Elizabeth; Dawson, Geraldine

    2008-01-01

    The present study investigates the accuracy and speed of face processing employed by high-functioning adults with autism spectrum disorders (ASDs). Two behavioral experiments measured sensitivity to distances between features and face recognition when performance depended on holistic versus featural information. Results suggest adults with ASD were less accurate, but responded as quickly as controls for both tasks. In contrast to previous findings with children, adults with ASD demonstrated a...

  12. The Effect of Moderate and High-Intensity Fatigue on Groundstroke Accuracy in Expert and Non-Expert Tennis Players

    OpenAIRE

    Mark Lyons; Yahya Al-Nakeeb; Joanne Hankey; Alan Nevill

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expe...

  13. A High Accuracy Pedestrian Detection System Combining a Cascade AdaBoost Detector and Random Vector Functional-Link Net

    OpenAIRE

    Zhihui Wang; Sook Yoon; Shan Juan Xie; Yu Lu; Dong Sun Park

    2014-01-01

    In pedestrian detection methods, high detection rates are typically obtained at the cost of a large number of false positives. In order to overcome this problem, the authors propose an accurate pedestrian detection system based on two machine learning methods: a cascade AdaBoost detector and a random vector functional-link net. During the offline training phase, the parameters of the cascade AdaBoost detector and the random vector functional-link net are trained on a standard dataset. These...

  14. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day. Such a trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
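
    A compact sketch of the proposed scheme, partitioning the feature space, training one SVM per subset and combining the outputs by majority voting, is given below; it uses contiguous feature blocks and synthetic data, whereas the paper partitions by feature redundancy.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score
        from sklearn.svm import SVC

        # High-dimensional synthetic data standing in for e.g. a gene-expression matrix.
        X, y = make_classification(n_samples=400, n_features=200, n_informative=40,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Partition the feature space (here: simple contiguous blocks) and
        # train one SVM per feature subset.
        n_parts = 5
        subsets = np.array_split(np.arange(X.shape[1]), n_parts)
        models = [SVC().fit(X_tr[:, idx], y_tr) for idx in subsets]

        # Combine the base classifiers by majority voting.
        votes = np.stack([m.predict(X_te[:, idx]) for m, idx in zip(models, subsets)])
        majority = (votes.mean(axis=0) > 0.5).astype(int)

        print("ensemble accuracy:", accuracy_score(y_te, majority))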

  15. Fusing visual and clinical information for lung tissue classification in high-resolution computed tomography.

    Science.gov (United States)

    Depeursinge, Adrien; Racoceanu, Daniel; Iavindrasana, Jimison; Cohen, Gilles; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning

    2010-09-01

    We investigate the influence of the clinical context of high-resolution computed tomography (HRCT) images of the chest on tissue classification. 2D regions of interest in HRCT axial slices from patients affected with an interstitial lung disease are automatically classified into five classes of lung tissue. Relevance of the clinical parameters is studied before fusing them with visual attributes. Two multimedia fusion techniques are compared: early versus late fusion. Early fusion concatenates features in one single vector, yielding a true multimedia feature space. Late fusion consists of combining the probability outputs of two support vector machines. The late fusion scheme allowed a maximum of 84% correct predictions of testing instances among the five classes of lung tissue. This represents a significant improvement of 10% compared to a pure visual-based classification. Moreover, the late fusion scheme showed high robustness to the number of clinical parameters used, which suggests that it is appropriate for mining clinical attributes with missing values in clinical routine. Copyright (c) 2010 Elsevier B.V. All rights reserved.
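
    The difference between the two fusion schemes can be made concrete with a small scikit-learn sketch: early fusion concatenates the two feature blocks, while late fusion averages the probability outputs of two separately trained SVMs; the synthetic data and the equal-weight averaging are illustrative assumptions, not the paper's setup.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score
        from sklearn.svm import SVC

        # Synthetic "visual" and "clinical" feature blocks for the same cases.
        X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                                   random_state=1)
        visual, clinical = X[:, :22], X[:, 22:]
        idx_tr, idx_te = train_test_split(np.arange(len(y)), random_state=1)

        # Early fusion: concatenate both modalities into one feature vector.
        fused = np.hstack([visual, clinical])
        early = SVC().fit(fused[idx_tr], y[idx_tr])
        p_early = early.predict(fused[idx_te])

        # Late fusion: one SVM per modality, then combine their probability outputs.
        svm_v = SVC(probability=True).fit(visual[idx_tr], y[idx_tr])
        svm_c = SVC(probability=True).fit(clinical[idx_tr], y[idx_tr])
        proba = 0.5 * (svm_v.predict_proba(visual[idx_te])
                       + svm_c.predict_proba(clinical[idx_te]))
        p_late = proba.argmax(axis=1)

        print("early fusion accuracy:", accuracy_score(y[idx_te], p_early))
        print("late fusion accuracy :", accuracy_score(y[idx_te], p_late))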

  16. Towards a magnetic field stabilization at ISOLTRAP for high-accuracy mass measurements on exotic nuclides

    CERN Document Server

    Marie-Jeanne, M; Blaum, K; Djekic, S; Dworschak, M; Hager, U; Herlert, A; Nagy, S; Savreux, R; Schweikhard, L; Stahl, S; Yazidjian, C

    2008-01-01

    The field stability of a mass spectrometer plays a crucial role in the accuracy of mass measurements. In the case of mass determination of short-lived nuclides with a Penning trap, major causes of fluctuations are temperature variations in the vicinity of the trap and pressure changes in the liquid helium cryostat of the superconducting magnet. Thus systems for the temperature and pressure stabilization of the Penning trap mass spectrometer ISOLTRAP at the ISOLDE facility at CERN have been installed. A reduction of the temperature and pressure fluctuations by at least an order of magnitude has been achieved, which corresponds to relative magnetic field changes of ΔB/B = 2.7×10⁻⁹ and 1.1×10⁻¹⁰, respectively.

  17. High accuracy calculation of the hydrogen negative ion in strong magnetic fields

    Institute of Scientific and Technical Information of China (English)

    Zhao Ji-Jun; Wang Xiao-Feng; Qiao Hao-Xue

    2011-01-01

    Using a full configuration-interaction method with a Hylleraas-Gaussian basis, this paper investigates the 1¹0⁺, 1¹(−1)⁺ and 1¹(−2)⁺ states of the hydrogen negative ion in strong magnetic fields. The total energies, electron detachment energies and derivatives of the total energy with respect to the magnetic field are presented as functions of the magnetic field over a wide range of field strengths. Compared with the available theoretical data, the accuracy of the energies is enhanced significantly. The field regimes 3 < γ < 4 and 0.02 < γ < 0.05, in which the 1¹(−1)⁺ and 1¹(−2)⁺ states start to become bound, respectively, are also determined from the calculated electron detachment energies.

  18. Hyperbolic Method for Dispersive PDEs: Same High-Order of Accuracy for Solution, Gradient, and Hessian

    Science.gov (United States)

    Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki

    2016-01-01

    In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.
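
    As a rough illustration of the idea (not the authors' exact formulation), the third-order dispersive term of the classical KdV equation can be replaced by first-order equations for auxiliary gradient and Hessian variables that relax, in a pseudo-time τ with relaxation time T_r, towards u_x and u_xx:

    ```latex
    % Sketch of a first-order hyperbolic reformulation of the KdV equation
    % u_t + 6 u u_x + u_{xxx} = 0, with auxiliary variables p \approx u_x and
    % q \approx u_{xx} (illustrative; not the paper's exact system).
    \begin{align}
      \partial_t u + 6\,u\,u_x + q_x &= 0, \\
      T_r\,\partial_\tau p &= u_x - p, \\
      T_r\,\partial_\tau q &= p_x - q.
    \end{align}
    % At pseudo-steady state (\partial_\tau = 0), p = u_x and q = u_{xx}, so the
    % first equation recovers the original dispersive PDE while the discretization
    % only ever touches first derivatives; p and q then deliver the gradient and
    % Hessian with the same order of accuracy as u itself.
    ```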

  19. Evaluation of Heart Rate Assessment Timing, Communication, Accuracy, and Clinical Decision-Making during High Fidelity Simulation of Neonatal Resuscitation

    Directory of Open Access Journals (Sweden)

    Win Boon

    2014-01-01

    Full Text Available Objective. Accurate heart rate (HR) determination during neonatal resuscitation (NR) informs subsequent NR actions. This study's objective was to evaluate the timeliness, communication, and accuracy of HR determination during high-fidelity NR simulations completed by house officers during neonatal intensive care unit (NICU) rotations. Methods. In 2010, house officers on NICU rotations completed high-fidelity NR simulation. We reviewed 80 house officers' videotaped performance on their initial high-fidelity simulation session, prior to training and performance debriefing. We calculated the proportion of cases congruent with NR guidelines, using chi-square analysis to evaluate performance across the HR ranges relevant to NR decision-making: <60, 60–99, and ≥100 beats per minute (bpm). Results. 87% used umbilical cord palpation, 57% initiated HR assessment within 30 seconds, 70% determined HR accurately, and 74% communicated it appropriately. HR determination accuracy varied significantly across HR ranges, with 87%, 57%, and 68% for HR <60, 60–99, and ≥100 bpm, respectively (P<0.001). Conclusions. The timeliness, communication, and accuracy of house officers' HR determination are suboptimal, particularly for HR 60–100 bpm, which might lead to inappropriate decision-making and NR care. Training implications include emphasizing more accurate HR determination methods, better communication, and improved HR interpretation during NR.
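
    The chi-square comparison of accuracy across the three HR ranges amounts to a test on a contingency table of correct versus incorrect determinations per range. The abstract reports only percentages, so the counts below are purely hypothetical placeholders; only the structure of the analysis is illustrated.

    ```python
    # Chi-square test of HR-determination accuracy across HR ranges (hypothetical counts).
    from scipy.stats import chi2_contingency

    # Rows: HR < 60, 60-99, >= 100 bpm; columns: [correct, incorrect].
    # These counts are illustrative placeholders, not the study's data.
    observed = [
        [26, 4],    # HR < 60 bpm
        [17, 13],   # HR 60-99 bpm
        [15, 7],    # HR >= 100 bpm
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
    ```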

  20. UNSUPERVISED CLASSIFICATION OF HIGH RESOLUTION SATELLITE IMAGERY BY SELF-ORGANIZING NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    ÁRPÁD BARSI

    2010-06-01

    Full Text Available The paper discusses the importance of modern high-resolution satellite imagery. The large volume of acquired data must be processed efficiently, and the Kohonen-type self-organizing map has proven to be a suitable tool for this purpose. The paper gives an introduction to this method. The tests have shown that, after a resampling step, the multispectral image information can be used as neural network input, and the derived network weights are able to evaluate the whole image with acceptable thematic accuracy.
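
    A minimal Kohonen self-organizing map for clustering multispectral pixel vectors can be written directly in NumPy. The map size, the linearly decaying learning rate and neighborhood width, and the function names below are illustrative choices, not those of the paper.

    ```python
    # Minimal Kohonen self-organizing map for unsupervised clustering of
    # multispectral pixel vectors (illustrative sketch).
    import numpy as np

    def train_som(pixels, map_h=5, map_w=5, n_iter=10000, lr0=0.5, sigma0=2.0, seed=0):
        """pixels: (n_samples, n_bands) array of (resampled) multispectral values."""
        rng = np.random.default_rng(seed)
        n_bands = pixels.shape[1]
        weights = rng.random((map_h, map_w, n_bands))
        # Grid coordinates of every neuron, used by the neighborhood function.
        grid = np.stack(np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij"), axis=-1)

        for t in range(n_iter):
            frac = t / n_iter
            lr = lr0 * (1.0 - frac)                 # linearly decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-3    # shrinking neighborhood width
            x = pixels[rng.integers(len(pixels))]
            # Best-matching unit: neuron whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
        return weights

    def classify(pixels, weights):
        """Assign each pixel the index of its best-matching neuron (its thematic cluster)."""
        d = np.linalg.norm(pixels[:, None, None, :] - weights[None, ...], axis=-1)
        return np.argmin(d.reshape(len(pixels), -1), axis=1)
    ```

    Applying classify to every pixel of the resampled image yields an unsupervised thematic map whose clusters can then be labeled and checked for thematic accuracy.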