WorldWideScience

Sample records for high classification accuracies

  1. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, where it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
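
    The masking idea (a binary chromosome that switches features on or off, scored by the accuracy of a classifier trained on the surviving features) can be sketched as follows; the population size, mutation rate, k-NN fitness classifier and synthetic data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           n_redundant=2, random_state=0)

def fitness(mask):
    # Cross-validated accuracy of a classifier trained only on unmasked features.
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Initialise a small population of binary masks (1 = keep the feature).
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]   # sort by fitness, best first
    parents = pop[:10]                    # truncation selection
    # Uniform crossover between random parent pairs.
    idx = rng.integers(0, 10, size=(10, 2))
    cross = rng.integers(0, 2, size=(10, X.shape[1]))
    children = np.where(cross, parents[idx[:, 0]], parents[idx[:, 1]])
    # Bit-flip mutation with small probability.
    flips = rng.random(children.shape) < 0.02
    children = np.abs(children - flips.astype(int))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("features kept:", int(best.sum()), "of", X.shape[1])
print("masked accuracy: %.3f" % fitness(best))
```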

  2. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
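
    A toy version of such a table can be computed directly once a measurement error model is assumed; here a normal error model replaces the polytomous IRT model used in the paper, and the cut score, SEM and examinee abilities are made-up values.

```python
import math
import numpy as np

# Toy illustration: observed score = true score + normal error (SEM = 3).
# The paper derives the analogous table under polytomous IRT; this
# normal-error version only illustrates the structure of such a table.
cut, sem = 60, 3.0
true_scores = np.array([55.0, 58.0, 61.0, 64.0, 70.0])  # hypothetical examinees

def p_observed_pass(true, cut, sem):
    # P(observed >= cut | true score) under the normal error model.
    z = (cut - true) / sem
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Expected 2x2 table of true vs observed classifications.
table = np.zeros((2, 2))
for t in true_scores:
    p = p_observed_pass(t, cut, sem)
    truly = int(t >= cut)
    table[truly, 1] += p        # expected observed passes
    table[truly, 0] += 1 - p    # expected observed fails

print(table)  # rows: true fail/pass, cols: expected observed fail/pass
print("expected misclassifications: %.2f" % (table[0, 1] + table[1, 0]))
```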

  3. Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.

    Directory of Open Access Journals (Sweden)

    Andre F Marquand

    Full Text Available Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level, but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum; and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.

  4. Classification Accuracy Is Not Enough

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically...

  5. Classification Accuracy Increase Using Multisensor Data Fusion

    Science.gov (United States)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may provide wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  6. Linear Discriminant Analysis achieves high classification accuracy for the BOLD fMRI response to naturalistic movie stimuli.

    Directory of Open Access Journals (Sweden)

    Hendrik Mandelkow

    2016-03-01

    Full Text Available Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
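
    The winning pipeline (dimensionality reduction by PCA followed by LDA) is straightforward to reproduce in outline; the digits dataset and component count below merely stand in for the fMRI volumes and settings of the study.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# PCA-regularised LDA: projecting onto the leading principal components
# before fitting LDA stabilises the covariance estimate when the feature
# count rivals the number of training samples.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print("accuracy: %.3f" % acc)
```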

  7. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    D. Blommesteijn (Dennis)

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and feature extraction to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the

  8. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
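
    The paper defines EMA and NIT via the entropy triangle; a simpler, related quantity, the mutual information of the contingency matrix, already exposes the accuracy paradox described above. In this sketch (with made-up confusion matrices) a 90%-accurate degenerate classifier transfers zero information, while an 85%-accurate one transfers some.

```python
import numpy as np

def mutual_information(C):
    """Mutual information (bits) between true and predicted labels,
    computed from a contingency matrix of counts."""
    P = C / C.sum()
    px = P.sum(axis=1, keepdims=True)   # true-label marginal
    py = P.sum(axis=0, keepdims=True)   # predicted-label marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = P * np.log2(P / (px * py))
    return np.nansum(terms)            # 0*log(0) terms are skipped

# A degenerate classifier on a 90/10 imbalanced task: it always predicts
# the majority class, scoring 90% accuracy but transferring no information.
degenerate = np.array([[90, 0],
                       [10, 0]])
# A classifier with lower accuracy (85%) that actually separates the classes.
informative = np.array([[80, 10],
                        [ 5,  5]])

for name, C in [("degenerate", degenerate), ("informative", informative)]:
    acc = np.trace(C) / C.sum()
    print(f"{name}: accuracy={acc:.2f}, MI={mutual_information(C):.3f} bits")
```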

  9. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Directory of Open Access Journals (Sweden)

    Frederique Lang

    2016-09-01

    Full Text Available Purpose: The authors aim at testing the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets. Design/methodology/approach: The paper is centered on cleaning datasets gathered from publishers and online resources by the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested here in comparison with manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually, but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms. Findings: We found that the performance of the algorithms used varies with the size of the sample for training. However, for the classification exercise in this paper the best performing algorithms were SVM and Boosting. The combination of these two algorithms achieved a high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks. Research limitations: The dataset gathered has significantly more records related to the topic of interest compared to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers. Practical implications: Although the
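
    A voting combination of the two best performers reported (SVM and Boosting) can be sketched as below; the synthetic dataset and hyperparameters are placeholders, not the Web of Science corpus or settings of the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hard voting: each member classifier casts one vote per record and the
# majority label wins, mirroring the paper's algorithm-combination idea.
X, y = make_classification(n_samples=300, n_features=20, random_state=1)
vote = VotingClassifier([
    ("svm", SVC(kernel="linear", random_state=1)),
    ("boost", GradientBoostingClassifier(random_state=1)),
], voting="hard")

acc = cross_val_score(vote, X, y, cv=5).mean()
print("voting accuracy: %.3f" % acc)
```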

  10. Radar target classification method with high accuracy and decision speed performance using MUSIC spectrum vectors and PCA projection

    Science.gov (United States)

    Secmen, Mustafa

    2011-10-01

    This paper introduces the performance of an electromagnetic target recognition method in the resonance scattering region, which includes the pseudo spectrum Multiple Signal Classification (MUSIC) algorithm and the principal component analysis (PCA) technique. The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which have super-resolution ability and noise resistance. In the final step, the PCA technique is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, the noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is initially obtained. Subsequently, the MUSIC algorithm is applied to this test signal and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels even with only a few reference aspect angles. Besides, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
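
    The decision stage (one PCA-derived feature vector per known target, matched to the test spectrum by maximum correlation) can be illustrated on synthetic data; the Gaussian "spectra" below merely stand in for MUSIC pseudospectra, and the uncentered SVD is one simple way to extract a single dominant component per target.

```python
import numpy as np

rng = np.random.default_rng(3)
freqs = np.linspace(0, 1, 200)

def toy_spectrum(center, shift):
    # Synthetic resonance peak standing in for a MUSIC pseudospectrum.
    return np.exp(-((freqs - center - shift) ** 2) / 0.002)

# Reference spectra for each known target over several aspect angles,
# reduced by an (uncentered) SVD to one feature vector per target.
centers = {"A": 0.3, "B": 0.5, "C": 0.7}
features = {}
for name, c in centers.items():
    S = np.array([toy_spectrum(c, s) for s in rng.normal(0, 0.01, 8)])
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    features[name] = Vt[0]  # dominant spectral shape across aspect angles

def classify(test_spec):
    # Highest absolute correlation with a reference feature vector wins.
    corrs = {n: abs(np.corrcoef(test_spec, f)[0, 1])
             for n, f in features.items()}
    return max(corrs, key=corrs.get)

# Noisy test signal from target "B" at an unknown aspect angle.
test = toy_spectrum(0.5, 0.012) + rng.normal(0, 0.05, freqs.size)
print("classified as:", classify(test))
```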

  11. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.

  12. Accuracy assessment between different image classification ...

    African Journals Online (AJOL)

    What image classification does is to assign pixel to a particular land cover and land use type that has the most similar spectral signature. However, there are possibilities that different methods or algorithms of image classification of the same data set could produce appreciable variant results in the sizes, shapes and areas of ...

  13. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements to sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity, but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than gains in performance observed in the data mining literature. PMID:22254181
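
    The boosting procedure described (trees grown on reweighted data, misclassified cases up-weighted each round, combined by weighted majority vote) corresponds to AdaBoost; a minimal comparison on synthetic data, standing in for the cardiac cohorts, might look like this.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Noisy synthetic outcome data (flip_y injects label noise, roughly
# mimicking the irreducible error in clinical outcomes).
X, y = make_classification(n_samples=1000, n_features=15, flip_y=0.1,
                           random_state=2)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=2)

tree = DecisionTreeClassifier(max_depth=3, random_state=2).fit(Xtr, ytr)
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=100, random_state=2).fit(Xtr, ytr)

tree_acc = tree.score(Xte, yte)
boost_acc = boosted.score(Xte, yte)
print("single tree accuracy:  %.3f" % tree_acc)
print("boosted trees accuracy: %.3f" % boost_acc)
```

    Whether the boosted ensemble beats the single tree by much depends on the data, which is exactly the point of the study.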

  14. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major step in Decision Tree C4.5 classification. However, this step does not remove irrelevant features, which leads to a major problem in decision tree classification called over-fitting, caused by noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework simplifies high dimensional data to low dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for classification. From experiments conducted using the UCI cervical cancer dataset with 858 instances and 36 attributes, we evaluated the performance of our framework in terms of accuracy, specificity and precision. Experimental results show that the proposed framework robustly enhances classification accuracy, with an accuracy rate of 90.70%.
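
    A rough equivalent of the proposed pipeline can be written with scikit-learn, with CART's entropy criterion standing in for C4.5's information-gain splits and the built-in breast cancer data replacing the UCI cervical cancer dataset; the component count is an illustrative guess.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Tree grown on all (partly correlated) features vs. a tree grown on a
# reduced set of non-correlated principal components.
plain = make_pipeline(
    StandardScaler(),
    DecisionTreeClassifier(criterion="entropy", random_state=0))
reduced = make_pipeline(
    StandardScaler(), PCA(n_components=8),
    DecisionTreeClassifier(criterion="entropy", random_state=0))

accs = {}
for name, clf in [("tree only", plain), ("PCA + tree", reduced)]:
    accs[name] = cross_val_score(clf, X, y, cv=5).mean()
    print("%s: %.3f" % (name, accs[name]))
```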

  15. Study on Classification Accuracy Inspection of Land Cover Data Aided by Automatic Image Change Detection Technology

    Science.gov (United States)

    Xie, W.-J.; Zhang, L.; Chen, H.-P.; Zhou, J.; Mao, W.-J.

    2018-04-01

    The purpose of carrying out national geographic conditions monitoring is to obtain information on surface changes caused by human social and economic activities, so that the geographic information can be used to offer better services to government, enterprise and the public. Land cover data contains detailed geographic conditions information and has thus been listed as one of the important achievements of the national geographic conditions monitoring project. At present, the main issue in the production of land cover data is how to improve classification accuracy. For land cover data quality inspection and acceptance, classification accuracy is also an important check point. So far, classification accuracy inspection in the project has been based mainly on human-computer interaction or manual inspection, which are time consuming and laborious. By harnessing automatic high-resolution remote sensing image change detection technology based on the ERDAS IMAGINE platform, this paper carried out a classification accuracy inspection test of land cover data in the project and presents a corresponding technical route, which includes data pre-processing, change detection, result output and information extraction. The result of the quality inspection test shows the effectiveness of the technical route, which can meet the inspection needs for the two typical errors, namely missing and incorrect updates, effectively reduces the work intensity of human-computer interaction inspection for quality inspectors, and provides a technical reference for the data production and quality control of land cover data.

  16. The Study of Land Use Classification Based on SPOT6 High Resolution Data

    OpenAIRE

    Wu Song; Jiang Qigang

    2016-01-01

    A method is presented for rapid classification and extraction of land use types in agricultural areas, based on SPOT6 high resolution remote sensing data and the good nonlinear classification ability of the support vector machine. The results show that SPOT6 high resolution remote sensing data can support efficient land classification: the overall classification accuracy reached 88.79% and the Kappa factor is 0.8632, which means that the classif...

  17. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy. Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thu...

  18. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    Science.gov (United States)

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.

  19. Convolutional Neural Network Achieves Human-level Accuracy in Music Genre Classification

    OpenAIRE

    Dong, Mingwen

    2018-01-01

    Music genre classification is one example of content-based analysis of music signals. Traditionally, human-engineered features were used to automatize this task and 61% accuracy has been achieved in the 10-genre classification. However, it's still below the 70% accuracy that humans could achieve in the same task. Here, we propose a new method that combines knowledge of human perception study in music genre classification and the neurophysiology of the auditory system. The method works by trai...

  20. High current high accuracy IGBT pulse generator

    International Nuclear Information System (INIS)

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles

  1. ASSESSMENT OF LANDSCAPE CHARACTERISTICS ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses. This...

  2. A high accuracy land use/cover retrieval system

    Directory of Open Access Journals (Sweden)

    Alaa Hefnawy

    2012-03-01

    Full Text Available The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established to improve the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high resolution dataset. The human-computer interactive system is based on a modified radial basis function for retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi- and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.

  3. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  4. The Classification of Romanian High-Schools

    Science.gov (United States)

    Ivan, Ion; Milodin, Daniel; Naie, Lucian

    2006-01-01

    The article tackles the issue of classifying high schools from a city, a district, or the whole of Romania. The classification criteria are presented. The National Database of Education is also presented and the application of the criteria is illustrated. An algorithm for high-school multi-rank classification is proposed in order to build classes of…

  5. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    International Nuclear Information System (INIS)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work estimates the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks; as large a number of radiographic patterns as possible was used, together with statistical inference techniques of random selection of samples with and without replacement (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  6. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work estimates the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks; as large a number of radiographic patterns as possible was used, together with statistical inference techniques of random selection of samples with and without replacement (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  7. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    Science.gov (United States)

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques on a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we not only used the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to derive features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied on the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands to discriminate the crops of interest were mainly concentrated on the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
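
    The enhanced feature set (every normalized difference index constructible from any two bands of the time series) is easy to generate; this sketch uses toy reflectances rather than the Landsat-8 series of the study.

```python
import numpy as np

def all_ndis(bands):
    """All normalized difference indices (b_i - b_j)/(b_i + b_j), i < j,
    from a (n_pixels, n_bands) array. In the spectrotemporal setting the
    two bands may come from different image dates of the series."""
    n = bands.shape[1]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    feats = np.stack([(bands[:, i] - bands[:, j]) /
                      (bands[:, i] + bands[:, j] + 1e-9)
                      for i, j in pairs], axis=1)
    return feats, pairs

# 5 pixels x 4 spectral bands (toy reflectances) -> 6 NDI features.
rng = np.random.default_rng(0)
bands = rng.uniform(0.05, 0.6, size=(5, 4))
feats, pairs = all_ndis(bands)
print("NDI band pairs:", pairs)
print("feature matrix shape:", feats.shape)
```

    With k bands stacked over d dates, this yields kd(kd-1)/2 candidate features, which is why the authors turn to lasso- and ridge-penalized LDA.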

  8. Effects of atmospheric correction and pansharpening on LULC classification accuracy using WorldView-2 imagery

    Directory of Open Access Journals (Sweden)

    Chinsu Lin

    2015-05-01

    Full Text Available Changes of Land Use and Land Cover (LULC) affect the atmospheric, climatic, and biological spheres of the earth. An accurate LULC map offers detailed information for resource management and for intergovernmental cooperation in debates on global warming and biodiversity reduction. This paper examined the effects of pansharpening and atmospheric correction on LULC classification. An Object-Based Support Vector Machine (OB-SVM) and a Pixel-Based Maximum Likelihood Classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification if the classification is conducted on the original multispectral image. Pansharpening, however, plays a much more important role in classification accuracy than atmospheric correction: it increased classification accuracy by 12% on average compared to classification without pansharpening. PB-MLC and OB-SVM achieved similar classification rates; this study indicated LULC classification accuracies of 82% with PB-MLC and 89% with OB-SVM. A combination of atmospheric correction, pansharpening, and OB-SVM could offer promising LULC maps from WorldView-2 multispectral and panchromatic images.

  9. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly set by a binomial approximation without use of a pilot sample; the accuracy is thus not estimated but fixed a priori. If the estimated accuracy diverges from the a priori accuracy, the sampling error will deviate from the expected error. Determining sample size from a pilot sample, the theoretically correct procedure, is justified when no prior estimate of accuracy is available for the study area for the remote sensing product concerned.
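    The binomial approximation the abstract refers to can be sketched as follows; the 85% expected accuracy and the ±5% half-width are illustrative values, not figures from the paper.

```python
import math

def accuracy_sample_size(p_expected, half_width, z=1.96):
    """Binomial (normal-approximation) reference-sample size for an
    accuracy assessment: n = z^2 * p * (1 - p) / d^2.  Ideally
    p_expected comes from a pilot sample, as the abstract argues;
    fixing it a priori is the shortcut the authors criticize."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

# Illustrative values: 85% expected accuracy, +/-5% margin, 95% confidence.
n = accuracy_sample_size(0.85, 0.05)  # 196 reference pixels
```

    The formula is most conservative at p = 0.5, so fixing an optimistic a priori accuracy understates the sample actually required.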

  10. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. Models of this type describe expected performance at the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variation as well as to accurately describe a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in those margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  11. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation of these techniques is the difficulty of distinguishing different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≈8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  12. Conceptual Scoring and Classification Accuracy of Vocabulary Testing in Bilingual Children

    Science.gov (United States)

    Anaya, Jissel B.; Peña, Elizabeth D.; Bedore, Lisa M.

    2018-01-01

    Purpose: This study examined the effects of single-language and conceptual scoring on the vocabulary performance of bilingual children with and without specific language impairment. We assessed classification accuracy across 3 scoring methods. Method: Participants included Spanish-English bilingual children (N = 247) aged 5;1 (years;months) to…

  13. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    Science.gov (United States)

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  14. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  15. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    Science.gov (United States)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications, including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River, representing three different dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs; test site 2 represented a densely built-up area; and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2, and the clouds contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps and the intensity and nDSM rasters; 15 homogeneous training areas representing each cover class were chosen, and classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented classification was carried out in eCognition software, which implements both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to

  16. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 m at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera and a Nd:YAG passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...

  17. Impacts of land use/cover classification accuracy on regional climate simulations

    Science.gov (United States)

    Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.

    2007-03-01

    Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3-month period. This study found that land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface had greater control of the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely attains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.

  18. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure such as overall classification accuracy, often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets respectively; both p < 0.05). The accuracies of the classifications that used a stratified sample in validation were smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
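    The kind of significance test quoted in the abstract can be sketched as a two-sample z statistic for the difference between independent accuracy estimates. The accuracies and sample sizes below are made up for illustration; they are not the paper's z = 2.7 / 1.9 results.

```python
import math

def accuracy_z(p1, n1, p2, n2):
    """Normal-approximation z statistic for the difference between two
    independent classification accuracies (proportions correct), as used
    when comparing validation-set designs."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Hypothetical: 90% vs 84% accuracy, each estimated from 500 validation cases.
z = accuracy_z(0.90, 500, 0.84, 500)
```

    A |z| above about 1.96 corresponds to a two-sided p below 0.05 under the normal approximation.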

  19. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Science.gov (United States)

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples

  20. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. The results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  1. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. The results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
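    The permutation test the authors recommend can be sketched as follows. `cv_accuracy` stands in for any callable that reruns the full cross-validation for a given label vector; the fixed-prediction "classifier" below is a toy stand-in, not the paper's pipeline.

```python
import random

def permutation_p_value(labels, cv_accuracy, n_perm=1000, seed=0):
    """Permutation test for a cross-validated accuracy: re-run the whole
    procedure on shuffled labels and count how often the permuted
    accuracy meets or beats the observed one."""
    rng = random.Random(seed)
    observed = cv_accuracy(labels)
    exceed = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if cv_accuracy(shuffled) >= observed:
            exceed += 1
    # Add-one smoothing keeps the estimated p-value away from exactly zero.
    return (exceed + 1) / (n_perm + 1)

# Toy "cross-validation": accuracy of one fixed prediction vector.
fixed_pred = [1, 1, 1, 1, 0, 0, 0, 0]
def toy_cv(labels):
    return sum(p == y for p, y in zip(fixed_pred, labels)) / len(labels)

p = permutation_p_value([1, 1, 1, 1, 0, 0, 0, 0], toy_cv)
```

    Because the labels are shuffled through the same cross-validation procedure, the null distribution reflects whatever dependence the scheme induces, which is exactly what the binomial test ignores.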

  2. Influence of different topographic correction strategies on mountain vegetation classification accuracy in the Lancang Watershed, China

    Science.gov (United States)

    Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun

    2011-01-01

    Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects. The purpose of this study was to compare six different topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as the classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve the overall accuracy of the ETM+ image classification. Nevertheless, at the class level, the accuracy for pine forest could be increased by using topographically corrected images; on the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was able to satisfactorily correct for the topographic effects in severely shadowed areas.
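    Of the six normalizations compared, the C-correction has a compact closed form and can be sketched as below. The radiance, angles, and c value are made up for illustration, not taken from the study.

```python
import math

def c_correction(radiance, solar_zenith_deg, incidence_deg, c):
    """C-correction topographic normalization:
    L_h = L_t * (cos(theta_z) + c) / (cos(i) + c),
    where theta_z is the solar zenith angle, i is the local illumination
    (incidence) angle on the slope, and c is the band-specific empirical
    constant obtained by regressing radiance on cos(i)."""
    cz = math.cos(math.radians(solar_zenith_deg))
    ci = math.cos(math.radians(incidence_deg))
    return radiance * (cz + c) / (ci + c)

# A strongly tilted, shaded pixel (incidence 70 deg) is brightened:
corrected = c_correction(80.0, solar_zenith_deg=30.0, incidence_deg=70.0, c=0.5)
```

    The empirical constant c damps the over-correction that the plain Cosine correction produces at large incidence angles, one reason these methods behave so differently in severely shadowed areas.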

  3. An Object-Oriented Classification Method on High Resolution Satellite Data

    National Research Council Canada - National Science Library

    Xiaoxia, Sun; Jixian, Zhang; Zhengjun, Liu

    2004-01-01

    .... Thereby only the spectral information is used for the classification. High spatial resolution sensors involve a general increase of spatial information, and the accuracy of results may decrease on a per-pixel basis...

  4. High accuracy satellite drag model (HASDM)

    Science.gov (United States)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag in orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  5. Fast and High Accuracy Wire Scanner

    CERN Document Server

    Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A

    2009-01-01

    Scanning of a high intensity particle beam imposes challenging requirements on a Wire Scanner system. It is expected to reach a scanning speed of 20 m.s-1 with a position accuracy of the order of 1 μm. In addition, a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent-magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor, mounted on the rotary shaft inside the vacuum chamber, has to resist a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...

  6. Weakly supervised classification in high energy physics

    International Nuclear Information System (INIS)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel

    2017-01-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  7. Weakly supervised classification in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Dery, Lucio Mwinmaarong [Physics Department, Stanford University,Stanford, CA, 94305 (United States); Nachman, Benjamin [Physics Division, Lawrence Berkeley National Laboratory,1 Cyclotron Rd, Berkeley, CA, 94720 (United States); Rubbo, Francesco; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA, 94025 (United States)

    2017-05-29

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  8. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    Science.gov (United States)

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain-based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracies of machine-learned MDD-vs-Control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional-face matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We used each of these six datasets with linear support vector machine (SVM) binary classifiers for identifying individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  9. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross-validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were: KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
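    The k-fold cross-validation protocol used to rank the candidate algorithms can be sketched generically. The 1-nearest-neighbour classifier and the 1-D toy data below are hypothetical stand-ins, not the study's MSI features or models.

```python
import random

def kfold_accuracy(xs, ys, classify, k=10, seed=0):
    """Mean k-fold cross-validated accuracy.  classify(train_x, train_y,
    test_x) must return predicted labels for test_x."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        preds = classify([xs[i] for i in train], [ys[i] for i in train],
                         [xs[i] for i in fold])
        accs.append(sum(p == ys[i] for p, i in zip(preds, fold)) / len(fold))
    return sum(accs) / len(accs)

# Hypothetical 1-nearest-neighbour classifier on 1-D features.
def one_nn(train_x, train_y, test_x):
    return [train_y[min(range(len(train_x)),
                        key=lambda j: abs(train_x[j] - x))] for x in test_x]

# Two well-separated clusters, so 1-NN should classify every fold perfectly.
xs = [0.1, 0.2, 0.3, 0.4, 2.1, 2.2, 2.3, 2.4, 0.15, 2.15]
ys = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
acc = kfold_accuracy(xs, ys, one_nn, k=5)
```

    Repeating the whole procedure with different shuffles (different seeds), as the study did 100 times per algorithm, gives a distribution of accuracies rather than a single point estimate.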

  10. Electron ray tracing with high accuracy

    International Nuclear Information System (INIS)

    Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.

    1986-01-01

    An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens

  11. Classification Accuracy of a Wearable Activity Tracker for Assessing Sedentary Behavior and Physical Activity in 3–5-Year-Old Children

    Directory of Open Access Journals (Sweden)

    Wonwoo Byun

    2018-03-01

    This study examined the accuracy of the Fitbit activity tracker (FF) for quantifying sedentary behavior (SB) and varying intensities of physical activity (PA) in 3–5-year-old children. Twenty-eight healthy preschool-aged children (girls: 46%; mean age: 4.8 ± 1.0 years) wore the FF and were directly observed while performing a set of various unstructured and structured free-living activities ranging from sedentary to vigorous intensity. The classification accuracy of the FF for measuring SB, light PA (LPA), moderate-to-vigorous PA (MVPA), and total PA (TPA) was examined by calculating Pearson correlation coefficients (r), mean absolute percent error (MAPE), Cohen's kappa (k), sensitivity (Se), specificity (Sp), and area under the receiver operating characteristic curve (ROC-AUC). The classification accuracies of the FF (ROC-AUC) were 0.92, 0.63, 0.77 and 0.92 for SB, LPA, MVPA and TPA, respectively. Similarly, values of kappa, Se, Sp and percentage of correct classification were consistently high for SB and TPA, but low for LPA and MVPA. The FF demonstrated excellent classification accuracy for assessing SB and TPA, but lower accuracy for classifying LPA and MVPA. Our findings suggest that the FF should be considered a valid instrument for assessing time spent sedentary and overall physical activity in preschool-aged children.
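    The agreement statistics reported above (sensitivity, specificity, Cohen's kappa, ROC-AUC) can be computed as sketched below; the labels and scores are illustrative, not the study's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # 1 = sedentary per direct observation
pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])    # tracker's hard classification
score = np.array([.9, .8, .4, .2, .1, .6, .7, .3, .95, .05])  # tracker confidence

tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
kappa = cohen_kappa_score(truth, pred)  # chance-corrected agreement
auc = roc_auc_score(truth, score)       # threshold-free discrimination
print(sensitivity, specificity, kappa, auc)
```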

  12. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures of accuracy, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are less misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed to determine whether the UA of the various categories improves on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images of two different years, 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results demonstrated significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in UA compared to the original images.
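    Producer's and user's accuracy as defined above follow directly from the confusion matrix: PA divides each diagonal entry by its reference (row) total, UA by its classified (column) total. A minimal sketch with an illustrative matrix:

```python
import numpy as np

# Rows: reference (ground truth) classes; columns: classified (mapped) classes.
cm = np.array([[50,  5,  5],
               [10, 70, 10],
               [ 0,  5, 45]])

pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy: correct / reference total
ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy: correct / classified total
print("PA:", pa)
print("UA:", ua)
```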

  13. Speed and accuracy of facial expression classification in avoidant personality disorder: a preliminary study.

    Science.gov (United States)

    Rosenthal, M Zachary; Kim, Kwanguk; Herr, Nathaniel R; Smoski, Moria J; Cheavens, Jennifer S; Lynch, Thomas R; Kosson, David S

    2011-10-01

    The aim of this preliminary study was to examine whether individuals with avoidant personality disorder (APD) could be characterized by deficits in the classification of dynamically presented facial emotional expressions. Using a community sample of adults with APD (n = 17) and non-APD controls (n = 16), the speed and accuracy of facial emotional expression recognition were investigated in a task that morphs facial expressions from neutral to prototypical expressions (Multi-Morph Facial Affect Recognition Task; Blair, Colledge, Murray, & Mitchell, 2001). Results indicated that individuals with APD were significantly more likely than controls to make errors when classifying fully expressed fear. However, no differences were found between groups in the speed of correctly classifying facial emotional expressions. The findings are some of the first to investigate facial emotional processing in a sample of individuals with APD and point to an underlying deficit in processing social cues that may be involved in the maintenance of APD.

  14. Classification of High Spatial Resolution, Hyperspectral ...

    Science.gov (United States)

    EPA announced the availability of the final report, Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA. This report and the associated land use/land cover (LULC) coverage are the result of a collaborative effort among an interdisciplinary team of scientists with the U.S. Environmental Protection Agency's (U.S. EPA's) Office of Research and Development in Cincinnati, Ohio. A primary goal of this project is to enhance the use of geography and spatial analytic tools in risk assessment, and to improve the scientific basis for risk management decisions affecting drinking water and water quality. The land use/land cover classification is derived from 82 flight lines of Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery acquired from July 24 through August 9, 2002 via fixed-wing aircraft.

  15. High-accuracy user identification using EEG biometrics.

    Science.gov (United States)

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved, to more than 96.7%, by joint classification of multiple epochs.
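    The pipeline structure described above, dimensionality reduction followed by a classifier with multiple epochs combined jointly, can be sketched as follows. This is a hedged illustration: synthetic data replaces the ERP epochs, and PCA plus logistic regression are assumed stand-ins for the techniques compared in the paper; the joint rule sums per-class log-probabilities across epochs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, epochs_per_subject, n_features = 5, 20, 56

# Each subject gets a distinct mean feature vector (a stand-in for ERP shape).
centers = rng.normal(scale=3.0, size=(n_subjects, n_features))
X = np.vstack([centers[s] + rng.normal(size=(epochs_per_subject, n_features))
               for s in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), epochs_per_subject)

clf = make_pipeline(PCA(n_components=10),
                    LogisticRegression(max_iter=1000)).fit(X, y)

# Joint classification of several epochs from one subject: sum log-probabilities.
test_epochs = centers[2] + rng.normal(size=(3, n_features))
joint = clf.predict_log_proba(test_epochs).sum(axis=0)
print(int(np.argmax(joint)))  # expected to recover subject index 2
```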

  16. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Computer-aided tongue diagnosis has great potential to play an important role in traditional Chinese medicine (TCM). However, the majority of existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNNs), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in TCM research.

  17. A COMPARISON OF HAZE REMOVAL ALGORITHMS AND THEIR IMPACTS ON CLASSIFICATION ACCURACY FOR LANDSAT IMAGERY

    Directory of Open Access Journals (Sweden)

    Yang Xiao

    The quality of Landsat images in humid areas is considerably degraded by haze in terms of their spectral response pattern, which limits their application using the visible and near-infrared bands. A variety of haze removal algorithms have been proposed to correct the unsatisfactory illumination effects caused by haze contamination. The purpose of this study was to illustrate the differences between two major algorithms, improved homomorphic filtering (HF) and the virtual cloud point (VCP) method, in their effectiveness at correcting spatially varying haze contamination, and to evaluate the impacts of haze removal on land cover classification. A case study exploiting large quantities of Landsat TM images acquired under both clear and hazy conditions in the most humid areas of China proved that both haze removal algorithms perform well in processing Landsat images contaminated by haze. The output of VCP appears more similar to the reference images than that of HF. Moreover, Landsat images with VCP haze removal can improve classification accuracy effectively in comparison to those without haze removal, especially in cloud-contaminated areas.

  18. Accuracy of the all patient refined diagnosis related groups classification system in congenital heart surgery.

    Science.gov (United States)

    Parnell, Aimee S; Shults, Justine; Gaynor, J William; Leonard, Mary B; Dai, Dingwei; Feudtner, Chris

    2014-02-01

    Administrative data are increasingly used to evaluate clinical outcomes and quality of care in pediatric congenital heart surgery (CHS) programs. Several published analyses of large pediatric administrative data sets have relied on the All Patient Refined Diagnosis Related Groups (APR-DRG, version 24) diagnostic classification system. The accuracy of this classification system for patients undergoing CHS is unclear. We performed a retrospective cohort study of all 14,098 patients 0 to 5 years of age undergoing any of six selected congenital heart operations, ranging in complexity from isolated closure of a ventricular septal defect to single-ventricle palliation, at 40 tertiary-care pediatric centers in the Pediatric Health Information Systems database between 2007 and 2010. Assigned APR-DRGs (cardiac versus noncardiac) were compared using χ2 or Fisher's exact tests between patients admitted during the first day of life and those admitted later, and between patients receiving extracorporeal membrane oxygenation support and those not. Recursive partitioning was used to assess the greatest determinants of APR-DRG type in the model. Every patient admitted on day 1 of life was assigned to a noncardiac APR-DRG, and patients assigned to a noncardiac APR-DRG experienced significantly increased mortality. APR-DRG coding has systematic misclassifications, which may result in inaccurate reporting of CHS case volumes and mortality. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Comparison of accuracy of fibrosis degree classifications by liver biopsy and non-invasive tests in chronic hepatitis C.

    Science.gov (United States)

    Boursier, Jérôme; Bertrais, Sandrine; Oberti, Frédéric; Gallois, Yves; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Zarski, Jean-Pierre; Calès, Paul

    2011-11-30

    Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of the detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill levels, and non-invasive tests. Results were expressed as percentages of correctly classified patients. In population #1, including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% for local pathologists vs. 82.2% for experts. Comparing blood tests, the discrepancy scores of the detailed fibrosis classification, which take the error magnitude into account, were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37). Comparing blood tests and Fibroscan, accuracies of the detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%). The detailed fibrosis classification of the best-performing blood test outperforms liver biopsy read by a local pathologist, i.e., in clinical practice; however, the classification precision is apparently lower. This detailed classification accuracy is much lower than that of significant fibrosis with Fibroscan and even Fibrotest, but higher with FibroMeter3G. FibroMeter classification accuracy was significantly higher than those of the other non-invasive tests. Finally, for hepatitis C evaluation in clinical practice, fibrosis degree can be evaluated using an accurate blood test.

  20. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    In recent years, the spatial resolution of remote sensing images has improved greatly. However, a higher spatial resolution does not always lead to better automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances research on digital image analysis and the applications of high resolution remote sensing images.

  1. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants.

    Science.gov (United States)

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently, computational miRNA detection is mainly performed using machine learning, in particular two-class classification. For machine learning, the miRNAs need to be parametrized, and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data are hard to come by. Therefore, it seems appropriate to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to a 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection.
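    One-class classification trained on positive examples only, preceded by a feature selection step, can be sketched as below. The variance-threshold selector and one-class SVM are assumptions for illustration, not the specific feature selection methods evaluated in the paper, and the data are synthetic:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_pos = rng.normal(size=(300, 40))  # positive "miRNA" feature vectors only
X_pos[:, 20:] *= 0.01               # 20 near-constant, uninformative features

# Feature selection drops the near-constant columns before the one-class model.
model = make_pipeline(VarianceThreshold(threshold=0.1),
                      StandardScaler(),
                      OneClassSVM(nu=0.1, gamma="scale")).fit(X_pos[:250])

# Fraction of held-out positives accepted as inliers (class +1).
accept_rate = (model.predict(X_pos[250:]) == 1).mean()
print(accept_rate)
```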

  2. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Directory of Open Access Journals (Sweden)

    Lianqing Zhu

    2018-01-01

    In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectral data were first divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model was built, and the most appropriate spectral region was selected for each pair of classes based on a new evaluation criterion considering both the classification error rate and the predictive ability under a leave-one-out cross-validation scheme. Finally, each observation is assigned to a class according to the statistical analysis of the classification results of the local models built on the selected variables. The performance of the proposed method was demonstrated on near-infrared spectra of cultivated or wild Salvia miltiorrhiza collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis were employed as reference classification models. Experimental results showed that the classification performance of the model with local variable selection was clearly better than that without variable selection.

  3. Improving mental task classification by adding high frequency band information.

    Science.gov (United States)

    Zhang, Li; He, Wei; He, Chuanhong; Wang, Ping

    2010-02-01

    Features extracted from the delta, theta, alpha, beta and gamma bands spanning the low-frequency range are commonly used to classify scalp-recorded electroencephalogram (EEG) signals for designing brain-computer interfaces (BCIs), while higher frequencies are often neglected as noise. In this paper, we implemented an experimental validation to demonstrate that high-frequency components can provide helpful information for improving the performance of a mental task based BCI. Electromyography (EMG) and electrooculography (EOG) artifacts were removed using blind source separation (BSS) techniques. Frequency band powers and asymmetry ratios from the high-frequency band (40-100 Hz), together with those from the lower frequency bands, were used to represent EEG features. Finally, Fisher discriminant analysis (FDA) combined with the Mahalanobis distance was used as the classifier. In this study, four types of classifications were performed using EEG signals recorded from four subjects during five mental tasks. We obtained significantly higher classification accuracy by adding the high-frequency band features compared to using the low-frequency bands alone, which demonstrates that the information in high-frequency components of scalp-recorded EEG is valuable for mental task based BCIs.
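    Band-power features like those described above, including a 40-100 Hz high-frequency band, can be extracted from a sampled signal with Welch's PSD estimate. This is a minimal assumed sketch, not the paper's code; the test signal mixes a 10 Hz (alpha-range) tone with a weaker 60 Hz (high-band) tone:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

f, psd = welch(sig, fs=fs, nperseg=512)   # power spectral density estimate

def band_power(f, psd, lo, hi):
    """Integrate the PSD over the [lo, hi) Hz band."""
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

bands = {"alpha": (8, 13), "beta": (13, 30), "high": (40, 100)}
features = {name: band_power(f, psd, lo, hi) for name, (lo, hi) in bands.items()}
print(features)
```

Asymmetry ratios would then be formed from these band powers across homologous electrode pairs.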

  4. Measurement Properties and Classification Accuracy of Two Spanish Parent Surveys of Language Development for Preschool-Age Children

    Science.gov (United States)

    Guiberson, Mark; Rodriguez, Barbara L.

    2010-01-01

    Purpose: To describe the concurrent validity and classification accuracy of 2 Spanish parent surveys of language development, the Spanish Ages and Stages Questionnaire (ASQ; Squires, Potter, & Bricker, 1999) and the Pilot Inventario-III (Pilot INV-III; Guiberson, 2008a). Method: Forty-eight Spanish-speaking parents of preschool-age children…

  5. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In high resolution data classification, combining texture features with spectral bands can effectively improve classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are classification of spectral/textural images at fixed window sizes from 3×3 to 15×15 and comparison of the posterior probability values for every pixel; the class with the highest probability is assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves classification accuracy compared to methods based on a single fixed window size.
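    The fusion rule described above can be sketched as a per-pixel comparison of posterior probabilities across window sizes, with the label carrying the single highest posterior winning. A toy illustration with random posteriors standing in for the per-scale classifier outputs:

```python
import numpy as np

n_pixels, n_classes = 4, 3
rng = np.random.default_rng(0)

# posteriors[s][p, c]: probability of class c at pixel p for window scale s
posteriors = [rng.dirichlet(np.ones(n_classes), size=n_pixels)
              for _ in (3, 7, 15)]  # e.g. 3x3, 7x7, 15x15 windows

stacked = np.stack(posteriors)     # shape (n_scales, n_pixels, n_classes)
best = stacked.max(axis=0)         # best posterior per pixel and class
labels = best.argmax(axis=1)       # winning class per pixel
print(labels)
```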

  6. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota

    Science.gov (United States)

    Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.

    2013-01-01

    Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do classification accuracy and confidence in mapping wetlands compare across different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization from L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.
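    The random-forest workflow discussed above, training bootstrapped trees and then ranking input variables by importance, can be sketched as follows. The feature names echo the study's key inputs, but the data and target here are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
names = ["elevation", "curvature", "hydric_soil", "nir_spring", "radar_hv"]
X = rng.normal(size=(500, len(names)))
# Synthetic "wetland" label driven mostly by terrain, secondarily by soils.
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                            random_state=0).fit(X, y)

print(f"OOB accuracy: {rf.oob_score_:.2f}")  # out-of-bag generalization estimate
for imp, name in sorted(zip(rf.feature_importances_, names), reverse=True):
    print(f"{name}: {imp:.3f}")              # variable importance ranking
```

Per-pixel classification confidence can likewise be read off `rf.predict_proba`, the fraction of trees voting for each class.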

  7. Accuracy Analysis Comparison of Supervised Classification Methods for Anomaly Detection on Levees Using SAR Imagery

    Directory of Open Access Journals (Sweden)

    Ramakalavathi Marapareddy

    2017-10-01

    This paper analyzes the use of synthetic aperture radar (SAR) imagery to support levee condition assessment by detecting potential slide areas in an efficient and cost-effective manner. Levees are prone to failure in the form of internal erosion within the earthen structure and landslides (also called slough or slump slides). If not repaired, slough slides may lead to levee failures. In this paper, we compare the accuracy of the supervised classification methods minimum distance (MD) using Euclidean and Mahalanobis distance, support vector machine (SVM), and maximum likelihood (ML), using SAR technology to detect slough slides on earthen levees. The effectiveness of the algorithms was demonstrated using quad-polarimetric L-band SAR imagery from the NASA Jet Propulsion Laboratory's (JPL's) uninhabited aerial vehicle synthetic aperture radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the Southern USA, where earthen flood control levees are maintained by the US Army Corps of Engineers.
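    The minimum distance classifiers compared above assign each pixel to the class whose training mean is nearest, under either Euclidean or Mahalanobis distance. A minimal sketch with synthetic two-band features (the class means and pixels are illustrative, not UAVSAR data):

```python
import numpy as np

def min_distance_classify(X, means, cov=None):
    """Nearest-mean labels; Mahalanobis distance if a covariance is given."""
    if cov is None:
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    else:
        inv = np.linalg.inv(cov)
        diff = X[:, None, :] - means[None, :, :]
        d = np.sqrt(np.einsum("npk,kl,npl->np", diff, inv, diff))
    return d.argmin(axis=1)

means = np.array([[0.0, 0.0], [3.0, 3.0]])  # e.g. intact levee vs. slide area
X = np.array([[0.2, -0.1], [2.8, 3.1], [1.4, 1.7]])

print(min_distance_classify(X, means))                 # Euclidean distance
print(min_distance_classify(X, means, cov=np.eye(2)))  # Mahalanobis distance
```

With an identity covariance the two distances coincide; a covariance estimated from training pixels lets Mahalanobis distance account for correlated, unequally scaled bands.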

  8. High accuracy in silico sulfotransferase models.

    Science.gov (United States)

    Cook, Ian; Wang, Ting; Falany, Charles N; Leyh, Thomas S

    2013-11-29

    Predicting enzymatic behavior in silico is an integral part of our efforts to understand biology. Hundreds of millions of compounds lie in targeted in silico libraries waiting for their metabolic potential to be discovered. In silico "enzymes" capable of accurately determining whether compounds can inhibit or react are often the missing piece in this endeavor. This problem has now been solved for the cytosolic sulfotransferases (SULTs). SULTs regulate the bioactivities of thousands of compounds--endogenous metabolites, drugs and other xenobiotics--by transferring the sulfuryl moiety (SO3) from 3'-phosphoadenosine 5'-phosphosulfate to the hydroxyls and primary amines of these acceptors. SULT1A1 and 2A1 catalyze the majority of sulfation that occurs during human Phase II metabolism. Here, recent insights into the structure and dynamics of SULT binding and reactivity are incorporated into in silico models of 1A1 and 2A1 that are used to identify substrates and inhibitors in a structurally diverse set of 1,455 high-value compounds: the FDA-approved small-molecule drugs. The SULT1A1 models predict 76 substrates. Of these, 53 were known substrates. Of the remaining 23, 21 were tested, and all were sulfated. The SULT2A1 models predict 22 substrates, 14 of which are known substrates. Of the remaining 8, 4 were tested, and all are substrates. The models proved to be 100% accurate in identifying substrates and made no false predictions at Kd thresholds of 100 μM. In total, 23 "new" drug substrates were identified, and new linkages to drug inhibitors are predicted. It now appears to be possible to accurately predict Phase II sulfonation in silico.

  9. Increasing accuracy of vehicle detection from conventional vehicle detectors - counts, speeds, classification, and travel time.

    Science.gov (United States)

    2014-09-01

    Vehicle classification is an important traffic parameter for transportation planning and infrastructure : management. Length-based vehicle classification from dual loop detectors is among the lowest cost : technologies commonly used for collecting th...

  10. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have proceeded alongside developments in technology, especially in sensors and platforms. Many images now offer high spatial and radiometric resolution and therefore carry much more information. Vegetation analysis, such as mapping floristic composition, benefits greatly from these developments. Floristic composition can be interpreted using methods such as pixel based classification and object based classification. The problem with pixel based methods on high spatial resolution imagery is the salt-and-pepper noise that appears in the classification result. The purpose of this research is to compare the effectiveness of pixel based classification and object based classification for vegetation composition mapping on high resolution Worldview-2 imagery. The results show that pixel based classification using a majority 5×5 kernel window gives the highest accuracy among the classifications tested. The highest accuracy is 73.32%, obtained from a Worldview-2 image radiometrically corrected to surface reflectance; however, in terms of per-class accuracy, object based classification is the best among the methods. From the standpoint of effectiveness, pixel based classification is more effective than object based classification for vegetation composition mapping in the Tidar forest.

  11. Diagnostic performance of whole brain volume perfusion CT in intra-axial brain tumors: Preoperative classification accuracy and histopathologic correlation

    International Nuclear Information System (INIS)

    Xyda, Argyro; Haberland, Ulrike; Klotz, Ernst; Jung, Klaus; Bock, Hans Christoph; Schramm, Ramona; Knauth, Michael; Schramm, Peter

    2012-01-01

    Background: To evaluate the preoperative diagnostic power and classification accuracy of perfusion parameters derived from whole brain volume perfusion CT (VPCT) in patients with cerebral tumors. Methods: Sixty-three patients (31 male, 32 female; mean age 55.6 ± 13.9 years), with MRI findings suspicious for cerebral lesions, underwent VPCT. Two readers independently evaluated the VPCT data. Volumes of interest (VOIs) were marked circumscribing the tumor according to maximum intensity projection volumes, and then mapped automatically onto the cerebral blood volume (CBV), cerebral blood flow (CBF) and permeability Ktrans perfusion datasets. A second VOI was placed in the contralateral cortex as a control. Correlations among perfusion values, tumor grade, cerebral hemisphere and VOIs were evaluated. Moreover, the diagnostic power of the VPCT parameters, by means of positive and negative predictive values, was analyzed. Results: Our cohort included 32 high-grade gliomas (WHO III/IV), 18 low-grade gliomas (WHO I/II), 6 primary cerebral lymphomas, 4 metastases and 3 tumor-like lesions. Ktrans demonstrated the highest sensitivity, specificity and positive predictive value, with a cut-off point of 2.21 mL/100 mL/min, for both the comparison between high-grade and low-grade tumors and that between low-grade tumors and primary cerebral lymphomas. However, for the differentiation between high-grade gliomas and primary cerebral lymphomas, CBF and CBV proved to have 100% specificity and 100% positive predictive value, identifying preoperatively all the histopathologically proven high-grade gliomas. Conclusion: Volumetric perfusion data enable hemodynamic assessment of the entire tumor extent and provide a method of preoperative differentiation among intra-axial cerebral tumors with promising diagnostic accuracy.

  12. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  13. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) is widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The improvement is at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and higher-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
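
    The calibration above rests on Kalman filtering of IMU error states. As an illustration only (the paper's filter has 51 states and a dual-axis turntable model), here is a minimal scalar Kalman filter estimating a single constant sensor bias from noisy measurements; the bias value and noise level are invented for the example:

```python
import random

def kalman_constant_bias(measurements, meas_var, init_var=1.0):
    """Scalar Kalman filter for a constant parameter (e.g. a gyro bias).

    State model: x_k = x_{k-1} (the bias is constant), so the filter
    reduces to a recursive, variance-weighted average of the measurements."""
    x, p = 0.0, init_var           # state estimate and its variance
    for z in measurements:
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # measurement update
        p = (1.0 - k) * p          # variance shrinks with each update
    return x, p

random.seed(0)
true_bias = 0.05                   # deg/h, hypothetical value
zs = [true_bias + random.gauss(0.0, 0.02) for _ in range(500)]
est, var = kalman_constant_bias(zs, meas_var=0.02 ** 2)
```

    After 500 updates the estimate converges close to the simulated bias and the posterior variance becomes very small, which is the mechanism the full multi-state filter exploits for each calibration parameter.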

  14. Bagging Approach for Increasing Classification Accuracy of CART on Family Participation Prediction in Implementation of Elderly Family Development Program

    Directory of Open Access Journals (Sweden)

    Wisoedhanie Widi Anugrahanti

    2017-06-01

    Full Text Available Classification and Regression Tree (CART) is a machine learning method in which data exploration is performed with a decision tree technique. CART is a classification technique with a binary recursive partitioning algorithm, in which a group of data gathered in a space called a node is split into two child nodes (Lewis, 2000). The aim of this study was to predict family participation in the Elderly Family Development program based on family behavior in providing physical, mental and social care for the elderly. The accuracy of the family participation classification using the Bagging CART method was calculated based on the 1-APER value, sensitivity, specificity, and G-means. With the CART method, a classification accuracy of 97.41% was obtained, with an Apparent Error Rate (APER) of 2.59%. The most important determinants of family behavior serving as splitters were society participation (100.00000), medical examination (98.95988), providing nutritious food (68.60476), establishing communication (67.19877) and worship (57.36587). To improve the stability and accuracy of the CART prediction, CART Bootstrap Aggregating (Bagging) was used, with a 100% accuracy result. Bagging CART correctly classified a total of 590 families (84.77%) into the class implementing the Elderly Family Development program.
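
    Bootstrap aggregating as used above can be sketched in a few lines: resample the training set with replacement, fit one tree per resample, and combine predictions by majority vote. The sketch below uses depth-1 stumps on an invented one-dimensional dataset rather than full CART trees:

```python
import random
from collections import Counter

def fit_stump(xs, ys):
    """Best single-threshold classifier on a 1-D feature (a depth-1 CART)."""
    best = None
    for t in sorted(set(xs)):
        for lo, hi in ((0, 1), (1, 0)):
            acc = sum((hi if x > t else lo) == y for x, y in zip(xs, ys))
            if best is None or acc > best[0]:
                best = (acc, t, lo, hi)
    _, t, lo, hi = best
    return lambda x: hi if x > t else lo

def bagged_predict(xs, ys, query, n_trees=25, seed=1):
    """Bootstrap-aggregate stumps and return the majority-vote label."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]  # bootstrap sample
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        votes.append(stump(query))
    return Counter(votes).most_common(1)[0][0]

# toy data: label 1 when the feature exceeds ~5
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
```

    The vote over many resampled trees is what stabilizes the prediction relative to a single CART fit.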

  15. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarse segmentation at higher scale parameters and fine segmentation at lower ones. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required objects from an arbitrary number of objects within images, given ground truth data for training.

  16. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    Science.gov (United States)

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
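
    MUSCLE's first stage estimates pairwise distances by k-mer counting before any alignment is computed. A toy version of such a distance (a simplification for illustration, not MUSCLE's exact formula) counts the k-mers two sequences share:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Multiset of overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_distance(a, b, k=3):
    """Distance in [0, 1]: 1 minus the fraction of shared k-mers,
    so identical sequences score 0 without any alignment step."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum((ca & cb).values())          # multiset intersection
    denom = min(len(a), len(b)) - k + 1       # k-mers in the shorter sequence
    return 1.0 - shared / denom

identical = kmer_distance("GATTACAGATTACA", "GATTACAGATTACA")
related   = kmer_distance("GATTACAGATTACA", "GATTACAGATTCCA")
unrelated = kmer_distance("AAAAAAAAAA", "CCCCCCCCCC")
```

    Because no alignment is needed, such distances can be computed for all sequence pairs very quickly, which is what makes the progressive first stage fast.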

  17. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effect and mixed pixel information. A linear mixture simulated model experiment is generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error is computed for each pixel value for three generated images with SPOT data. Each pixel can be either correctly classified or misclassified. The results showed a great improvement in classification accuracy; for example, in image 1, the proportion of pixels misclassified due to atmospheric noise was 41%. Subsequent to degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
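
    The simulation described above can be sketched directly: draw random class cover proportions, normalize them to sum to 1.0, mix endmember spectra linearly, and add Gaussian noise. The endmember reflectances and noise level below are invented for illustration:

```python
import random

random.seed(42)

# hypothetical endmember spectra (reflectance in 3 bands) for 3 cover classes
endmembers = {
    "water":      [0.05, 0.04, 0.02],
    "vegetation": [0.04, 0.45, 0.25],
    "soil":       [0.20, 0.30, 0.40],
}

def random_proportions(n):
    """Random class cover proportions, normalized to sum to 1.0."""
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    return [r / s for r in raw]

def mix_pixel(props, noise_sd=0.0):
    """Linear mixture of the endmembers plus optional Gaussian noise."""
    names = list(endmembers)
    pixel = [sum(p * endmembers[n][b] for p, n in zip(props, names))
             for b in range(3)]
    return [v + random.gauss(0.0, noise_sd) for v in pixel]

props = random_proportions(3)
clean = mix_pixel(props)                  # noise-free mixed pixel
noisy = mix_pixel(props, noise_sd=0.02)   # simulated atmospheric noise
```

    Classifying `noisy` versus `clean` pixels against the known proportions is what lets the experiment quantify how much the added noise degrades accuracy.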

  18. Land cover classification accuracy from electro-optical, X, C, and L-band Synthetic Aperture Radar data fusion

    Science.gov (United States)

    Hammann, Mark Gregory

    The fusion of electro-optical (EO) multi-spectral satellite imagery with Synthetic Aperture Radar (SAR) data was explored with the working hypothesis that the addition of multi-band SAR will increase the land-cover (LC) classification accuracy compared to EO alone. Three satellite sources for SAR imagery were used: X-band from TerraSAR-X, C-band from RADARSAT-2, and L-band from PALSAR. Images from the RapidEye satellites were the source of the EO imagery. Imagery from the GeoEye-1 and WorldView-2 satellites aided the selection of ground truth. Three study areas were chosen: Wad Medani, Sudan; Campinas, Brazil; and Fresno-Kings Counties, USA. The EO imagery was radiometrically calibrated, atmospherically compensated, orthorectified, co-registered, and clipped to a common area of interest (AOI). The SAR imagery was radiometrically calibrated and geometrically corrected for terrain and incidence angle by converting to ground range and Sigma Naught (σ0). The original SAR HH data were included in the fused image stack after despeckling with a 3x3 Enhanced Lee filter. The variance and Gray-Level Co-occurrence Matrix (GLCM) texture measures of contrast, entropy, and correlation were derived from the non-despeckled SAR HH bands. Data fusion was done with layer stacking and all data were resampled to a common spatial resolution. The Support Vector Machine (SVM) decision rule was used for the supervised classifications. Similar LC classes were identified and tested for each study area. For Wad Medani, nine classes were tested: low and medium intensity urban, sparse forest, water, barren ground, and four agriculture classes (fallow, bare agricultural ground, green crops, and orchards). For Campinas, Brazil, five generic classes were tested: urban, agriculture, forest, water, and barren ground. For the Fresno-Kings Counties location 11 classes were studied: three generic classes (urban, water, barren land), and eight specific crops. In all cases the addition of SAR to EO resulted
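
    The despeckling step mentioned above can be illustrated with a basic Lee filter (a simplification: the study used the Enhanced Lee variant, which adds thresholded regimes on top of the same local-statistics weighting). Each pixel is pulled toward its 3x3 window mean; when the assumed noise variance dominates the local variance the pixel collapses to the mean, while strongly textured areas are left closer to their original value:

```python
def lee_filter(img, noise_var):
    """Simplified 3x3 Lee despeckle filter on a 2-D list of floats.

    out = mean + k * (pixel - mean), with k driven by local statistics:
    k -> 0 in regions whose variance is explained by noise (smoothing),
    k -> 1 where the local signal variance dominates (detail preserved)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are left untouched
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = [img[a][b] for a in (i - 1, i, i + 1)
                             for b in (j - 1, j, j + 1)]
            mean = sum(win) / 9.0
            var = sum((v - mean) ** 2 for v in win) / 9.0
            k = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i][j] = mean + k * (img[i][j] - mean)
    return out

# homogeneous region with one speckle spike; assumed noise variance is
# larger than the local signal variance, so the spike is smoothed away
img = [[1.0] * 5 for _ in range(5)]
img[2][2] = 5.0
smoothed = lee_filter(img, noise_var=2.0)
```

    With a smaller assumed noise variance the same filter would preserve the spike, which is why the study derived its texture measures from the non-despeckled bands.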

  19. Comparison of accuracy of fibrosis degree classifications by liver biopsy and non-invasive tests in chronic hepatitis C

    Directory of Open Access Journals (Sweden)

    Boursier Jérôme

    2011-11-01

    Full Text Available Abstract Background Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Methods Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill-levels and non-invasive tests. Results were expressed as percentages of correctly classified patients. Results In population #1 including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% in local pathologists vs. 82.2% (p < 10^-3) in single expert pathologist. Significant discrepancy (≥ 2 FM vs reference histological result) rates were: Fibrotest: 17.2%, FibroMeter2G: 5.6%, local pathologists: 4.9%, FibroMeter3G: 0.5%, expert pathologist: 0% (p < 10^-3). In population #2 including 1,056 patients and comparing blood tests, the discrepancy scores, taking into account the error magnitude, of detailed fibrosis classification were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37, p < 10^-3) or Fibrotest (0.84 ± 0.80, p < 10^-3). In population #3 (and #4) including 458 (359) patients and comparing blood tests and Fibroscan, accuracies of detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%), p < 10^-3 (p < 10^-3). Significant discrepancy (≥ 2 FM) rates were, respectively: Fibrotest: 21.3% (22.2%), Fibroscan: 12.9% (12.3%), FibroMeter2G: 5.7% (6

  20. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    OpenAIRE

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spec...

  1. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, and the sensitivity and specificity of each model were assessed. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
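
    The 10-fold cross-validation procedure used for the comparison can be sketched generically: shuffle the sample indices, split them into ten folds, and for each fold train on the other nine and test on the held-out one. The threshold "learner" below is an invented stand-in for the ANN/CART/SVM/k-NN models compared in the paper:

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle 0..n-1 and split into k (almost) equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(fit, xs, ys, k=10):
    """Per-fold accuracies: train on k-1 folds, test on the held-out fold."""
    folds = k_fold_indices(len(xs), k)
    accs = []
    for fold in folds:
        train = [i for i in range(len(xs)) if i not in fold]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        correct = sum(model(xs[i]) == ys[i] for i in fold)
        accs.append(correct / len(fold))
    return accs

# toy 1-D problem and a trivial threshold "learner"
def fit_threshold(xs, ys):
    t = sum(xs) / len(xs)                 # split at the training mean
    return lambda x: 1 if x > t else 0

xs = [float(v) for v in range(20)]
ys = [0] * 10 + [1] * 10                  # label 1 for the upper half
accs = cross_val_accuracy(fit_threshold, xs, ys)
```

    Collecting the per-fold accuracies for each method is what enables the pairwise t-tests reported in the study.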

  2. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  3. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  4. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  5. Testing the Potential of Vegetation Indices for Land Use/cover Classification Using High Resolution Data

    Science.gov (United States)

    Karakacan Kuzucu, A.; Bektas Balcik, F.

    2017-11-01

    Accurate and reliable land use/land cover (LULC) information obtained by remote sensing technology is necessary in many applications such as environmental monitoring, agricultural management, urban planning, hydrological applications, soil management, vegetation condition study and suitability analysis. However, obtaining this information remains a challenge, especially in heterogeneous landscapes covering urban and rural areas, due to spectrally similar LULC features. In parallel with technological developments, supplementary data such as satellite-derived spectral indices have begun to be used as additional bands in classification to produce data with high accuracy. The aim of this research is to test the potential of spectral vegetation indices in combination with supervised classification methods to extract reliable LULC information from SPOT 7 multispectral imagery. The Normalized Difference Vegetation Index (NDVI), the Ratio Vegetation Index (RATIO) and the Soil Adjusted Vegetation Index (SAVI) were the three vegetation indices used in this study. The classical maximum likelihood classifier (MLC) and the support vector machine (SVM) algorithm were applied to classify the SPOT 7 image. The selected study region, Catalca, is located northwest of Istanbul, Turkey, and has a complex landscape covering artificial surfaces, forest and natural areas, agricultural fields, quarry/mining areas, pasture/scrubland and water bodies. Accuracy assessment of all classified images was performed through overall accuracy and the kappa coefficient. The results indicated that the incorporation of these three vegetation indices decreased the classification accuracy for both the MLC and SVM classifications. In addition, the maximum likelihood classification slightly outperformed the support vector machine classification approach in both overall accuracy and kappa statistics.
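
    The three indices used as additional bands are simple per-pixel band combinations. A sketch with the standard formulas (SAVI with the usual soil-brightness factor L = 0.5; the reflectance values are invented):

```python
def vegetation_indices(red, nir, soil_l=0.5):
    """NDVI, simple band RATIO, and SAVI for one pixel.

    Inputs are surface reflectances in [0, 1]; L = 0.5 is the
    commonly used SAVI soil-brightness correction factor."""
    ndvi = (nir - red) / (nir + red)
    ratio = nir / red
    savi = (1.0 + soil_l) * (nir - red) / (nir + red + soil_l)
    return ndvi, ratio, savi

def stack_index_bands(pixels):
    """Append the three index values to each [red, nir] pixel,
    mimicking the use of indices as extra classification bands."""
    return [px + list(vegetation_indices(px[0], px[1])) for px in pixels]

veg  = vegetation_indices(red=0.05, nir=0.50)   # dense vegetation
bare = vegetation_indices(red=0.30, nir=0.35)   # bare soil
stacked = stack_index_bands([[0.05, 0.50], [0.30, 0.35]])
```

    Since all three indices are functions of the same red and NIR bands, they add little independent information, which is consistent with the study's finding that stacking them did not help the classifiers.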

  6. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images plays a significant role in agricultural management and estimation. Due to the complexity and fragmentation of the features and surroundings at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural applications. Therefore, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNNs). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing of the CNN using the MATLAB deep learning toolbox, crop classification finally reached an accuracy of 99.66% after gradual parameter optimization during training. By improving the accuracy of image classification and recognition, applications of CNNs provide a reference for the field of remote sensing in PA.

  7. Classification and Accuracy Assessment for Coarse Resolution Mapping within the Great Lakes Basin, USA

    Science.gov (United States)

    This study applied a phenology-based land-cover classification approach across the Laurentian Great Lakes Basin (GLB) using time-series data consisting of 23 Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) composite images (250 ...

  8. IMPACTS OF PATCH SIZE AND LAND COVER HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    Science.gov (United States)

    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of miss-classifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses,...

  9. Accuracy of automated classification of major depressive disorder as a function of symptom severity

    Directory of Open Access Journals (Sweden)

    Rajamannar Ramasubbu, MD, FRCPC, MSc

    2016-01-01

    Conclusions: Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  10. Impact of geometry and viewing angle on classification accuracy of 2D based analysis of dysmorphic faces.

    Science.gov (United States)

    Vollmar, Tobias; Maus, Baerbel; Wurtz, Rolf P; Gillessen-Kaesbach, Gabriele; Horsthemke, Bernhard; Wieczorek, Dagmar; Boehringer, Stefan

    2008-01-01

    Digital image analysis of faces has been demonstrated to be effective in a small number of syndromes. In this paper we investigate several aspects that help bring these methods closer to clinical application. First, we investigate the impact of increasing the number of syndromes from 10 to 14 as compared to an earlier study. Second, we include a side-view pose in the analysis, and third, we scrutinize the effect of geometry information. Picture analysis uses a Gabor wavelet transform, standardization of landmark coordinates and subsequent statistical analysis. We demonstrate that classification accuracy drops from 76% for 10 syndromes to 70% for 14 syndromes for frontal images. Including side-views again achieves an accuracy of 76%. Geometry performs excellently, with 85% for combined poses. Combining wavelets and geometry for both poses increases accuracy to 93%. In conclusion, a larger number of syndromes can be handled effectively by means of image analysis.

  11. High accuracy wavelength calibration for a scanning visible spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ≈0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (≈0.005 Å) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  12. Texture classification of vegetation cover in high altitude wetlands zone

    International Nuclear Information System (INIS)

    Wentao, Zou; Bingfang, Wu; Hongbo, Ju; Hua, Liu

    2014-01-01

    The aim of this study was to investigate the utility of datasets composed of texture measures and other features for the classification of vegetation cover, specifically wetlands. The QUEST decision tree classifier was applied to a SPOT-5 image sub-scene covering the typical wetlands area in the Three River Sources region in Qinghai province, China. The dataset used for the classification comprised: (1) spectral data and the components of principal component analysis; (2) texture measures derived on a pixel basis; (3) DEM and other ancillary data covering the research area. Image texture is an important characteristic of remote sensing images; it represents the spatial variation of spectral brightness in digital numbers. When the spectral information is not enough to separate the different land covers, texture information can be used to increase the classification accuracy. The texture measures used in this study were calculated from the GLCM (Gray Level Co-occurrence Matrix); eight frequently used measures were chosen for the classification procedure. The results showed that variance, mean and entropy calculated by GLCM with a 9 × 9 window were effective in distinguishing different vegetation types in the wetlands zone. The overall accuracy of this method was 84.19% and the Kappa coefficient was 0.8261. The results indicated that the introduction of texture measures improved the overall accuracy by 12.05% and the overall kappa coefficient by 0.1407 compared with the result using only spectral and ancillary data.
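
    The GLCM-based measures can be sketched for a single pixel offset: count co-occurring gray-level pairs, normalize them to probabilities, then derive contrast and entropy. On the toy patches below, a homogeneous region gives zero for both measures while an alternating pattern maximizes them:

```python
import math
from collections import Counter

def glcm(img, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for i in range(h):
        for j in range(w):
            a, b = i + dy, j + dx
            if 0 <= a < h and 0 <= b < w:
                pairs[(img[i][j], img[a][b])] += 1
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def glcm_contrast(p):
    """Sum of squared gray-level differences, weighted by probability."""
    return sum(q * (i - j) ** 2 for (i, j), q in p.items())

def glcm_entropy(p):
    """Shannon entropy of the co-occurrence distribution."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

flat    = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # homogeneous texture
checker = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # alternating texture

p_flat, p_chk = glcm(flat), glcm(checker)
```

    In practice these statistics are computed per pixel over a moving window (9 × 9 in the study) and stacked as extra bands for the classifier.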

  13. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for cancer tissues to be expertly identified and classified in a rapid and timely manner, both to assure fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of their combination. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset for feature selection (29 genes selected), and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed the other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are highly useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  15. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is critically important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The optimal number of features, or equivalently the threshold value of the test statistic, is chosen based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
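
    The screening step behind FAIR ranks each feature by its two-sample t-statistic and keeps only the top-scoring ones. A minimal sketch on an invented two-class dataset where only the first feature is informative:

```python
import math

def two_sample_t(xs, ys):
    """Two-sample t-statistic for one feature (unequal-variance form)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def select_features(class0, class1, m):
    """Rank features by |t| and keep the top m, as in FAIR's screening step."""
    n_features = len(class0[0])
    scores = []
    for f in range(n_features):
        t = two_sample_t([row[f] for row in class0],
                         [row[f] for row in class1])
        scores.append((abs(t), f))
    scores.sort(reverse=True)
    return [f for _, f in scores[:m]]

# feature 0 separates the classes; feature 1 is pure noise
class0 = [[0.10, 5.0], [0.20, 4.8], [0.00, 5.1], [0.15, 5.2]]
class1 = [[1.00, 5.1], [1.10, 4.9], [0.90, 5.0], [1.05, 5.05]]
top = select_features(class0, class1, m=1)
```

    The classifier is then trained only on the selected features, which is what limits the noise accumulation the paper analyzes.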

  16. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and of patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as a potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for each record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.

  17. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.

  18. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning.

    Science.gov (United States)

    Pärnamaa, Tanel; Parts, Leopold

    2017-05-05

    High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy. Copyright © 2017 Parnamaa and Parts.

  19. The impact of catchment source group classification on the accuracy of sediment fingerprinting outputs.

    Science.gov (United States)

    Pulley, Simon; Foster, Ian; Collins, Adrian L

    2017-06-01

    The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, one which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first scheme used simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and a subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling under the three classification schemes. The cluster-analysis-derived source groups (Scheme 2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared with the surface and subsurface groupings (Scheme 1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into surface and subsurface components (Scheme 3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effect of simulated tracer non-conservatism for the cluster-analysis-based schemes (2 and 3) was primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise (1). 
Modified cluster analysis
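The artificial-mixture tests above rest on a linear mixing model. A minimal sketch of least-squares unmixing for the two-source (surface/subsurface) case, with hypothetical tracer values; this is an illustration of the general idea, not the paper's actual unmixing model:

```python
def unmix_two_sources(mixture, src_a, src_b):
    """Least-squares estimate of the proportion p of source A in a
    mixture, under a linear mixing model m_i = p*a_i + (1-p)*b_i."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixture, src_a, src_b))
    den = sum((a - b) ** 2 for a, b in zip(src_a, src_b))
    p = num / den
    return min(1.0, max(0.0, p))  # proportions constrained to [0, 1]

# Hypothetical tracer signatures (e.g. geochemical concentrations).
surface = [10.0, 4.0, 50.0]
subsurface = [2.0, 9.0, 20.0]
# Artificial mixture: 70% surface, 30% subsurface.
mix = [0.7 * a + 0.3 * b for a, b in zip(surface, subsurface)]
print(round(unmix_two_sources(mix, surface, subsurface), 2))  # 0.7
```

The "tracer variability ratio" in the abstract matters because the larger the separation between `surface` and `subsurface` signatures relative to their within-group spread, the less corruption of `mix` perturbs the recovered proportion.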

  20. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of the coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than that of contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is low accuracy. In this paper a method is presented for the fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation enabling displacement of the characteristic points from the optical measurement to their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
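A much-simplified sketch of the correction idea: pairs of corresponding characteristic points give an offset from the HDLA cloud to the LDHA reference, which is then applied to every point in the cloud. The paper determines a full transformation; this sketch, with hypothetical coordinates, keeps only the translational part:

```python
def estimate_translation(ldha_pts, hdla_pts):
    """Average offset from HDLA characteristic points to their LDHA
    (reference) counterparts. The paper fits a full transformation;
    here only a translation is estimated for illustration."""
    n = len(ldha_pts)
    return tuple(
        sum(l[d] - h[d] for l, h in zip(ldha_pts, hdla_pts)) / n
        for d in range(3)
    )

def apply_translation(cloud, t):
    """Shift every point of the (dense) cloud by the estimated offset."""
    return [(x + t[0], y + t[1], z + t[2]) for x, y, z in cloud]

# Hypothetical data: the HDLA cloud is shifted by (0.2, -0.1, 0.05).
ldha = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
hdla = [(x - 0.2, y + 0.1, z - 0.05) for x, y, z in ldha]
t = estimate_translation(ldha, hdla)
corrected = apply_translation(hdla, t)
```

In practice only the few characteristic (virtual marker) pairs enter the estimation, while the correction is applied to the entire dense optical point cloud.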

  1. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), High-spatial resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  2. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    Full Text Available This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for the classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for the classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
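The Modified Stacked Generalization idea reduces to how the combiner's feature vector is built: base classifiers' outputs, optionally augmented with the raw input pattern. A minimal sketch with toy stand-in classifiers (not the paper's neural network modules):

```python
def combiner_features(x, base_classifiers, include_input=True):
    """Build the combiner's feature vector: the base classifiers'
    outputs, optionally augmented with the raw input pattern
    (the 'modified' variant described above)."""
    outputs = [clf(x) for clf in base_classifiers]
    return outputs + list(x) if include_input else outputs

# Toy base classifiers (hypothetical stand-ins for the NN modules).
clf_a = lambda x: 1.0 if x[0] > 0.5 else 0.0
clf_b = lambda x: 1.0 if x[1] > 0.5 else 0.0

x = [0.8, 0.2]
print(combiner_features(x, [clf_a, clf_b]))         # [1.0, 0.0, 0.8, 0.2]
print(combiner_features(x, [clf_a, clf_b], False))  # [1.0, 0.0]
```

The combiner (itself a trainable classifier) is then fit on these augmented vectors, so it can learn where in input space each base module is reliable.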

  3. High-accuracy measurements of the normal specular reflectance

    International Nuclear Information System (INIS)

    Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy

    2008-01-01

    The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat à l'Énergie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized to high accuracy. The described spectrophotometer is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity

  4. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1996-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  5. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, Eric M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  6. Why is a high accuracy needed in dosimetry

    International Nuclear Information System (INIS)

    Lanzl, L.H.

    1976-01-01

    Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control

  7. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

    Full Text Available One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and from the aerial image processed through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use an extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over state-of-the-art references.

  8. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Guizhou Wang

    2013-01-01

    Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and a region is labeled unclassified if its maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum-distance-to-mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy.
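The area dominant principle in the third step can be sketched as follows (the 0.6 threshold and the class names are hypothetical; the paper does not state its threshold value here):

```python
from collections import Counter

def map_segment_class(pixel_classes, threshold=0.6):
    """Area dominant principle: assign the segment its majority pixel
    class only if that class's area proportion reaches the threshold;
    otherwise mark the segment unclassified for later reclassification."""
    counts = Counter(pixel_classes)
    label, n = counts.most_common(1)[0]
    return label if n / len(pixel_classes) >= threshold else None

# Hypothetical segments with per-pixel SVM labels.
print(map_segment_class(["water"] * 8 + ["veg"] * 2))  # 'water'
print(map_segment_class(["water"] * 5 + ["veg"] * 5))  # None (unclassified)
```

Segments that come back as `None` would then go through the spectral minimum-distance-to-mean reclassification described in the final step.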

  9. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method that we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. 
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  10. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  11. Achieving High Accuracy in Calculations of NMR Parameters

    DEFF Research Database (Denmark)

    Faber, Rasmus

    quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that makes accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical......, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show...... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction...

  12. Two high accuracy digital integrators for Rogowski current transducers

    Science.gov (United States)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot obtain a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response to obtain an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
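A toy illustration of the DC-drift problem: a plain discrete integrator accumulates any DC offset without bound, while an attenuation coefficient slightly below one bounds the drift. This is only a crude stand-in for the PID-corrected Al-Alaoui design described above:

```python
import math

def integrate(signal, dt, leak=1.0):
    """Discrete integrator y[n] = leak*y[n-1] + dt*x[n]. With leak=1.0
    a DC component in x accumulates without bound; an attenuation
    coefficient slightly below 1 bounds the resulting drift."""
    y, out = 0.0, []
    for x in signal:
        y = leak * y + dt * x
        out.append(y)
    return out

dt = 1e-3
# 50 Hz sine plus a small DC offset (mimicking sensor bias).
sig = [math.sin(2 * math.pi * 50 * k * dt) + 0.05 for k in range(5000)]
plain = integrate(sig, dt)
leaky = integrate(sig, dt, leak=0.999)
print(abs(plain[-1]), abs(leaky[-1]))  # the plain output drifts far more
```

The trade-off, which the paper addresses with the PID correction, is that the attenuation also distorts the low-frequency response of the integrator.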

  13. High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.

    2012-07-01

    We have developed a portable, computerized, low-power measurement system called the High Accuracy Piezoelectric Kinemometer (VUAE). The high accuracy obtained by the VUAE makes it suitable as a reference for vehicle speed measurement systems; the VUAE can therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) pairs of ultrasonic transmitter-receivers (E-Rult). The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers detect the echoes produced when a vehicle crosses the barriers. Digital processing of the echo signals yields usable signals, and cross-correlation techniques then allow a highly exact estimation of the vehicle's speed. The log of the crossing times and the known distance between the n ultrasonic barriers enable this highly exact speed estimate. VUAE speed measurements were compared to a speed reference system based on piezoelectric cables. (Author) 11 refs.

  14. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the popularity of applications using high spatial resolution remote sensing imagery, more and more studies have paid attention to object-oriented classification, covering both image segmentation and automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the values of the object attributes most suitable for classification are calculated. Then a number of samples from the image objects are selected as training data for an improved AdaTree algorithm to derive classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain the classification rules. In an experiment with WorldView-2 imagery, the AdaTree-based method showed clear improvements in accuracy and efficiency over an SVM-based method, with the kappa coefficient reaching 0.9242

  15. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics

  16. High-accuracy mass spectrometry for fundamental studies.

    Science.gov (United States)

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation, the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino; (ii) the Experimental Cooler-Storage Ring at GSI, a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses; and (iii) the Penning trap facility SHIPTRAP at GSI, the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the HITRAP project at GSI for fundamental studies with highly charged ions.

  17. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects in such images display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has been widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing data (GeoEye) is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both efficiency and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  18. Read-only high accuracy volume holographic optical correlator

    Science.gov (United States)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    A read-only volume holographic correlator (VHC) is proposed. After all of the correlation database pages have been recorded by angular multiplexing, a stand-alone read-only high accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two separate lasers are employed for recording and readout, the optical alignment tolerance of the laser illumination on the SLM is very tight. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.

  19. Scale Issues Related to the Accuracy Assessment of Land Use/Land Cover Maps Produced Using Multi-Resolution Data: Comments on “The Improvement of Land Cover Classification by Thermal Remote Sensing”. Remote Sens. 2015, 7(7), 8368–8390

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2015-10-01

    Full Text Available Much remote sensing (RS) research focuses on fusing, i.e., combining, multi-resolution/multi-sensor imagery for land use/land cover (LULC) classification. In relation to this topic, Sun and Schulz [1] recently found that a combination of visible-to-near infrared (VNIR; 30 m spatial resolution) and thermal infrared (TIR; 100–120 m spatial resolution) Landsat data led to more accurate LULC classification. They also found that using multi-temporal TIR data alone for classification resulted in comparable (and in some cases higher) classification accuracies than the use of multi-temporal VNIR data, which contrasts with the findings of other recent research [2]. This discrepancy, and the generally very high LULC accuracies achieved by Sun and Schulz (up to 99.2% overall accuracy for a combined VNIR/TIR classification result), can likely be explained by their use of an accuracy assessment procedure which does not take into account the multi-resolution nature of the data. Sun and Schulz used 10-fold cross-validation for accuracy assessment, which is not necessarily inappropriate for RS accuracy assessment in general. However, here it is shown that the typical pixel-based cross-validation approach results in non-independent training and validation data sets when the lower spatial resolution TIR images are used for classification, which causes classification accuracy to be overestimated.
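One way to avoid the non-independence described above is to assign whole low-resolution cells, rather than individual pixels, to cross-validation folds, so training and validation pixels never share a TIR value. A minimal sketch (the 4:1 resolution factor loosely mimics 30 m VNIR vs. ~120 m TIR; this is an illustration, not the authors' procedure):

```python
def coarse_cell(row, col, factor=4):
    """Index of the low-resolution (e.g. TIR) cell covering a
    high-resolution pixel; factor 4 mimics 30 m vs. 120 m data."""
    return (row // factor, col // factor)

def grouped_split(pixels, n_folds=2):
    """Assign whole coarse cells, not individual pixels, to folds so
    that no fold shares a low-resolution cell with another fold."""
    cells = sorted({coarse_cell(r, c) for r, c in pixels})
    fold_of_cell = {cell: i % n_folds for i, cell in enumerate(cells)}
    folds = [[] for _ in range(n_folds)]
    for r, c in pixels:
        folds[fold_of_cell[coarse_cell(r, c)]].append((r, c))
    return folds

pixels = [(r, c) for r in range(8) for c in range(8)]
folds = grouped_split(pixels)
# No coarse cell contributes pixels to more than one fold:
cells_per_fold = [{coarse_cell(r, c) for r, c in f} for f in folds]
print(cells_per_fold[0] & cells_per_fold[1])  # set()
```

With plain pixel-based 10-fold CV, by contrast, pixels from the same coarse cell routinely land in both the training and validation sets, which is exactly the leakage the comment identifies.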

  20. Basic visual dysfunction allows classification of patients with schizophrenia with exceptional accuracy.

    Science.gov (United States)

    González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M

    2014-10-01

    Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
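Predictive values follow directly from the confusion counts. A small sketch with hypothetical absolute numbers chosen only to reproduce the reported 100%/77% figures (the paper's actual counts are not given in this abstract):

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion counts."""
    ppv = tp / (tp + fp)  # of those classified SZ, fraction truly SZ
    npv = tn / (tn + fn)  # of those classified healthy, fraction truly healthy
    return ppv, npv

# Hypothetical counts: no healthy control is classified as SZ (PPV 100%),
# but some patients are missed, lowering NPV to ~77%.
ppv, npv = predictive_values(tp=38, fp=0, tn=34, fn=10)
print(round(ppv, 2), round(npv, 2))  # 1.0 0.77
```

Note that, unlike sensitivity and specificity, predictive values depend on the SZ/control ratio of the sample, so they would shift in a screening population with lower prevalence.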

  1. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  2. Fast Segmentation and Classification of Very High Resolution Remote Sensing Data Using SLIC Superpixels

    Directory of Open Access Journals (Sweden)

    Ovidiu Csillik

    2017-03-01

    Full Text Available Speed and accuracy are important factors when dealing with time-constrained events for disaster, risk, and crisis-management support. Object-based image analysis can be a time-consuming task in extracting information from large images, because most segmentation algorithms use the pixel grid for the initial object representation. It would be more natural and efficient to work with perceptually meaningful entities that are derived from pixels using a low-level grouping process (superpixels). Firstly, we tested a new workflow for image segmentation of remote sensing data, starting the multiresolution segmentation (MRS, using the ESP2 tool) from the superpixel level and aiming at reducing the amount of time needed to automatically partition relatively large datasets of very high resolution remote sensing data. Secondly, we examined whether a Random Forest classification based on an oversegmentation produced by the Simple Linear Iterative Clustering (SLIC) superpixel algorithm performs similarly to a traditional object-based classification in terms of accuracy. Tests were applied on QuickBird and WorldView-2 data with different extents, scene content complexities, and numbers of bands to assess how the computational time and classification accuracy are affected by these factors. The proposed segmentation approach is compared with the traditional one, starting the MRS from the pixel level, regarding the geometric accuracy of the objects and the computational time. The computational time was reduced in all cases, the biggest improvement being from 5 h 35 min to 13 min for a WorldView-2 scene with eight bands and an extent of 12.2 million pixels, while the geometric accuracy is kept similar or slightly better. SLIC superpixel-based classification had similar or better overall accuracy values when compared to MRS-based classification, but the results were obtained in a fast manner and avoiding the parameterization of the MRS. These two approaches
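    The superpixel-then-classify idea can be illustrated with a toy stand-in: instead of running SLIC, a regular grid of blocks plays the role of the oversegmentation, per-segment mean spectra serve as object features, and a Random Forest classifies the segments. All data, segment sizes, and labels below are made up; this is a sketch of the general approach, not the authors' pipeline:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Toy "image": 60x60 pixels, 4 bands; two ground-truth classes split left/right
    img = rng.normal(size=(60, 60, 4))
    img[:, 30:, :] += 2.0                      # right half is spectrally distinct
    labels = np.zeros((60, 60), dtype=int)
    labels[:, 30:] = 1

    # Stand-in for SLIC: a regular grid of 10x10-pixel "superpixels"
    seg = np.arange(36).reshape(6, 6).repeat(10, axis=0).repeat(10, axis=1)

    # Aggregate per-superpixel mean spectra (the object-level features)
    n_seg = seg.max() + 1
    feats = np.array([img[seg == s].mean(axis=0) for s in range(n_seg)])
    seg_labels = np.array([np.bincount(labels[seg == s]).argmax() for s in range(n_seg)])

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(feats, seg_labels)
    acc = (clf.predict(feats) == seg_labels).mean()
    ```

    Working at the segment level reduces the classifier's input from 3600 pixels to 36 objects, which is where the reported speed-up comes from.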

  3. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
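    The Youden Index cutoff mentioned above picks the score threshold that maximizes sensitivity + specificity − 1. A small self-contained sketch on made-up scores (not the study's model outputs):

    ```python
    import numpy as np

    def youden_cutoff(scores, y):
        """Return the threshold maximizing Youden's J = sensitivity + specificity - 1."""
        best_t, best_j = None, -1.0
        for t in np.unique(scores):
            pred = scores >= t
            sens = (pred & (y == 1)).sum() / (y == 1).sum()
            spec = (~pred & (y == 0)).sum() / (y == 0).sum()
            j = sens + spec - 1
            if j > best_j:
                best_j, best_t = j, t
        return best_t, best_j

    scores = np.array([0.1, 0.2, 0.3, 0.8, 0.9, 0.95])
    y      = np.array([0,   0,   0,   1,   1,   1])
    t, j = youden_cutoff(scores, y)
    ```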

  4. Synchrotron accelerator technology for proton beam therapy with high accuracy

    International Nuclear Information System (INIS)

    Hiramoto, Kazuo

    2009-01-01

    Proton beam therapy was initially applied to head and neck cancers, but it has since been extended to prostate, lung and liver cancers; thus the need for a pencil beam scanning method is increasing, as this method further intensifies the dose concentration property of the proton beam. The Hitachi group supplied the first pencil beam scanning therapy system, to the M. D. Anderson Hospital in the United States, and it has been operational since May 2008. The Hitachi group has been developing its proton therapy system toward high-accuracy therapy that concentrates the dose in the diseased part, which may be located at various depths and sometimes has a complicated shape. The author describes here the synchrotron accelerator technology that is an important element of the proton therapy system. (K.Y.)

  5. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “Concept/conceptualization,”“categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle...... and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  6. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  7. Object-Based Classification of Grasslands from High Resolution Satellite Image Time Series Using Gaussian Mean Map Kernels

    Directory of Open Access Journals (Sweden)

    Mailys Lopes

    2017-07-01

    Full Text Available This paper deals with the classification of grasslands using high resolution satellite image time series. The grasslands considered in this work are semi-natural elements in fragmented landscapes, i.e., they are heterogeneous and small. The first contribution of this study is to account for grassland heterogeneity while working at the object level, by modeling the distribution of a grassland's pixels with a Gaussian distribution. To measure the similarity between two grasslands, a new kernel is proposed as a second contribution: the α-Gaussian mean kernel. It allows one to weight the influence of the covariance matrix when comparing two Gaussian distributions. This kernel is introduced into support vector machines for the supervised classification of grasslands from southwest France. A dense intra-annual multispectral time series from the Formosat-2 satellite is used for the classification of grasslands' management practices, while an inter-annual NDVI time series from Formosat-2 is used to discriminate old and young grasslands. Results are compared to other existing pixel- and object-based approaches in terms of classification accuracy and processing time. The proposed method is shown to be a good compromise between processing speed and classification accuracy. It can adapt to the classification constraints, and it encompasses several similarity measures known in the literature. It is appropriate for the classification of small and heterogeneous objects such as grasslands.
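    The abstract does not give the kernel's closed form, but the idea of a covariance-weighted similarity between two Gaussian-modelled objects can be sketched as follows. Both the functional form and the role of `alpha` here are assumptions for illustration, not the paper's definition:

    ```python
    import numpy as np

    def alpha_gaussian_mean_kernel(m1, S1, m2, S2, alpha=1.0):
        """Hypothetical sketch of a similarity between two objects modelled as
        Gaussians N(m1, S1) and N(m2, S2); alpha scales the influence of the
        covariance matrices on the comparison."""
        S = alpha * (S1 + S2) + np.eye(len(m1)) * 1e-6   # regularized
        d = m1 - m2
        return float(np.exp(-0.5 * d @ np.linalg.solve(S, d)))

    m, S = np.zeros(2), np.eye(2)
    k_same = alpha_gaussian_mean_kernel(m, S, m, S)          # identical objects
    k_far  = alpha_gaussian_mean_kernel(m, S, m + 10.0, S)   # well-separated means
    ```

    A precomputed Gram matrix of such values can be passed to `sklearn.svm.SVC(kernel="precomputed")`, which matches how the abstract says the kernel is introduced into SVMs.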

  8. High accuracy magnetic field mapping of the LEP spectrometer magnet

    CERN Document Server

    Roncarolo, F

    2000-01-01

    The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to detect with high accuracy the mass of the electro-weak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations, once they reach an energy high enough for the experimental purposes. During head-to-head collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. In this condition the energy is quickly converted into other particles which tend to go away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi-purpose detectors covering a solid angle of alm...

  9. Object-based vegetation classification with high resolution remote sensing imagery

    Science.gov (United States)

    Yu, Qian

    Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. This evaluation analysis answered several interesting scientific questions.

  10. Use of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of the risk classification of type 1 diabetes.

    Science.gov (United States)

    Sosenko, Jay M; Skyler, Jay S; Mahon, Jeffrey; Krischer, Jeffrey P; Greenbaum, Carla J; Rafkin, Lisa E; Beam, Craig A; Boulware, David C; Matheson, Della; Cuthbertson, David; Herold, Kevan C; Eisenbarth, George; Palmer, Jerry P

    2014-04-01

    OBJECTIVE We studied the utility of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of type 1 diabetes (T1D) risk classification in TrialNet Natural History Study (TNNHS) participants. RESEARCH DESIGN AND METHODS The cumulative incidence of T1D was compared between normoglycemic individuals with DPTRS values >7.00 and dysglycemic individuals in the TNNHS (n = 991). It was also compared between individuals with DPTRS values <7.00 and >7.00 among those with dysglycemia and those with multiple autoantibodies in the TNNHS. DPTRS values >7.00 were compared with dysglycemia for characterizing risk in Diabetes Prevention Trial-Type 1 (DPT-1) (n = 670) and TNNHS participants. The reliability of DPTRS values >7.00 was compared with dysglycemia in the TNNHS. RESULTS The cumulative incidence of T1D for normoglycemic TNNHS participants with DPTRS values >7.00 was comparable to that for those with dysglycemia. Among those with dysglycemia, the cumulative incidence was much higher for those with DPTRS values >7.00 than for those with values <7.00. Dysglycemic individuals in DPT-1 were at much higher risk for T1D than those with dysglycemia in the TNNHS, whereas risk was more consistent between cohorts for those with DPTRS values >7.00. The proportion in the TNNHS reverting from dysglycemia to normoglycemia at the next visit was higher than the proportion reverting from DPTRS values >7.00 to values <7.00 (36 vs. 23%). CONCLUSIONS DPTRS thresholds can improve T1D risk classification accuracy by identifying high-risk normoglycemic and low-risk dysglycemic individuals. The 7.00 DPTRS threshold characterizes risk more consistently between populations and has greater reliability than dysglycemia.

  11. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
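    The abstract does not fully specify the plot's axes; one plausible realization of the idea (an assumption for illustration, not the paper's exact construction) is to give each point its distance to the nearest same-class neighbour and to the nearest other-class neighbour, so overlap and outliers become visible in two dimensions regardless of the feature dimensionality:

    ```python
    import numpy as np

    def similarity_dissimilarity(X, y):
        """For each point: distance to its nearest same-class neighbour and to
        its nearest other-class neighbour.  Points whose other-class distance
        exceeds their same-class distance are likely to be classified correctly."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        same = np.where(y[:, None] == y[None, :], D, np.inf).min(axis=1)
        other = np.where(y[:, None] != y[None, :], D, np.inf).min(axis=1)
        return same, other

    # Two well-separated one-dimensional classes (toy data)
    X = np.array([[0.0], [0.1], [5.0], [5.2]])
    y = np.array([0, 0, 1, 1])
    s, d = similarity_dissimilarity(X, y)
    ```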

  12. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

    In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark object (D) classes. Because of the sheer volume of data and the compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into four classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA, using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to classification maps derived from high-spatial resolution data for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
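    Fully constrained unmixing (non-negative abundances that sum to one) is commonly approximated by folding the sum-to-one constraint into a non-negative least squares solve via a heavily weighted row of ones. A sketch with made-up endmember spectra (not the WELD data or the authors' exact solver):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fcls_unmix(pixel, endmembers, delta=1e3):
        """Approximate fully constrained least squares unmixing: abundances are
        non-negative (enforced by nnls) and sum to one (enforced softly by a
        heavily weighted extra equation)."""
        E = np.vstack([endmembers.T, delta * np.ones(endmembers.shape[0])])
        b = np.append(pixel, delta)
        a, _ = nnls(E, b)
        return a

    # Hypothetical 4-band endmembers for substrate, vegetation, dark objects
    E = np.array([[0.80, 0.70, 0.60, 0.50],
                  [0.10, 0.20, 0.80, 0.90],
                  [0.05, 0.05, 0.05, 0.05]])
    pixel = 0.5 * E[0] + 0.5 * E[1]     # a 50/50 substrate-vegetation mixture
    a = fcls_unmix(pixel, E)
    ```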

  13. Improved Wetland Classification Using Eight-Band High Resolution Satellite Imagery and a Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Charles R. Lane

    2014-12-01

    Full Text Available Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of wetland maps derived with moderate resolution imagery and traditional techniques have been limited and often unsatisfactory. We explored and evaluated the utility of a newly launched high-resolution, eight-band satellite system (WorldView-2; WV2) for identifying and classifying freshwater deltaic wetland vegetation and aquatic habitats in the Selenga River Delta of Lake Baikal, Russia, using a hybrid approach and a novel application of Indicator Species Analysis (ISA). We achieved an overall classification accuracy of 86.5% (Kappa coefficient: 0.85) for 22 classes of aquatic and wetland habitats and found that additional metrics, such as the Normalized Difference Vegetation Index and image texture, were valuable for improving the overall classification accuracy and particularly for discriminating among certain habitat classes. Our analysis demonstrated that including WV2’s four spectral bands from parts of the spectrum less commonly used in remote sensing analyses, along with the more traditional bandwidths, contributed to the increase in the overall classification accuracy by ~4% overall, but with considerable increases in our ability to discriminate certain communities. The coastal band improved differentiating open water and aquatic (i.e., vegetated) habitats, and the yellow, red-edge, and near-infrared 2 bands improved discrimination among different vegetated aquatic and terrestrial habitats. The use of ISA provided statistical rigor in developing associations between spectral classes and field-based data. Our analyses demonstrated the utility of a hybrid approach and the benefit of additional bands and metrics in providing the first spatially explicit mapping of a large and heterogeneous wetland system.
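    The Normalized Difference Vegetation Index credited above with improving class discrimination is a simple ratio of the near-infrared and red bands. A minimal sketch:

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized Difference Vegetation Index; eps guards against 0/0."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    # Dense vegetation reflects NIR strongly relative to red
    v = ndvi(0.6, 0.2)
    ```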

  14. Accuracy assessment of high-rate GPS measurements for seismology

    Science.gov (United States)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario, since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites in the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  15. Accuracy assessment of cadastral maps using high resolution aerial photos

    Directory of Open Access Journals (Sweden)

    Alwan Imzahim

    2018-01-01

    Full Text Available A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq / Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clarke 1880, while the widely used reference system for navigation purposes (GPS and GNSS) uses the World Geodetic System 1984 (WGS84) as its base reference datum. The objective of this paper is to produce a cadastral map at scale 1:500 (metric scale) by using 2009 aerial photographs with a high ground spatial resolution of 10 cm, referenced to the WGS84 system. The accuracy assessment of the cadastral map updating approach for urban large-scale cadastral maps (1:500-1:1000) was ±0.115 meters, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.

  16. Determination of UAV position using high accuracy navigation platform

    Directory of Open Access Journals (Sweden)

    Ireneusz Kubicki

    2016-07-01

    Full Text Available The choice of a navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on it requires highly precise information about the object's position. The presented exemplary solution of such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices or with a complete navigation system. The position and spatial orientation errors of the measurement platform influence the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deterioration or loss of images from the SAR. Consequently, it is necessary to perform operations to reduce or eliminate the impact of the sensors' errors on the UAV position accuracy. Compromise solutions must be sought between newer, better technologies and solutions in the field of software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration

  17. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    Science.gov (United States)

    Jeong, J.; Lee, I.

    2016-06-01

    Generating a highly precise map is growing in importance with the development of autonomous driving vehicles. A highly precise map has a precision at the centimetre level, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and they will be utilized to generate a highly precise road map.
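    The per-point surface normals underlying such geometrical features are typically estimated by a local PCA: the normal is the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue. A sketch on a synthetic planar patch (an illustration of the standard technique, not the authors' descriptor):

    ```python
    import numpy as np

    def estimate_normal(neighborhood):
        """Surface normal of a local point neighbourhood: the eigenvector of
        the 3x3 covariance matrix with the smallest eigenvalue."""
        C = np.cov(neighborhood.T)
        w, v = np.linalg.eigh(C)     # eigh returns eigenvalues in ascending order
        return v[:, 0]

    # Points sampled from the z = 0 plane: the normal should align with the z axis
    pts = np.random.default_rng(1).uniform(size=(50, 3))
    pts[:, 2] = 0.0
    n = estimate_normal(pts)
    ```

    The resulting normals (or angles derived from them) can then be fed as features to an SVM, as the abstract describes.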

  18. CLASSIFICATION OF LIDAR DATA FOR GENERATING A HIGH-PRECISION ROADWAY MAP

    Directory of Open Access Journals (Sweden)

    J. Jeong

    2016-06-01

    Full Text Available Generating a highly precise map is growing in importance with the development of autonomous driving vehicles. A highly precise map has a precision at the centimetre level, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and they will be utilized to generate a highly precise road map.

  19. Modified sine bar device measures small angles with high accuracy

    Science.gov (United States)

    Thekaekara, M.

    1968-01-01

    Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.

  20. The Effects of Point or Polygon Based Training Data on RandomForest Classification Accuracy of Wetlands

    Directory of Open Access Journals (Sweden)

    Jennifer Corcoran

    2015-04-01

    Full Text Available Wetlands are dynamic in space and time, providing varying ecosystem services. Field reference data for both training and assessment of wetland inventories in the State of Minnesota are typically collected as GPS points over wide geographical areas and at infrequent intervals. This status quo makes it difficult to keep updated maps of wetlands with adequate accuracy, efficiency, and consistency to monitor change. Furthermore, point reference data may not be representative of the prevailing land cover type for an area, due to point location or heterogeneity within the ecosystem of interest. In this research, we present techniques for training a land cover classification for two study sites in different ecoregions by implementing the RandomForest classifier in three ways: (1) field- and photo-interpreted points; (2) a fixed window surrounding the points; and (3) image objects that intersect the points. Additional assessments are made to identify the key input variables. We conclude that the image object area training method is the most accurate, and the most important variables include: compound topographic index, summer season green and blue bands, and grid statistics from LiDAR point cloud data, especially those that relate to the height of the return.

  1. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Science.gov (United States)

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. In order to improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) are exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. They can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses a significantly smaller number of features, and the overall accuracy has been obviously improved.

  2. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Directory of Open Access Journals (Sweden)

    Xiangbin Zhu

    Full Text Available Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. In order to improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) are exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. They can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses a significantly smaller number of features, and the overall accuracy has been obviously improved.

  3. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military and disaster-relief applications. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from a traditional convolutional neural network, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, which includes a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract middle-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is classified by Softmax. Experiments were conducted on two small public datasets (UCM and SIRI). Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness, with average accuracies of 89.3% and 88.3% on the two datasets.
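
    The abstract does not give the fusion details, so as a hedged sketch only: the simplest form of multi-scale fusion is concatenating per-scale feature vectors and feeding them to a linear + Softmax head. The feature values and weights below are invented for illustration.

    ```python
    import math

    def softmax(z):
        """Numerically stable softmax over a list of logits."""
        m = max(z)
        exps = [math.exp(v - m) for v in z]
        s = sum(exps)
        return [e / s for e in exps]

    def fuse_and_classify(features_per_scale, weights, biases):
        """Concatenate per-scale feature vectors (a simple stand-in for the
        paper's joint multi-scale fusion), then apply a linear + Softmax head."""
        fused = [v for scale in features_per_scale for v in scale]
        logits = [sum(w * x for w, x in zip(row, fused)) + b
                  for row, b in zip(weights, biases)]
        return softmax(logits)

    # Hypothetical features from two scales and a 2-class head.
    probs = fuse_and_classify(
        [[0.5, 1.0], [0.2]],                  # features from two scales
        [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]],   # 2 x 3 weight matrix
        [0.0, 0.0],
    )
    print(probs)
    ```

    The JMCNN's actual fusion is learned end-to-end; this sketch only shows the classifier-head step the abstract names explicitly.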

  4. Combined Scintigraphy and Tumor Marker Analysis Predicts Unfavorable Histopathology of Neuroblastic Tumors with High Accuracy.

    Directory of Open Access Journals (Sweden)

    Wolfgang Peter Fendler

    Full Text Available Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels and blood levels of neuron-specific enolase (NSE) were recorded. Accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combining the TLCRR/NSE criteria reduced false negative findings from 11 and 9, respectively, to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.
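
    Evaluating a single cut-off like TLCRR ≥ 2.0 reduces to counting the 2x2 table at that threshold. A minimal sketch, with hypothetical ratios and outcomes (not the study's data):

    ```python
    def sens_spec(values, labels, cutoff):
        """Sensitivity and specificity of the rule 'value >= cutoff predicts
        positive'. labels: True marks the positive class (here, UH)."""
        tp = sum(v >= cutoff and y for v, y in zip(values, labels))
        fn = sum(v < cutoff and y for v, y in zip(values, labels))
        tn = sum(v < cutoff and not y for v, y in zip(values, labels))
        fp = sum(v >= cutoff and not y for v, y in zip(values, labels))
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical tumor-to-liver count-rate ratios and histopathology outcomes.
    tlcrr = [1.2, 2.5, 3.1, 1.8, 2.2, 0.9]
    uh =    [False, True, True, True, False, False]
    sens, spec = sens_spec(tlcrr, uh, cutoff=2.0)
    print(sens, spec)
    ```

    Sweeping the cutoff over all observed values and plotting (1 - specificity, sensitivity) yields the ROC curve whose area the study reports.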

  5. High precision frequency estimation for harpsichord tuning classification

    OpenAIRE

    Tidhar, D.; Mauch, M.; Dixon, S.

    2010-01-01

    We present a novel music signal processing task of classifying the tuning of a harpsichord from audio recordings of standard musical works. We report the results of a classification experiment involving six different temperaments, using real harpsichord recordings as well as synthesised audio data. We introduce the concept of conservative transcription, and show that existing high-precision pitch estimation techniques are sufficient for our task if combined with conservative transcription. In...

  6. River floodplain vegetation classification using multi-temporal high-resolution colour infrared UAV imagery.

    NARCIS (Netherlands)

    van Iersel, W.K.; Straatsma, M.W.; Addink, E.A.; Middelkoop, H.

    2016-01-01

    To evaluate floodplain functioning, monitoring of its vegetation is essential. Although airborne imagery is widely applied for this purpose, classification accuracy (CA) remains low for grassland (<88%) and herbaceous vegetation (<57%) due to the spectral and structural similarity of these vegetation types.

  7. The edge-preservation multi-classifier relearning framework for the classification of high-resolution remotely sensed imagery

    Science.gov (United States)

    Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya

    2018-04-01

    In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve spatial details in the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, chosen for their complementary characteristics. To better characterize complex scenes in remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantifies both the landscape composition and the spatial configuration using the initial classification results. In addition, a novel tri-training strategy is proposed to counter the over-smoothing effect of relearning by automatically selecting training samples with low classification certainty, which are generally distributed in or near edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic outputs of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework in terms of both edge and non-edge accuracy.

  8. Measurement system with high accuracy for laser beam quality.

    Science.gov (United States)

    Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming

    2015-05-20

    At present, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and poor repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of a mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measurement deviation of the M2 factor is less than 0.6%.

  9. Supervised Classification High-Resolution Remote-Sensing Image Based on Interval Type-2 Fuzzy Membership Function

    Directory of Open Access Journals (Sweden)

    Chunyan Wang

    2018-05-01

    Full Text Available Because of the degradation of classification accuracy caused by the uncertainty of pixel class and classification decisions in high-resolution remote-sensing images, we propose a supervised classification method based on an interval type-2 fuzzy membership function for high-resolution remote-sensing images. We analyze the data features of a high-resolution remote-sensing image and construct a type-1 membership function model in a homogeneous region by supervised sampling in order to characterize the uncertainty of the pixel class. On the basis of the fuzzy membership function model in the homogeneous region, and in accordance with the 3σ criterion of the normal distribution, we propose a method for modeling three types of interval type-2 membership functions, and we analyze the different types of functions to better express the uncertainty of pixel class captured by the type-1 fuzzy membership function and to enhance the accuracy of the classification decision. Following the principle that importance increases as the distance decreases between the original, upper, and lower fuzzy memberships of the training data and the corresponding frequency values in the histogram, we use the weighted average of the three types of fuzzy membership as the new fuzzy membership of the pixel to be classified, and then integrate it with the neighborhood pixel relations to construct a classification decision model. We use the proposed method to classify real high-resolution remote-sensing images and synthetic images, and we qualitatively and quantitatively evaluate the test results. The results show that a higher classification accuracy can be achieved with the proposed algorithm.
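
    The core idea of an interval type-2 membership function is a type-1 (here Gaussian) membership blurred into a lower/upper bound pair. A minimal sketch, where blurring the standard deviation by a fixed fraction is an illustrative choice and not the paper's exact 3σ-based construction:

    ```python
    import math

    def gaussian_mf(x, mean, sigma):
        """Type-1 Gaussian membership degree of pixel value x."""
        return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

    def interval_type2_mf(x, mean, sigma, spread=0.5):
        """A simple interval type-2 footprint of uncertainty: perturb the
        standard deviation by +/- spread*sigma and return the resulting
        (lower, upper) membership bounds for pixel value x."""
        a = gaussian_mf(x, mean, sigma * (1 - spread))
        b = gaussian_mf(x, mean, sigma * (1 + spread))
        return min(a, b), max(a, b)

    low, up = interval_type2_mf(x=1.0, mean=0.0, sigma=1.0)
    print(low, up)
    ```

    A classification decision can then weight the original, lower, and upper memberships, as the abstract describes, before incorporating neighborhood relations.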

  10. Classification of High-Mountain Vegetation Communities within a Diverse Giant Mountains Ecosystem Using Airborne APEX Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Adriana Marcinkowska-Ochtyra

    2018-04-01

    Full Text Available Mapping plant communities is a difficult and time-consuming endeavor. Methods relying on field surveys deliver high-quality data but are usually limited to relatively small areas. In this paper we apply airborne hyperspectral data to vegetation mapping in remote and hard-to-reach areas. We classified 22 vegetation communities in the Giant Mountains on 3.12-m Airborne Prism Experiment (APEX) hyperspectral images, registered in 288 spectral bands (10 September 2012). Support Vector Machines (SVM) was used as the classification algorithm. The APEX data were corrected geometrically and atmospherically, and three dimensionality reduction methods were evaluated to select the best dataset. As reference we used a non-forest vegetation map containing the vegetation communities of the Polish Karkonosze National Park from 2002, an orthophotomap, and field survey data from 2013 to 2014. We obtained post-classification maps of 22 vegetation communities, lakes and areas without any vegetation. Iterative accuracy assessment, repeated 100 times, was used to obtain the most objective results for individual communities. The median overall accuracy (OA) was 84%. Fourteen of the twenty-four classes were classified with more than 80% producer accuracy (PA), and sixteen of twenty-four with more than 80% user accuracy (UA). APEX data and SVM, together with iterative accuracy assessment, are useful for classifying mountain communities. This can support the management of both the Polish and Czech national parks by providing information about the diversity of communities in the whole transboundary area and helping with identification, especially in an environment being changed by human activity.
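
    Iterative accuracy assessment amounts to repeating a random test-set draw many times and summarizing the distribution of accuracies (the study reports the median OA). A hedged sketch with an invented labelled dataset and decision rule:

    ```python
    import random
    import statistics

    def iterative_accuracy(samples, classify, n_iter=100, test_frac=0.3, seed=0):
        """Draw a random test set n_iter times and return the median overall
        accuracy of `classify` over the draws."""
        rng = random.Random(seed)
        accs = []
        for _ in range(n_iter):
            test = rng.sample(samples, int(len(samples) * test_frac))
            correct = sum(classify(x) == y for x, y in test)
            accs.append(correct / len(test))
        return statistics.median(accs)

    # Hypothetical labelled pixels: feature value -> class, with a noisy rule.
    data = [(v, "meadow" if v < 5 else "scrub") for v in range(10)]
    rule = lambda x: "meadow" if x <= 5 else "scrub"   # misclassifies v == 5
    acc = iterative_accuracy(data, rule)
    print(acc)
    ```

    Per-class producer and user accuracies can be summarized the same way by tallying a confusion matrix inside each iteration instead of a single accuracy.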

  11. The accuracy of International Classification of Diseases coding for dental problems not associated with trauma in a hospital emergency department.

    Science.gov (United States)

    Figueiredo, Rafael L F; Singhal, Sonica; Dempster, Laura; Hwang, Stephen W; Quinonez, Carlos

    2015-01-01

    Emergency department (ED) visits for nontraumatic dental conditions (NTDCs) may be a sign of unmet need for dental care. The objective of this study was to determine the accuracy of International Classification of Diseases codes (ICD-10-CA) for ED visits for NTDCs. ED visits in 2008-2009 at one hospital in Toronto were identified if the discharge diagnosis in the administrative database was an ICD-10-CA code for an NTDC (K00-K14). A random sample of 100 visits was selected, and the medical records for these visits were reviewed by a dentist. The descriptions of the clinical signs and symptoms were evaluated, and a diagnosis was assigned. This diagnosis was compared with the diagnosis assigned by the physician and the code assigned to the visit. The 100 ED visits reviewed were associated with 16 different ICD-10-CA codes for NTDCs. Only 2 percent of these visits were clearly caused by trauma. The code K0887 (toothache) was the most frequent diagnostic code (31 percent). We found 43.3 percent disagreement on the discharge diagnosis reported by the physician, and 58.0 percent disagreement on the code assigned by the abstractor in the administrative database, compared with the diagnosis suggested by the dentist reviewing the chart. There are substantial discrepancies between the ICD-10-CA diagnoses assigned in administrative databases and the diagnoses assigned by a dentist reviewing the chart retrospectively. However, ICD-10-CA codes can be used to accurately identify ED visits for NTDCs. © 2015 American Association of Public Health Dentistry.
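
    The disagreement percentages above are simple pairwise comparisons between two diagnosis assignments. A minimal sketch with hypothetical code lists (the codes below are placeholders, not the study's records):

    ```python
    def percent_disagreement(coder_a, coder_b):
        """Fraction of visits where two diagnosis assignments disagree."""
        return sum(a != b for a, b in zip(coder_a, coder_b)) / len(coder_a)

    # Hypothetical chart-review comparison: physician codes vs dentist review.
    physician = ["K08", "K04", "K12", "K08", "K02"]
    dentist   = ["K08", "K08", "K12", "K04", "K02"]
    print(percent_disagreement(physician, dentist))
    ```

    Agreement studies often also report a chance-corrected statistic such as Cohen's kappa; raw percent disagreement, as here, is the figure the abstract quotes.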

  12. Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients

    International Nuclear Information System (INIS)

    Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.

    2016-01-01

    Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and a high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients investigated for coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3% for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers, applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern-generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of high-definition CT angiography (HD-CTCA) has been assessed. • Invasive coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
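
    The four per-patient figures all come from one 2x2 table. The counts below are back-calculated to be consistent with the reported moderate-stenosis figures (206 positives out of 300 patients) — an inference for illustration, not the study's raw table:

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Per-patient sensitivity, specificity, PPV and NPV from a 2x2 table."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Counts reconstructed from the reported percentages for moderate stenosis.
    m = diagnostic_metrics(tp=200, fp=2, tn=92, fn=6)
    print(m)
    ```

    Rounding these back to one decimal place reproduces the reported 97.1% / 97.9% / 99% / 93.9%.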

  13. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more...... sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear....... The purpose is to ensure that full accuracy is maintained between service intervals and to advice when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures...

  14. Methodology for GPS Synchronization Evaluation with High Accuracy

    OpenAIRE

    Li Zan; Braun Torsten; Dimitrova Desislava

    2015-01-01

    Clock synchronization in the order of nanoseconds is one of the critical factors for time based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper we are particularly interested in GPS based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...

  16. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continous ANT Colony Algorithm with Emphasis on Building Detection

    Science.gov (United States)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Because of the limited number of spectral bands in HSR images, using additional features can improve accuracy; however, adding features also increases the probability of dependent features being present, which reduces accuracy. In addition, several parameters must be determined for Support Vector Machine (SVM) classification. It is therefore necessary to simultaneously determine the classification parameters and select independent features according to the image type, and optimization algorithms are an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational cost for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence from image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. Compared with optimized pixel-based SVM classification, the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively, and its Kappa coefficient was 6% higher than that of RF classification. The processing time of the proposed method was relatively low because the unit of analysis is the image object. These results show the superiority of the proposed method in terms of both time and accuracy.
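
    The paper's continuous ACO tunes SVM parameters and feature choices jointly; as a hedged, very small sketch of the ACO_R-style idea (keep an archive of good solutions, let each ant sample near an archive member), here is a search over a toy objective standing in for cross-validation error over (C, gamma). Archive size, spread factor and the objective are all illustrative assumptions:

    ```python
    import random

    def continuous_aco(objective, bounds, n_ants=10, archive_size=5, iters=30, seed=1):
        """Tiny continuous-ACO sketch: maintain an archive of the best points
        found and sample each new candidate from a Gaussian centred on one of
        the top archive members, clipped to the search bounds."""
        rng = random.Random(seed)
        dim = len(bounds)
        archive = [[rng.uniform(*bounds[d]) for d in range(dim)]
                   for _ in range(archive_size)]
        archive.sort(key=objective)
        for _ in range(iters):
            for _ in range(n_ants):
                guide = rng.choice(archive[:3])        # bias toward best members
                cand = []
                for d in range(dim):
                    spread = (bounds[d][1] - bounds[d][0]) * 0.1
                    v = rng.gauss(guide[d], spread)
                    cand.append(min(max(v, bounds[d][0]), bounds[d][1]))
                archive.append(cand)
            archive.sort(key=objective)
            archive = archive[:archive_size]
        return archive[0]

    # Toy objective with optimum at C=1.0, gamma=0.5 (a stand-in for CV error).
    obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2
    best = continuous_aco(obj, bounds=[(0.0, 10.0), (0.0, 2.0)])
    print(best)
    ```

    In the paper's setting, `objective` would evaluate an object-based SVM with the candidate parameters and feature subset; everything else about the search structure carries over.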

  18. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches rely strongly on the extraction of local features and complicated coding strategies, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, using unsupervised feature learning, we first learn a set of optimal “filters” from large quantities of randomly sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature maps to integer-valued feature maps. Finally, statistical histograms computed on the integer-valued feature maps are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC is much faster and achieves comparable classification performance. In addition, we propose two extensions to FBC, i.e., a spatial co-occurrence matrix and different visual saliency maps, to further improve its final classification accuracy.
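
    The binarize / hash / histogram steps are straightforward to sketch: threshold each feature map, pack the per-pixel bits into one integer code, and histogram the codes. The maps and threshold below are invented, and the learned-filter convolution step is omitted:

    ```python
    def binary_hash_maps(feature_maps, threshold=0.0):
        """Binarize a stack of feature maps, pack each pixel's bits into a
        single integer code, and return the histogram of codes -- the core
        binarize/hash/histogram steps of the FBC idea in simplified form."""
        n_maps = len(feature_maps)
        h, w = len(feature_maps[0]), len(feature_maps[0][0])
        hist = [0] * (2 ** n_maps)
        for i in range(h):
            for j in range(w):
                code = 0
                for k in range(n_maps):
                    bit = 1 if feature_maps[k][i][j] > threshold else 0
                    code = (code << 1) | bit
                hist[code] += 1
        return hist

    # Two hypothetical 2x2 feature maps -> 4-bin histogram of 2-bit codes.
    maps = [
        [[0.5, -0.1], [0.3, -0.2]],
        [[-0.4, 0.2], [0.6, -0.8]],
    ]
    print(binary_hash_maps(maps))
    ```

    The resulting histogram plays the role of the BOW histogram: a fixed-length global descriptor that any standard classifier can consume.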

  19. The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae: a large study in a Chinese cardiovascular center

    Science.gov (United States)

    2011-01-01

    Background The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae (RMCT) has not yet been investigated in a large study. Methods Clinical, hemodynamic, surgical, and pathological findings were reviewed for 242 patients with a preoperative diagnosis of RMCT that required mitral valvular surgery. Subjects were consecutive in-patients at Fuwai Hospital in 2002-2008. Patients were evaluated by transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE). RMCT cases were classified by location as anterior or posterior, and by degree as partial or complete RMCT, according to surgical findings. RMCT cases were also classified by pathology into four groups: myxomatous degeneration, chronic rheumatic valvulitis (CRV), infective endocarditis and others. Results Echocardiography showed that most patients had a flail mitral valve, moderate to severe mitral regurgitation, a dilated heart chamber, mild to moderate pulmonary artery hypertension and good heart function. The diagnostic accuracy for RMCT was 96.7% for TTE and 100% for TEE compared with surgical findings. Preliminary experiments demonstrated that the sensitivity and specificity of diagnosing anterior, posterior and partial RMCT were high, but the sensitivity of diagnosing complete RMCT was low. Surgical procedures for RMCT depended on the location of the ruptured chordae tendineae, with no relationship between surgical procedure and complete or partial RMCT. The echocardiographic characteristics of RMCT included valvular thickening, extended subvalvular chordae, echo enhancement, abnormal echo or vegetation, combined with aortic valve damage, in the four groups classified by pathology. The incidence of extended subvalvular chordae in the myxomatous group was higher than in the other groups, and valve thickening combined with AV damage was more frequent in the CRV group than in the other groups. Infective

  20. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    Science.gov (United States)

    Heidarinejad, Mohammad

    This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies and focuses modeling efforts on the most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is: "Building classification can provide the minimal required inputs for rapid and accurate energy simulations for a large number of buildings." The critical literature review indicated a lack of studies that (1) consider a synoptic point of view rather than a case-study approach, (2) analyze the influence of different granularities of energy use, (3) identify key variables based on the heat transfer processes, and (4) automate the procedure to quantify model complexity against accuracy. Therefore, three dissertation objectives are designed to test the dissertation hypothesis: (1) develop different classes of buildings based on their energy use patterns, (2) develop different building energy simulation approaches for the identified classes of buildings to quantify tradeoffs between model accuracy and complexity, and (3) demonstrate the building simulation approaches for case studies. Penn State's and Harvard's campus buildings, as well as high-performance LEED NC office buildings, are the test beds for this study. The campus buildings include detailed chilled water, electricity, and steam data, enabling buildings to be classified as externally-load, internally-load, or mixed-load dominated. The energy use of internally-load-dominated buildings is primarily a function of the internal loads and their schedules, while externally-load-dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate

  1. Detection of High-Density Crowds in Aerial Images Using Texture Classification

    Directory of Open Access Journals (Sweden)

    Oliver Meynberg

    2016-06-01

    Full Text Available Automatic crowd detection in aerial images is certainly a useful source of information for preventing crowd disasters in large, complex scenarios at mass events. A number of publications employ regression-based methods for crowd counting and crowd density estimation. However, these methods work only when a correct manual count is available to serve as a reference. The objective of this paper is therefore to detect high-density crowds in aerial images, where counting- or regression-based approaches would fail. We compare two texture-classification methodologies on a dataset of aerial image patches which are grouped into ranges of different crowd density. These methodologies are: (1) a Bag-of-Words (BoW) model with two alternative local features encoded as Improved Fisher Vectors and (2) features based on a Gabor filter bank. Our results show that a classifier using either BoW or Gabor features can detect crowded image regions with 97% classification accuracy. In our tests with four classes of different crowd-density ranges, BoW-based features achieved 5%-12% better accuracy than Gabor.
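
    A Gabor filter bank is built from kernels like the one below: a Gaussian envelope modulating an oriented cosine wave. The parameter values in the example are illustrative; a bank varies `theta` and `lam` over several orientations and wavelengths:

    ```python
    import math

    def gabor_kernel_value(x, y, theta, lam, sigma, gamma=0.5, psi=0.0):
        """Value of a real 2-D Gabor filter at (x, y): a Gaussian envelope
        (aspect ratio gamma, scale sigma) modulating a cosine carrier of
        wavelength lam, oriented at angle theta with phase offset psi."""
        xr = x * math.cos(theta) + y * math.sin(theta)
        yr = -x * math.sin(theta) + y * math.cos(theta)
        envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                            / (2 * sigma * sigma))
        carrier = math.cos(2 * math.pi * xr / lam + psi)
        return envelope * carrier

    # The kernel peaks at the origin for psi = 0 and decays away from it.
    print(gabor_kernel_value(0, 0, theta=0.0, lam=4.0, sigma=2.0))
    ```

    Convolving an image patch with each kernel in the bank and pooling the filter responses (e.g. mean energy per kernel) gives the texture feature vector compared against BoW in the study.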

  2. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto imagery, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly on the basis of a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and thus difficult to test and evaluate the horizontal accuracy of the orthophoto image there. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto imagery using testing points with different accuracies and reliabilities, sourced from both high-accuracy reference data and field measurement. The new method solves the horizontal accuracy detection of orthophoto imagery in difficult areas and provides a basis for delivering reliable orthophoto images to users.
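
    One natural way to combine testing points of unequal reliability is a weighted RMSE, where each point's weight reflects the accuracy of its source. The weighting scheme below is an illustrative choice, not the paper's exact formula, and the residuals are invented:

    ```python
    import math

    def weighted_rmse(errors, weights):
        """Weighted root-mean-square horizontal error over testing points,
        with weights reflecting each point's source reliability."""
        num = sum(w * e * e for w, e in zip(weights, errors))
        return math.sqrt(num / sum(weights))

    # Hypothetical horizontal residuals (metres) and reliability weights:
    # high-accuracy reference points weighted 1.0, field points 0.5.
    residuals = [0.4, 0.6, 1.2, 0.8]
    weights = [1.0, 1.0, 0.5, 0.5]
    print(round(weighted_rmse(residuals, weights), 3))
    ```

    With equal weights this reduces to the ordinary RMSE used when all testing points share the same accuracy.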

  3. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...

  4. High Accuracy Positioning using Jet Thrusters for Quadcopter

    Directory of Open Access Journals (Sweden)

    Pi ChenHuan

    2018-01-01

    Full Text Available A quadcopter is equipped with four additional jet thrusters on its horizontal plane, perpendicular to each other, in order to improve the maneuverability and positioning accuracy of the quadcopter. A dynamic model of the quadcopter with jet thrusters is derived, and two controllers are implemented in simulation: a dual-loop state feedback controller for pose control and an auxiliary jet thruster controller for accurate positioning. Step response simulations showed that the jet thrusters can control the quadcopter with less overshoot than the conventional configuration. In a loiter simulation of over 10 s with disturbance, the quadcopter with jet thrusters showed an 85% decrease in the RMS error of horizontal position compared to a conventional quadcopter with only a dual-loop state feedback controller. The jet thruster controller shows the potential for further accuracy improvements in quadcopter positioning.

  5. High-accuracy contouring using projection moiré

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.

    2005-09-01

    Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this, and of the extensive number of papers that have been published on the topic, using shadow moiré as an accurate tool that can compete with alternative devices poses many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments of the projection moiré method are presented, along with comparisons between the results obtained with the projection method and those obtained by mechanical devices that operate with contact probes. These results show that projection moiré makes it possible to achieve the same accuracy that current mechanical touch-probe devices provide.

  6. Chicago classification criteria of esophageal motility disorders defined in high resolution esophageal pressure topography

    NARCIS (Netherlands)

    Bredenoord, A. J.; Fox, M.; Kahrilas, P. J.; Pandolfino, J. E.; Schwizer, W.; Smout, A. J. P. M.; Conklin, Jeffrey L.; Cook, Ian J.; Gyawali, C. Prakash; Hebbard, Geoffrey; Holloway, Richard H.; Ke, Meiyun; Keller, Jutta; Mittal, Ravinder K.; Peters, Jeff; Richter, Joel; Roman, Sabine; Rommel, Nathalie; Sifrim, Daniel; Tutuian, Radu; Valdovinos, Miguel; Vela, Marcelo F.; Zerbib, Frank

    2012-01-01

    Background The Chicago Classification of esophageal motility was developed to facilitate the interpretation of clinical high resolution esophageal pressure topography (EPT) studies, concurrent with the widespread adoption of this technology into clinical practice. The Chicago Classification has been

  7. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

The interferometric coherence map derived from the cross-correlation of two co-registered complex synthetic aperture radar (SAR) images reflects the properties of the imaged targets. In many applications it can act as an independent information source, or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the large body of work on the statistics of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, without discriminating between data resolutions or scene types, yet the properties of the coherence may differ across resolutions and scene types. In this paper, we investigate the coherence statistics of high-resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. We then model the statistics of the pixel coherence for each type of region with several distributions, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model performs better than the other distributions.

  8. MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

Full Text Available The interferometric coherence map derived from the cross-correlation of two co-registered complex synthetic aperture radar (SAR) images reflects the properties of the imaged targets. In many applications it can act as an independent information source, or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the large body of work on the statistics of SAR intensity, there has been far less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, without discriminating between data resolutions or scene types, yet the properties of the coherence may differ across resolutions and scene types. In this paper, we investigate the coherence statistics of high-resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. We then model the statistics of the pixel coherence for each type of region with several distributions, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
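The model comparison described in this abstract can be sketched as follows: fit each candidate distribution to per-class coherence samples by maximum likelihood and rank the fits by log-likelihood. The data here are synthetic stand-ins (a Beta-distributed sample) rather than TanDEM-X pixels, and the scoring criterion is our assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "building-class" coherence values in (0, 1), skewed high.
coh = rng.beta(a=5.0, b=2.0, size=2000)

candidates = {
    "gaussian": stats.norm,
    "rayleigh": stats.rayleigh,
    "weibull": stats.weibull_min,
    "beta": stats.beta,
    "nakagami": stats.nakagami,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(coh)                      # maximum-likelihood fit
    scores[name] = float(np.sum(dist.logpdf(coh, *params)))

best = max(scores, key=scores.get)
print(best)  # for Beta-distributed data the Beta model should win
```

The same loop applied per land class (buildings, trees, shadow, roads) would reproduce the kind of per-region accuracy comparison the paper reports.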

  9. Improving urban land use and land cover classification from high-spatial-resolution hyperspectral imagery using contextual information

    Science.gov (United States)

    Yang, He; Ma, Ben; Du, Qian; Yang, Chenghai

    2010-08-01

In this paper, we propose approaches to improve pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationships are used to correct commonly confused class pairs, such as roof and trail, or road and roof. These classes can be difficult to separate because they may have similar spectral signatures, and their spatial features are not distinct enough to aid discrimination. In addition, misclassifications caused by trivial within-class spectral variation are corrected using pixel connectivity information in a local window, so that spectrally homogeneous regions are well preserved. Our experimental results demonstrate that the proposed approaches improve classification accuracy, with overall performance competitive with object-based SVM classification.
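A minimal sketch of the local-window cleanup idea mentioned above (our own illustration, not the authors' exact method): reassign each pixel to the majority label inside a small window, which removes isolated misclassifications caused by trivial spectral variation.

```python
import numpy as np

def majority_filter(label_map, window=3):
    """Replace each pixel label with the most frequent label in its window."""
    pad = window // 2
    padded = np.pad(label_map, pad, mode="edge")
    out = np.empty_like(label_map)
    h, w = label_map.shape
    for i in range(h):
        for j in range(w):
            block = padded[i:i + window, j:j + window].ravel()
            out[i, j] = np.bincount(block).argmax()
    return out

# A 5x5 map of class 1 with two isolated "salt" errors of class 2.
labels = np.ones((5, 5), dtype=int)
labels[1, 1] = 2
labels[3, 4] = 2
cleaned = majority_filter(labels)
print(cleaned.sum())  # all pixels back to class 1 -> sum is 25
```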

  10. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game, which are closely related to iterated function systems. The goal of the algorithm is to create a human-readable representation of high dimensional patient data that is capable of detecting unrevealed subclusters of patients within anticipated classifications, providing a mechanism for more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical-system portion of the algorithm is designed to come after a feature selection filter and before a model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification), and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning, as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
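The chaos-game memory mechanism the abstract builds on can be illustrated in a few lines: a point is repeatedly moved halfway toward the corner selected by the next symbol in a sequence, so the final 2D position "remembers" the order of the inputs. The 4-corner layout and the idea of binning feature values into symbols are our assumptions for illustration, not the Butterfly code.

```python
import numpy as np

CORNERS = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])

def chaos_game_trace(symbols):
    """Map a sequence of symbols in {0,1,2,3} to a 2D point trajectory."""
    pt = np.array([0.5, 0.5])
    trace = []
    for s in symbols:
        pt = (pt + CORNERS[s]) / 2.0   # move halfway toward chosen corner
        trace.append(pt.copy())
    return np.array(trace)

# Two subjects with identical value histograms but different orderings
# end at different points: the memory effect exploited for clustering.
a = chaos_game_trace([0, 1, 2, 3])
b = chaos_game_trace([3, 2, 1, 0])
print(np.allclose(a[-1], b[-1]))  # False: order matters
```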

  11. Compact, High Accuracy CO2 Monitor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase I proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  12. Compact, High Accuracy CO2 Monitor, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase II proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  13. High-accuracy Subdaily ERPs from the IGS

    Science.gov (United States)

    Ray, J. R.; Griffiths, J.

    2012-04-01

    Since November 2000 the International GNSS Service (IGS) has published Ultra-rapid (IGU) products for near real-time (RT) and true real-time applications. They include satellite orbits and clocks, as well as Earth rotation parameters (ERPs) for a sliding 48-hr period. The first day of each update is based on the most recent GPS and GLONASS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions. So the predictions between about 3 and 9 hr into the second half are relevant for true RT uses. Originally updated twice daily, the IGU products since April 2004 have been issued every 6 hr, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations. Two sets of ERPs are published with each IGU update, observed values at the middle epoch of the first half and predicted values at the middle epoch of the second half. The latency of the near RT ERPs is 15 hr while the predicted ERPs, based on projections of each AC's most recent determinations, are issued 9 hr ahead of their reference epoch. While IGU ERPs are issued every 6 hr, each set represents an integrated estimate over the surrounding 24 hr. So successive values are temporally correlated with about 75% of the data being common; this fact should be taken into account in user assimilations. To evaluate the accuracy of these near RT and predicted ERPs, they have been compared to the IGS Final ERPs, available about 11 to 17 d after data collection. The IGU products improved dramatically in the earlier years but since about 2008.0 the performance has been stable and excellent. During the last three years, RMS differences for the observed IGU ERPs have been about 0.036 mas and 0.0101 ms for each polar motion component and LOD respectively. (The internal precision of the reference IGS ERPs over the same period is about 0.016 mas for polar motion and 0

  14. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    Science.gov (United States)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray-level co-occurrence matrix (GLCM) as representatives of shallow spectral and textural features, and the other refers to deep learning-based methods, which employ deep neural networks and have greatly improved classification accuracy. However, the traditional shallow features are insufficient to depict the complex distributions of high-resolution images, while the deep features demand plenty of samples to train the network, and overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and perform classification for high-resolution remote sensing images. The GLCM is able to represent the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the limited amount of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as PaviaU validate that the proposed method leads to a performance improvement compared to the individual approaches involved.
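The GLCM front end described above can be sketched as follows; this is our own minimal illustration, not the paper's implementation. The 4-level quantization and single (0, 1) offset are simplifying assumptions, and in the paper's setting a matrix like this would be computed per patch and fed to the CNN instead of the raw pixels.

```python
import numpy as np

def glcm(img, levels=4, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix of a quantized image."""
    # Quantize to `levels` gray levels (the tiny epsilon keeps max in range).
    q = np.floor(img.astype(float) / img.max() * (levels - 1e-9)).astype(int)
    di, dj = offset
    h, w = q.shape
    mat = np.zeros((levels, levels))
    for i in range(h - di):
        for j in range(w - dj):
            mat[q[i, j], q[i + di, j + dj]] += 1   # count co-occurring pair
    return mat / mat.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=float)
P = glcm(img, levels=4)
print(round(P.sum(), 6))  # 1.0 -- a valid co-occurrence probability matrix
```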

  15. Accuracy of Handheld Blood Glucose Meters at High Altitude

    NARCIS (Netherlands)

    de Mol, Pieter; Krabbe, Hans G.; de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.

    2010-01-01

    Background: Due to increasing numbers of people with diabetes taking part in extreme sports (e. g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior

  16. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...

  17. Assessment of fatty degeneration of the gluteal muscles in patients with THA using MRI: reliability and accuracy of the Goutallier and quartile classification systems.

    Science.gov (United States)

Engelken, Florian; Wassilew, Georgi I; Köhlitz, Torsten; Brockhaus, Sebastian; Hamm, Bernd; Perka, Carsten; Diederichs, Gerd

    2014-01-01

The purpose of this study was to quantify the performance of the Goutallier classification for assessing fatty degeneration of the gluteal muscles from magnetic resonance (MR) images and to compare it to a newly proposed system. Eighty-four hips with clinical signs of gluteal insufficiency and 50 hips from asymptomatic controls were analyzed using a standard classification system (Goutallier) and a new scoring system (Quartile). Interobserver reliability and intraobserver repeatability were determined, and accuracy was assessed by comparing readers' scores with quantitative estimates of the proportion of intramuscular fat based on MR signal intensities (gold standard). The existing Goutallier classification system and the new Quartile system performed equally well in assessing fatty degeneration of the gluteal muscles, both showing excellent levels of interrater and intrarater agreement. While the Goutallier classification system has the advantage of being widely known, the benefit of the Quartile system is that it is based on more clearly defined grades of fatty degeneration.
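The quantitative gold standard and the quartile-style grading can be sketched as follows. Both the signal model FF = F / (F + W) and the 25%-wide grade bins are our assumptions for illustration, not the authors' exact protocol.

```python
def fat_fraction(fat_signal, water_signal):
    """Intramuscular fat proportion estimated from fat/water MR signals."""
    return fat_signal / (fat_signal + water_signal)

def quartile_grade(ff):
    """Hypothetical Quartile grade: 1 for 0-25% fat, 2 for 25-50%,
    3 for 50-75%, 4 for 75-100%."""
    return min(int(ff * 4) + 1, 4)

ff = fat_fraction(fat_signal=30.0, water_signal=70.0)
print(quartile_grade(ff))  # 30% fat falls in the 25-50% bin -> grade 2
```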

  18. Reduced Set of Virulence Genes Allows High Accuracy Prediction of Bacterial Pathogenicity in Humans

    Science.gov (United States)

    Iraola, Gregorio; Vazquez, Gustavo; Spangenberg, Lucía; Naya, Hugo

    2012-01-01

Although there have been great advances in understanding bacterial pathogenesis, there is still a lack of integrative information about what makes a bacterium a human pathogen. The advent of high-throughput sequencing technologies has dramatically increased the number of completed bacterial genomes, for both known human pathogenic and non-pathogenic strains; this information can now be used to investigate the genetic features that determine pathogenic phenotypes in bacteria. In this work we determined presence/absence patterns of different virulence-related genes among more than finished bacterial genomes from both human pathogenic and non-pathogenic strains belonging to different taxonomic groups (i.e., Actinobacteria, Gammaproteobacteria, Firmicutes, etc.). An accuracy of 95% is obtained when classifying human pathogens and non-pathogens with a cross-fold validation scheme using in-fold feature selection. A reduced subset of highly informative genes () is presented and applied to an external validation set. The statistical model was implemented in the BacFier v1.0 software (freely available at ), which displays not only the prediction (pathogen/non-pathogen) and an associated probability of pathogenicity, but also the presence/absence vector for the analyzed genes, making it possible to decipher the subset of virulence genes responsible for the classification of the analyzed genome. Furthermore, we discuss the biological relevance for bacterial pathogenesis of the core set of genes, corresponding to eight functional categories, all with evident and documented association with the phenotypes of interest. We also analyze which functional categories of virulence genes are most distinctive for pathogenicity in each taxonomic group, which appears to be a completely new kind of information and could lead to important evolutionary conclusions. PMID:22916122
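The classification setup can be reconstructed as a toy experiment: genomes become binary presence/absence vectors over virulence genes, and a simple classifier is scored by k-fold cross-validation. The synthetic data and the nearest-centroid classifier are our stand-ins for the (unspecified) statistical model behind BacFier.

```python
import numpy as np

rng = np.random.default_rng(1)
n, genes = 200, 30
y = rng.integers(0, 2, size=n)                 # 1 = human pathogen
# Pathogens carry each virulence gene with higher probability.
probs = np.where(y[:, None] == 1, 0.7, 0.3)
X = (rng.random((n, genes)) < probs).astype(float)

def cross_val_accuracy(X, y, k=5):
    """k-fold CV accuracy of a nearest-centroid presence/absence classifier."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        c1 = X[train][y[train] == 1].mean(axis=0)   # pathogen centroid
        c0 = X[train][y[train] == 0].mean(axis=0)   # non-pathogen centroid
        d1 = ((X[f] - c1) ** 2).sum(axis=1)
        d0 = ((X[f] - c0) ** 2).sum(axis=1)
        accs.append(np.mean((d1 < d0) == (y[f] == 1)))
    return float(np.mean(accs))

acc = cross_val_accuracy(X, y)
print(acc > 0.8)  # a strong presence/absence signal separates the classes
```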

  19. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering high simulation throughput while at the same time guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed within a factor of 35 of actual hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between simulated and measured timing performance is within 10%, which previously could only be attained by cycle-accurate models.

  20. Impact of a highly detailed emission inventory on modeling accuracy

    Science.gov (United States)

    Taghavi, M.; Cautenet, S.; Arteta, J.

    2005-03-01

During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) were simulated. The comprehensive Regional Atmospheric Modeling System (RAMS), version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over Southern France. This online method takes advantage of a parallel code and of the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre Inventory (EPI) and the Main Emission Inventory (MEI); the latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at the regional scale. We show that the MEI inventory is more effective than the EPI at retrieving the redistribution of chemical species in space (three dimensions) and time. At surface stations, MEI is superior especially for primary species such as nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI carry a large uncertainty. To capture the realistic geographical distribution of pollutants and to obtain the right order of magnitude of ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.

  1. Analyzing the diagnostic accuracy of the causes of spinal pain at neurology hospital in accordance with the International Classification of Diseases

    Directory of Open Access Journals (Sweden)

    I. G. Mikhailyuk

    2014-01-01

Full Text Available Spinal pain is of great socioeconomic significance, as it is widely prevalent and a common cause of disability. However, identifying its true causes is often problematic. A study was conducted to evaluate the accuracy of clinical diagnoses and their coding in conformity with the International Classification of Diseases. The diagnosis of vertebral osteochondrosis was found to be used unreasonably widely, while nonspecific and nonvertebrogenic pain syndromes were underdiagnosed. Ways to solve these problems are proposed, by applying internationally accepted approaches to diagnosing the causes of spinal pain.

  2. Switched-capacitor techniques for high-accuracy filter and ADC design

    NARCIS (Netherlands)

    Quinn, P.J.; Roermund, van A.H.M.

    2007-01-01

    Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor

  3. High accuracy laboratory spectroscopy to support active greenhouse gas sensing

    Science.gov (United States)

    Long, D. A.; Bielska, K.; Cygan, A.; Havey, D. K.; Okumura, M.; Miller, C. E.; Lisak, D.; Hodges, J. T.

    2011-12-01

    Recent carbon dioxide (CO2) remote sensing missions have set precision targets as demanding as 0.25% (1 ppm) in order to elucidate carbon sources and sinks [1]. These ambitious measurement targets will require the most precise body of spectroscopic reference data ever assembled. Active sensing missions will be especially susceptible to subtle line shape effects as the narrow bandwidth of these measurements will greatly limit the number of spectral transitions which are employed in retrievals. In order to assist these remote sensing missions we have employed frequency-stabilized cavity ring-down spectroscopy (FS-CRDS) [2], a high-resolution, ultrasensitive laboratory technique, to measure precise line shape parameters for transitions of O2, CO2, and other atmospherically-relevant species within the near-infrared. These measurements have led to new HITRAN-style line lists for both 16O2 [3] and rare isotopologue [4] transitions in the A-band. In addition, we have performed detailed line shape studies of CO2 transitions near 1.6 μm under a variety of broadening conditions [5]. We will address recent measurements in these bands as well as highlight recent instrumental improvements to the FS-CRDS spectrometer. These improvements include the use of the Pound-Drever-Hall locking scheme, a high bandwidth servo which enables measurements to be made at rates greater than 10 kHz [6]. In addition, an optical frequency comb will be utilized as a frequency reference, which should allow for transition frequencies to be measured with uncertainties below 10 kHz (3×10-7 cm-1). [1] C. E. Miller, D. Crisp, P. L. DeCola, S. C. Olsen, et al., J. Geophys. Res.-Atmos. 112, D10314 (2007). [2] J. T. Hodges, H. P. Layer, W. W. Miller, G. E. Scace, Rev. Sci. Instrum. 75, 849-863 (2004). [3] D. A. Long, D. K. Havey, M. Okumura, C. E. Miller, et al., J. Quant. Spectrosc. Radiat. Transfer 111, 2021-2036 (2010). [4] D. A. Long, D. K. Havey, S. S. Yu, M. Okumura, et al., J. Quant. Spectrosc

  4. KINEMATIC CLASSIFICATIONS OF LOCAL INTERACTING GALAXIES: IMPLICATIONS FOR THE MERGER/DISK CLASSIFICATIONS AT HIGH-z

    International Nuclear Information System (INIS)

    Hung, Chao-Ling; Larson, Kirsten L.; Sanders, D. B.; Rich, Jeffrey A.; Yuan, Tiantian; Kewley, Lisa J.; Casey, Caitlin M.; Smith, Howard A.; Hayward, Christopher C.

    2015-01-01

    The classification of galaxy mergers and isolated disks is key for understanding the relative importance of galaxy interactions and secular evolution during the assembly of galaxies. Galaxy kinematics as traced by emission lines have been used to suggest the existence of a significant population of high-z star-forming galaxies consistent with isolated rotating disks. However, recent studies have cautioned that post-coalescence mergers may also display disk-like kinematics. To further investigate the robustness of merger/disk classifications based on kinematic properties, we carry out a systematic classification of 24 local (U)LIRGs spanning a range of morphologies: from isolated spiral galaxies, ongoing interacting systems, to fully merged remnants. We artificially redshift the Wide Field Spectrograph observations of these local (U)LIRGs to z = 1.5 to make a realistic comparison with observations at high-z, and also to ensure that all galaxies have the same spatial sampling of ∼900 pc. Using both kinemetry-based and visual classifications, we find that the reliability of kinematic classification shows a strong trend with the interaction stage of galaxies. Mergers with two nuclei and tidal tails have the most distinct kinematics compared to isolated disks, whereas a significant population of the interacting disks and merger remnants are indistinguishable from isolated disks. The high fraction of mergers displaying disk-like kinematics reflects the complexity of the dynamics during galaxy interactions. Additional merger indicators such as morphological properties traced by stars or molecular gas are required to further constrain the merger/disk classifications at high-z

  5. The accuracy of QCD perturbation theory at high energies

    CERN Document Server

    Dalla Brida, Mattia; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2016-01-01

We discuss the determination of the strong coupling $\alpha_\mathrm{\overline{MS}}(m_\mathrm{Z})$ or equivalently the QCD $\Lambda$-parameter. Its determination requires the use of perturbation theory in $\alpha_s(\mu)$ in some scheme, $s$, and at some energy scale $\mu$. The higher the scale $\mu$ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the $\Lambda$-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to $\alpha_s = 0.1$ and below. We find that perturbation theory is very accurate there, yielding a three percent error in the $\Lambda$-parameter, while data around $\alpha_s \approx 0.2$ is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
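The asymptotic freedom invoked above can be illustrated with the textbook one-loop running of the coupling; this leading-order formula is only an illustration of why higher scales are more perturbative, not the non-perturbative lattice scheme of the paper, and the values Lambda = 0.34 GeV and nf = 3 are illustrative choices.

```python
import math

def alpha_s_one_loop(mu_gev, lam_gev=0.34, nf=3):
    """One-loop running coupling alpha_s(mu) = 1 / (b0 * ln(mu^2/Lambda^2))."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(mu_gev ** 2 / lam_gev ** 2))

# The coupling shrinks logarithmically with the scale, approaching the
# alpha_s ~ 0.1 regime the authors reach non-perturbatively.
print(alpha_s_one_loop(2.0) > alpha_s_one_loop(100.0))  # asymptotic freedom
```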

  6. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

Landscape structures and processes show different characteristics at different scales. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification, taking the Shangri-La area in north-western Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. First, the authors upscaled the two images by cubic convolution and calculated, using variation functions, the optimal scale for the different ground objects shown in the images. They then conducted multi-scale superposition classification using Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale appears at a larger scale rather than the original one. Specifically, water has the largest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scale for shadows and flood land is basically the same as the original one, i.e. 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of those from SPOT5 HRG and GF-1 is 12.84% and 14.76% higher than that of the original multi-spectral images, respectively, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
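The "variation function" (semivariogram) step used above to pick an optimal scale can be sketched as follows: gamma(h) is half the mean squared difference of values at lag h, and the lag where gamma levels off suggests the characteristic scale of a land-cover class. The 1D synthetic transect here is our own illustration, not the SPOT5/GF-1 data.

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semivariogram gamma(h) for lags h = 1..max_lag."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

rng = np.random.default_rng(2)
# Smooth synthetic field: values correlated over roughly 5 pixels.
raw = rng.standard_normal(500)
field = np.convolve(raw, np.ones(5) / 5, mode="valid")

g = semivariogram(field, max_lag=10)
print(g[0] < g[9])  # variance grows with lag until the range is reached
```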

  7. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    Science.gov (United States)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.

  8. Contextually guided very-high-resolution imagery classification with semantic segments

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are often partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments are first extracted with convolutional neural networks (CNN), and a conditional random field (CRF) model is then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  9. Experimental study on multi-sub-classifier for land cover classification: a case study in Shangri-La, China

    Science.gov (United States)

    Wang, Yan-ying; Wang, Jin-liang; Wang, Ping; Hu, Wen-yin; Su, Shao-hua

    2015-12-01

    High-accuracy classification of remotely sensed images is a long-standing goal of remote sensing applications. To evaluate the accuracy of single classification algorithms, a Landsat TM image was taken as the data source, Northwest Yunnan as the study area, and seven land cover types were classified with classifiers such as Maximum Likelihood Classification. The results show that: (1) the overall classification accuracies of Maximum Likelihood Classification (MLC), Artificial Neural Network Classification (ANN), and Minimum Distance Classification (MinDC) are higher, at 82.81%, 82.26%, and 66.41%, respectively; the overall classification accuracies of Parallelepiped Classification (Para), Spectral Information Divergence Classification (SID), and Spectral Angle Mapper Classification (SAM) are lower, at 37.29%, 38.37%, and 53.73%, respectively. (2) In terms of per-category accuracy: although the overall accuracy of Para is the lowest, it is much higher for grasslands, wetlands, forests, and airport land, at 89.59%, 94.14%, and 89.04% for the first three, respectively; SAM and SID are good at forest classification, with accuracies of 89.8% and 87.98%, respectively. Although the overall classification accuracy of ANN is very high, its accuracies for roads, rural residential land, and airport land are very low, at 10.59%, 11%, and 11.59%, respectively. The other classification methods have their own advantages and disadvantages. These results show that, under the same conditions, different classifiers applied to the same image excel at different features: one classifier achieves higher accuracy for some features, another for others. Therefore, an integration of multiple sub-classifiers may be selected to improve the classification accuracy.
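The multi-sub-classifier integration suggested in the conclusion can be sketched as a per-pixel majority vote over the label maps produced by the individual classifiers. The sketch below is a minimal illustration with made-up toy predictions (`mlc`, `ann`, `para` are not the study's outputs); ties are broken in favour of the classifier listed first.

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """Fuse per-pixel label maps from several sub-classifiers.

    predictions: list of 1-D integer label arrays, one per classifier.
    Ties are broken in favour of the label produced by the earliest
    classifier in the list.
    """
    stacked = np.stack(predictions)            # (n_classifiers, n_pixels)
    fused = np.empty(stacked.shape[1], dtype=int)
    for i in range(stacked.shape[1]):
        votes = Counter(stacked[:, i])
        best = max(votes.values())
        top_labels = {lab for lab, c in votes.items() if c == best}
        # first classifier's label among the top-voted candidates
        fused[i] = next(lab for lab in stacked[:, i] if lab in top_labels)
    return fused

# Toy example: three classifiers labelling five pixels
mlc  = np.array([1, 2, 2, 3, 1])
ann  = np.array([1, 2, 3, 3, 2])
para = np.array([2, 2, 3, 1, 1])
print(majority_vote([mlc, ann, para]))  # [1 2 3 3 1]
```

In practice the vote could also be weighted by each classifier's per-category accuracy, so that, for example, Para gets a larger say on wetlands and SAM on forests.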

  10. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement, used here to monitor the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimetre accuracy is achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of the shaking table structure.

  11. High accuracy of arterial spin labeling perfusion imaging in differentiation of pilomyxoid from pilocytic astrocytoma

    Energy Technology Data Exchange (ETDEWEB)

    Nabavizadeh, S.A.; Assadsangabi, R.; Hajmomenian, M.; Vossough, A. [Perelman School of Medicine of the University of Pennsylvania, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA (United States); Santi, M. [Perelman School of Medicine of the University of Pennsylvania, Department of Pathology, Children's Hospital of Philadelphia, Philadelphia, PA (United States)

    2015-05-01

    Pilomyxoid astrocytoma (PMA) is a relatively new tumor entity which was added to the 2007 WHO Classification of tumors of the central nervous system. The goal of this study is to utilize arterial spin labeling (ASL) perfusion imaging to differentiate PMA from pilocytic astrocytoma (PA). Pulsed ASL and conventional MRI sequences of patients with PMA and PA in the past 5 years were retrospectively evaluated. Patients with a history of radiation or treatment with anti-angiogenic drugs were excluded. A total of 24 patients (9 PMA, 15 PA) were included. There were statistically significant differences between PMA and PA in mean tumor/gray matter (GM) cerebral blood flow (CBF) ratios (1.3 vs 0.4, p < 0.001) and maximum tumor/GM CBF ratio (2.3 vs 1, p < 0.001). The area under the receiver operating characteristic (ROC) curve for differentiation of PMA from PA was 0.91 using mean tumor CBF, 0.95 using the mean tumor/GM CBF ratio, and 0.89 using the maximum tumor/GM CBF ratio. Using a threshold value of 0.91, the mean tumor/GM CBF ratio diagnosed PMA with 77% sensitivity and 100% specificity; a threshold value of 0.7 provided 88% sensitivity and 86% specificity. There was no statistically significant difference between the two tumors in enhancement pattern (p = 0.33), internal architecture (p = 0.15), or apparent diffusion coefficient (ADC) values (p = 0.07). ASL imaging has high accuracy in differentiating PMA from PA. The results of this study may have important applications in prognostication and treatment planning, especially in patients with less accessible tumors such as hypothalamic-chiasmatic gliomas. (orig.)
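The threshold analysis reported above (sensitivity and specificity at a CBF-ratio cut-off) follows a standard recipe that can be sketched generically. The ratios and labels below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity for the rule 'score >= threshold -> positive'.

    labels: 1 for PMA (positive class), 0 for PA.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))   # PMA correctly flagged
    fn = np.sum(~pred & (labels == 1))  # PMA missed
    tn = np.sum(~pred & (labels == 0))  # PA correctly passed
    fp = np.sum(pred & (labels == 0))   # PA wrongly flagged
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic mean tumour/GM CBF ratios (illustrative only)
ratios = [1.4, 1.2, 0.9, 1.1, 0.8, 0.4, 0.6, 0.3]
labels = [1,   1,   1,   1,   0,   0,   0,   0  ]
sens, spec = sens_spec(ratios, labels, threshold=0.7)
print(sens, spec)  # 1.0 0.75
```

Sweeping the threshold over all observed scores traces out the ROC curve whose area the abstract reports.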

  12. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    Science.gov (United States)

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The…

  13. High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS

    OpenAIRE

    O.S.Kushnir; O.A.Bevz; O.G.Vlokh

    2000-01-01

    Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with a high-accuracy null-polarimetric technique. The behaviour of the effect in the ferroelectric phase is attributed to quadratic spontaneous electrooptics.

  14. The Utilization of Classifications in High-Energy Astrophysics Experiments

    Science.gov (United States)

    Atwood, Bill

    2012-03-01

    The history of high-energy gamma-ray observations stretches back several decades. But it was with the launch of the Energetic Gamma Ray Experiment Telescope (EGRET) in 1991 onboard the Compton Gamma Ray Observatory (CGRO) [1] that the field entered a new era of discovery. At the high-energy end of the electromagnetic spectrum, incoming particles of light, photons, interact with matter mainly by producing electron-positron pairs, and this process dominates above an energy of 10–30 MeV, depending on the material. To a high degree the directionality of the incoming gamma ray is reflected in the e+ and e-, and hence the detection of the trajectories of the e+e- pair can be used to infer the direction of the originating photon. Measuring these high-energy charged particles is the domain of high-energy particle physics, so it should be of little surprise that particle physicists played a significant role in the design and construction of EGRET, as well as the design and implementation of analysis methods for the resulting data. Prior to EGRET, only a handful of sources in the sky were known as high-energy gamma-ray emitters. During EGRET's 9-year mission the final catalog included over 270 sources, including new types such as Gamma Ray Bursts (GRBs). This set the stage for the next-generation mission, the Gamma-ray Large Area Space Telescope (GLAST) [2]. Very early in the EGRET mission, the realization that the high-energy gamma-ray sky was extremely interesting led to a competition to develop the next-generation instruments. The technology used in EGRET was frozen in the late 1970s, and by 1992 enormous advances had been made in experimental particle physics. In particular the effort to develop solid state detectors, targeted for use at the Superconducting Super Collider (SSC), had made the technology of silicon strip detectors (SSDs) commercially viable for use in large area arrays. 
Given the limitations imposed by the space environment (e.g., operate in a vacuum, scarce

  15. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)
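The kappa coefficients reported above measure chance-corrected agreement between readers. A minimal sketch of unweighted Cohen's kappa, computed on hypothetical Dodge-type stage ratings from two readers (the arrays are invented, not the study's ratings):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                   # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical stage ratings from two readers over eight MRIs
a = [1, 1, 2, 2, 3, 3, 1, 2]
b = [1, 1, 2, 3, 3, 3, 1, 1]
print(round(cohens_kappa(a, b), 3))  # 0.628
```

Values near 0.11, as the first review round produced, indicate agreement barely above chance; values near 1 indicate the near-perfect agreement reached with the revised protocol.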

  16. Classification of semiurban landscapes from very high-resolution satellite images using a regionalized multiscale segmentation approach

    Science.gov (United States)

    Kavzoglu, Taskin; Erdemir, Merve Yildiz; Tonbul, Hasan

    2017-07-01

    In object-based image analysis, obtaining representative image objects is an important prerequisite for a successful image classification. The major challenge is scale selection, owing to the complex spatial structure of landscapes portrayed in an image. This study proposes a two-stage approach to conduct regionalized multiscale segmentation. In the first stage, an initial high-level segmentation is applied at a "broad scale," and a set of image objects characterizing natural borders of the landscape features is extracted. Contiguous objects are then merged to create regions by considering the resemblance of their normalized difference vegetation index values. In the second stage, optimal scale values are estimated for the extracted regions, and multiresolution segmentation is applied with these settings. Two satellite images with different spatial and spectral resolutions were utilized to test the effectiveness of the proposed approach and its transferability to different geographical sites. Results were compared to those of image-based single-scale segmentation, and it was found that the proposed approach outperformed the single-scale segmentations. Using the proposed methodology, significant improvement in terms of segmentation quality and classification accuracy (up to 5%) was achieved. In addition, the highest classification accuracies were produced using fine-scale values.
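The NDVI-resemblance merging step in the first stage can be sketched as: compute NDVI per segment, then merge contiguous segments whose mean NDVI values lie within a tolerance. The tolerance and the pixel values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index per pixel; the small
    epsilon guards against division by zero on dark pixels."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def should_merge(ndvi_a, ndvi_b, tol=0.1):
    """Merge contiguous objects whose mean NDVI values resemble each
    other (tolerance is an illustrative choice)."""
    return abs(ndvi_a.mean() - ndvi_b.mean()) < tol

# Three toy segments: a and b are vegetated, c is bare/built-up
a = ndvi(np.array([50, 60]),   np.array([200, 210]))
b = ndvi(np.array([55, 58]),   np.array([205, 190]))
c = ndvi(np.array([180, 190]), np.array([60, 70]))
print(should_merge(a, b), should_merge(a, c))  # True False
```

Merged regions then each receive their own optimal segmentation scale in the second stage.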

  17. Retrospective assessment of interobserver agreement and accuracy in classifications and measurements in subsolid nodules with solid components less than 8 mm: which window setting is better?

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Roh-Eul [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of); Hwang, Eui Jin; Yoon, Soon Ho; Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Ahn, Soyeon [Seoul National University Bundang Hospital, Medical Research Collaborating Center, Seongnam-si (Korea, Republic of)

    2017-04-15

    To compare interobserver agreement among multiple readers and accuracy for the assessment of solid components in subsolid nodules between the lung and mediastinal window settings. Seventy-seven surgically resected nodules with solid components smaller than 8 mm were included in this study. In both lung and mediastinal windows, five readers independently assessed the presence and size of a solid component. Bootstrapping was used to compare the interobserver agreement between the two window settings. Imaging-pathology correlation was performed to evaluate the accuracy. There were no significant differences in the interobserver agreements between the two windows for either identification (lung windows, k = 0.51; mediastinal windows, k = 0.57) or measurement (lung windows, ICC = 0.70; mediastinal windows, ICC = 0.69) of solid components. The incidence of false negative results for the presence of invasive components and the median absolute difference between the solid component size and the invasive component size were significantly higher on mediastinal windows than on lung windows (P < 0.001 for both). The lung window setting had comparable reproducibility but higher accuracy than the mediastinal window setting for nodule classification and solid component measurement in subsolid nodules. (orig.)
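The bootstrap comparison of agreement between the two window settings can be sketched as resampling nodules with replacement and recomputing the agreement difference each time. The agreement indicators below are synthetic (the study bootstrapped kappa/ICC over its 77 nodules; a simple per-nodule agreement rate stands in here).

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_diff_ci(agree_a, agree_b, n_boot=2000):
    """95% bootstrap CI for the difference in per-nodule agreement
    rates between two window settings (paired by nodule)."""
    a, b = np.asarray(agree_a), np.asarray(agree_b)
    n = len(a)
    diffs = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)        # resample nodules with replacement
        diffs[k] = a[idx].mean() - b[idx].mean()
    return np.percentile(diffs, [2.5, 97.5])

# Synthetic agreement indicators for 77 nodules (not the study's data):
# True where the readers agreed on the nodule under that window setting.
lung = rng.random(77) < 0.70
medi = rng.random(77) < 0.65
lo, hi = bootstrap_diff_ci(lung, medi)
print(f"95% CI for agreement difference: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that straddles zero corresponds to the study's finding of no significant agreement difference between windows.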

  18. Quantification of CT images for the classification of high- and low-risk pancreatic cysts

    Science.gov (United States)

    Gazit, Lior; Chakraborty, Jayasree; Attiyeh, Marc; Langdon-Embry, Liana; Allen, Peter J.; Do, Richard K. G.; Simpson, Amber L.

    2017-03-01

    Pancreatic cancer is the most lethal cancer with an overall 5-year survival rate of 7% [1] due to the late stage at diagnosis and the ineffectiveness of current therapeutic strategies. Given the poor prognosis, early detection at a pre-cancerous stage is the best tool for preventing this disease. Intraductal papillary mucinous neoplasms (IPMN), cystic tumors of the pancreas, represent the only radiographically identifiable precursor lesion of pancreatic cancer and are known to evolve stepwise from low-to-high-grade dysplasia before progressing into an invasive carcinoma. Observation is usually recommended for low-risk (low- and intermediate-grade dysplasia) patients, while high-risk (high-grade dysplasia and invasive carcinoma) patients undergo resection; hence, patient selection is critically important in the management of pancreatic cysts [2]. Radiologists use standard criteria such as main pancreatic duct size, cyst size, or presence of a solid enhancing component in the cyst to optimally select patients for surgery [3]. However, these findings are subject to a radiologist's interpretation and have been shown to be inconsistent with regards to the presence of a mural nodule or solid component [4]. We propose objective classification of risk groups based on quantitative imaging features extracted from CT scans. We apply new features that represent the solid component (i.e. areas of high intensity) within the cyst and extract standard texture features. An adaptive boost classifier [5] achieves the best performance with area under receiver operating characteristic curve (AUC) of 0.73 and accuracy of 77.3% for texture features. The random forest classifier achieves the best performance with AUC of 0.71 and accuracy of 70.8% with the solid component features.
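The AUC figures quoted above can be computed directly from classifier scores via the Mann-Whitney formulation: AUC is the probability that a randomly chosen high-risk case outscores a randomly chosen low-risk case, with ties counting half. The scores below are toy values, not features from the study's CT scans.

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as the probability that a random positive (high-risk) case
    outscores a random negative (low-risk) case; ties count half."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy risk scores for 4 high-risk (1) and 4 low-risk (0) cysts
scores = [0.9, 0.8, 0.35, 0.6, 0.4, 0.2, 0.5, 0.1]
labels = [1,   1,   1,    1,   0,   0,   0,   0  ]
print(auc_mann_whitney(scores, labels))  # 0.875
```

An AUC of 0.5 would mean the scores carry no ranking information; the study's 0.71-0.73 sits between chance and perfect separation.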

  19. Classification Methods for High-Dimensional Genetic Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 34, č. 1 (2014), s. 10-18 ISSN 0208-5216 Institutional support: RVO:67985807 Keywords : multivariate statistics * classification analysis * shrinkage estimation * dimension reduction * data mining Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.646, year: 2014

  20. The effect of pattern overlap on the accuracy of high resolution electron backscatter diffraction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2015-08-15

    High resolution, cross-correlation-based, electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest but overlap of patterns from the two grains could reduce accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary so that the effect on the accuracy of the cross correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for a 200 nm step size in Zircaloy-4.
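The pattern-mixing test described above can be sketched numerically: blend a reference pattern with an increasing fraction of a second grain's pattern and watch the normalised cross-correlation with the pure reference fall. The 64×64 random arrays below are stand-ins for real EBSD patterns, and 0.28 mirrors the 72%-intensity threshold quoted in the highlights.

```python
import numpy as np

rng = np.random.default_rng(1)

def mix_patterns(p_ref, p_other, frac_other):
    """Simulate the interaction volume straddling a boundary by
    linearly mixing two diffraction patterns."""
    return (1.0 - frac_other) * p_ref + frac_other * p_other

def ncc_zero_shift(a, b):
    """Zero-mean normalised cross-correlation at zero shift."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Stand-in 64x64 'patterns' (random noise, not real EBSD data)
ref, other = rng.random((64, 64)), rng.random((64, 64))
for f in (0.0, 0.28, 0.5):
    print(f, round(ncc_zero_shift(ref, mix_patterns(ref, other, f)), 3))
```

Real HR-EBSD analysis cross-correlates many small regions of interest and filters them in frequency space; this sketch only shows why the correlation degrades as the foreign-pattern fraction grows.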

  1. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    Science.gov (United States)

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the amount of water classified (i.e. percentage mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.
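The waterbody-size analysis above (density of waterbodies, frequency of very small ones) reduces to connected-component counting on a binary land/water grid. A minimal flood-fill sketch on a toy grid follows; the cell area and size cut-off are illustrative, whereas the study worked at 0.33-2 m GSDs and flagged waterbodies ≤ 250 m².

```python
import numpy as np
from collections import deque

def count_waterbodies(water, cell_area, max_area=None):
    """Count 4-connected waterbodies in a boolean land/water grid;
    optionally count only those with area <= max_area (same units
    as cell_area, e.g. m^2)."""
    h, w = water.shape
    seen = np.zeros_like(water, dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if water[i, j] and not seen[i, j]:
                # flood-fill one connected component
                size, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           water[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if max_area is None or size * cell_area <= max_area:
                    count += 1
    return count

grid = np.array([[1, 1, 0, 0, 1],
                 [1, 0, 0, 0, 0],
                 [0, 0, 1, 1, 1]], dtype=bool)
print(count_waterbodies(grid, cell_area=1.0))              # 3 waterbodies
print(count_waterbodies(grid, cell_area=1.0, max_area=2))  # 1 (only the 1-cell pond)
```

Coarsening the grid (larger cell_area) merges or erases the smallest components first, which is exactly the resolution sensitivity the study reports for small waterbodies.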

  2. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    Science.gov (United States)

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance of robotic systems in the operating room promises higher accuracy, making demanding surgical interventions (e.g. the direct cochlear access) realisable. Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether an accuracy sufficient for demanding interventions can be achieved by such a system. The feasible accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame: a method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy was assessed through drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging; in this set-up, errors of [Formula: see text] and [Formula: see text] were achieved. The results of the conducted experiments show that the accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
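The admittance feed control concept maps a hand-applied force along the drill axis to a feed velocity. A minimal sketch of such a control law, with a deadband against sensor noise and a safety clamp; the gains are invented for illustration, not the paper's tuning.

```python
def admittance_feed(force_n, admittance=0.002, v_max=0.001, deadband=1.0):
    """Map a hand-applied force along the drill axis [N] to a feed
    velocity [m/s]: v = admittance * F, with a deadband so sensor
    noise does not cause creep, and a clamp for safety.
    All gains here are illustrative assumptions."""
    if abs(force_n) < deadband:
        return 0.0
    v = admittance * force_n
    return max(-v_max, min(v_max, v))

# Sub-deadband, moderate, reverse, and excessive pushes
for f in (0.5, 2.0, -3.0, 10.0):
    print(f, admittance_feed(f))
```

In the paper's setup the force would come from the light-weight robot's joint torque sensors projected onto the planned drill axis, so the surgeon feeds the tool by pushing on the structure while the robot constrains the path.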

  3. School Socioeconomic Classification, Funding, and the New Jersey High School Proficiency Assessment (HSPA)

    Science.gov (United States)

    Bao, D. H.; Romeo, George C.; Harvey, Roberta

    2010-01-01

    This study examines the relationship between educational effectiveness, as measured by the New Jersey High School Proficiency Assessment (HSPA), and funding of school districts based on socioeconomic classification. Results indicate there is a strong relationship between performance in HSPA, socioeconomic classification, and the different sources…

  4. Electrode replacement does not affect classification accuracy in dual-session use of a passive brain-computer interface for assessing cognitive workload

    Directory of Open Access Journals (Sweden)

    Justin Ronald Estepp

    2015-03-01

    Full Text Available The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors.

  5. Electrode replacement does not affect classification accuracy in dual-session use of a passive brain-computer interface for assessing cognitive workload.

    Science.gov (United States)

    Estepp, Justin R; Christensen, James C

    2015-01-01

    The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors.
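The train-on-session-1, test-on-session-2 protocol in the two records above can be sketched with synthetic features and a nearest-mean stand-in classifier (the study compared several learners on real EEG; the feature generator, shift size, and classifier here are all illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_mean_fit(X, y):
    """Per-class mean vectors: a simple stand-in for the study's learners."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def make_session(shift):
    """Synthetic 'band-power' features for low/high workload; a small
    additive shift mimics electrode replacement between sessions."""
    low = rng.normal(0.0, 1.0, (40, 8)) + shift
    high = rng.normal(1.5, 1.0, (40, 8)) + shift
    return np.vstack([low, high]), np.array([0] * 40 + [1] * 40)

X1, y1 = make_session(0.0)     # session 1: train
X2, y2 = make_session(0.2)     # session 2: test, electrodes "replaced"
model = nearest_mean_fit(X1, y1)
acc = float((nearest_mean_predict(model, X2) == y2).mean())
print(f"cross-session accuracy: {acc:.2f}")
```

When the between-session shift is small relative to the class separation, accuracy transfers across sessions, which is the qualitative pattern the study observed after electrode replacement.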

  6. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    Science.gov (United States)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information about the sun's position, obtained either by direct sensing or from GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow-tracking method with a two-stage concentrating solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  7. High-Performance Neural Networks for Visual Object Classification

    OpenAIRE

    Cireşan, Dan C.; Meier, Ueli; Masci, Jonathan; Gambardella, Luca M.; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better ...

  8. Computer-aided diagnosis of lung cancer: the effect of training data sets on classification accuracy of lung nodules

    Science.gov (United States)

    Gong, Jing; Liu, Ji-Yu; Sun, Xi-Wen; Zheng, Bin; Nie, Sheng-Dong

    2018-02-01

    This study aims to develop a computer-aided diagnosis (CADx) scheme for classification between malignant and benign lung nodules, and also assess whether CADx performance changes in detecting nodules associated with early and advanced stage lung cancer. The study involves 243 biopsy-confirmed pulmonary nodules. Among them, 76 are benign, 81 are stage I and 86 are stage III malignant nodules. The cases are separated into three data sets involving: (1) all nodules, (2) benign and stage I malignant nodules, and (3) benign and stage III malignant nodules. A CADx scheme is applied to segment lung nodules depicted on computed tomography images and we initially computed 66 3D image features. Then, three machine learning models namely, a support vector machine, naïve Bayes classifier and linear discriminant analysis, are separately trained and tested by using three data sets and a leave-one-case-out cross-validation method embedded with a Relief-F feature selection algorithm. When separately using three data sets to train and test three classifiers, the average areas under receiver operating characteristic curves (AUC) are 0.94, 0.90 and 0.99, respectively. When using the classifiers trained using data sets with all nodules, average AUC values are 0.88 and 0.99 for detecting early and advanced stage nodules, respectively. AUC values computed from three classifiers trained using the same data set are consistent without statistically significant difference (p  >  0.05). This study demonstrates (1) the feasibility of applying a CADx scheme to accurately distinguish between benign and malignant lung nodules, and (2) a positive trend between CADx performance and cancer progression stage. Thus, in order to increase CADx performance in detecting subtle and early cancer, training data sets should include more diverse early stage cancer cases.
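The leave-one-case-out cross-validation used above can be sketched generically: each nodule in turn is held out, the classifier is trained on the rest, and the held-out case is scored. A 1-nearest-neighbour stand-in replaces the study's SVM/naive Bayes/LDA, and the 2-D "features" are toy values rather than the 66 3D image features.

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-case-out cross-validation with a 1-nearest-neighbour
    stand-in classifier."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # hold the test case out
        correct += y[d.argmin()] == y[i]    # predict from nearest neighbour
    return correct / len(y)

# Toy 2-D 'image features' for benign (0) vs malignant (1) nodules
X = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
     [1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
y = [0, 0, 0, 1, 1, 1]
print(loocv_accuracy(X, y))  # 1.0
```

In the study, a Relief-F feature selection step was additionally embedded inside each training fold, which avoids leaking the held-out case into feature selection.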

  9. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    Science.gov (United States)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and the output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combinations of bands used significantly influenced some of the quality indexes used in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but was around 15×15. With regard to the grey level, the optimum was 128. Thus, the
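The angular second moment (ASM), one of the two texture parameters that gave the best accuracies above, measures local homogeneity via the grey-level co-occurrence matrix (GLCM). A small NumPy sketch, assuming a single horizontal offset and a coarse quantization (the study's optimum grey level was 128; 8 levels keeps the toy example readable):

```python
import numpy as np

def glcm_asm(window, levels=8, offset=(0, 1)):
    """Angular second moment from a grey-level co-occurrence matrix.
    `levels` plays the role of the grey-level quantization."""
    if window.max() > 0:
        q = np.floor(window / window.max() * (levels - 1)).astype(int)
    else:
        q = window.astype(int)
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1  # count co-occurrences
    p = glcm / glcm.sum()
    return np.sum(p ** 2)

flat = np.full((15, 15), 100.0)   # homogeneous texture
noisy = np.random.default_rng(1).integers(0, 255, (15, 15)).astype(float)
asm_flat = glcm_asm(flat)
asm_noisy = glcm_asm(noisy)
print(asm_flat, asm_noisy)
```

For a homogeneous window the co-occurrence probability mass sits in one GLCM cell, so ASM is 1; random texture spreads the mass and drives ASM toward 0.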

  10. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

Full Text Available Abstract Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, like Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press's Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press's Q test showed that all classifiers performed better than chance alone. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
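The Friedman test used above compares classifiers by ranking their performance within each cross-validation fold, without assuming normality. A short SciPy sketch on hypothetical 5-fold accuracies (the numbers are invented for illustration, not taken from the study):

```python
from scipy.stats import friedmanchisquare

# Hypothetical 5-fold accuracies for three classifiers (illustrative only).
lda = [0.78, 0.80, 0.75, 0.79, 0.77]
rf  = [0.82, 0.85, 0.80, 0.84, 0.83]
svm = [0.76, 0.79, 0.74, 0.78, 0.75]

# Ranks within each fold are compared across classifiers; a small p-value
# means at least one classifier's fold-wise ranks differ systematically.
stat, p = friedmanchisquare(lda, rf, svm)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

Here the random forest is best in every fold and the SVM worst, so the rank sums are maximally separated and the test rejects the hypothesis of equal performance.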

  11. Accuracy of hiatal hernia detection with esophageal high-resolution manometry

    NARCIS (Netherlands)

    Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.

    2015-01-01

    The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this

  12. High-accuracy identification and bioinformatic analysis of in vivo protein phosphorylation sites in yeast

    DEFF Research Database (Denmark)

    Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen

    2009-01-01

    Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling in cell culture-labeling, we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...

  13. High accuracy positioning using carrier-phases with the opensource GPSTK software

    OpenAIRE

    Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume

    2008-01-01

The objective of this work is to show how, using a proper GNSS data management strategy combined with the flexibility provided by the open source "GPS Toolkit" (GPSTk), it is possible to easily develop both simple code-based processing strategies and basic high accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP).

  14. Very high-accuracy calibration of radiation pattern and gain of a near-field probe

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2014-01-01

    In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...

  15. From journal to headline: the accuracy of climate science news in Danish high quality newspapers

    DEFF Research Database (Denmark)

    Vestergård, Gunver Lystbæk

    2011-01-01

    analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies though the majority was found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...

  16. Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel

    Science.gov (United States)

    Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie

    2011-05-01

Mechanical grinding of artificial diamond grinding wheels is the traditional wheel dressing process, in which the rotation speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a large number of experiments on a super-hard material wheel dressing machine and through analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for high accuracy crush dressing of artificial diamond grinding wheels.

  17. Classification of Ultra-High Resolution Orthophotos Combined with DSM Using a Dual Morphological Top Hat Profile

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2015-12-01

Full Text Available New aerial sensors and platforms (e.g., unmanned aerial vehicles (UAVs)) are capable of providing ultra-high resolution remote sensing data (less than a 30-cm ground sampling distance (GSD)). This type of data is an important source for interpreting sub-building level objects; however, it has not yet been fully explored. The large-scale differences of urban objects, the high spectral variability and the large perspective effect make the design of descriptive features difficult. Therefore, features representing the spatial information of the objects are essential for dealing with the spectral ambiguity. In this paper, we propose a dual morphology top-hat profile (DMTHP), using both morphological reconstruction and erosion with different granularities. Due to the high dimensional feature space, we propose an adaptive scale selection procedure to reduce the feature dimension according to the training samples. The DMTHP is extracted from both images and Digital Surface Models (DSM) to obtain complementary information. The random forest classifier is used to classify the features hierarchically. Quantitative experiments are performed on aerial images with 9-cm GSD and UAV images with 5-cm GSD. In our experiments, improvements of 10% and 2% in overall accuracy are obtained in comparison with the well-known differential morphological profile (DMP) feature, and superior performance is observed over other tested features. Large format data with 20,000 × 20,000 pixels are used in a qualitative experiment, which shows the promising potential of the proposed method. The experiments also demonstrate that the DSM information greatly enhances the classification accuracy. In the best case in our experiment, it raises the classification accuracy from 63.93% (spectral information only) to 94.48% (the proposed method).
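A morphological top-hat responds to bright structures smaller than the structuring element, which is what lets a multi-scale profile separate objects by size. A toy NumPy/SciPy sketch of a single-operator top-hat profile (the paper's dual profile additionally uses morphological reconstruction, which is omitted here for brevity):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((32, 32))
img[8:12, 8:12] = 1.0    # small bright object (4 x 4)
img[18:30, 18:30] = 1.0  # large bright object (12 x 12)

# White top-hat at increasing structuring-element sizes: each scale keeps
# only the bright structures too small to survive the opening.
profile = [ndimage.white_tophat(img, size=s) for s in (3, 7, 15)]

small_resp = [p[10, 10] for p in profile]  # responses at the small object
large_resp = [p[24, 24] for p in profile]  # responses at the large object
print(small_resp, large_resp)
```

Stacking such responses over scales gives each pixel a feature vector encoding the size of the structure it belongs to: the small object first responds at size 7, the large one only at size 15.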

  18. High accuracy interface characterization of three phase material systems in three dimensions

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus

    2010-01-01

Quantification of interface properties such as two phase boundary area and triple phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high accuracy method of calculating two phase surface areas and triple phase length of triple phase systems from subvoxel accuracy segmentations of constituent phases. The method performs a three phase polygonization of the interface boundaries which results in a non-manifold mesh of connected faces. We show how the triple phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives...

  19. Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation

    Science.gov (United States)

    Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter

    1996-01-01

The development of three-dimensional automotive devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned automatically with high accuracy and reliability by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.

  20. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is implemented on the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance in terms of classification percentage (96%) compared with the Genetic Algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.

  1. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can represent multiple sequence alignments better than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.

  2. A rule-learning program in high energy physics event classification

    International Nuclear Information System (INIS)

    Clearwater, S.H.; Stern, E.G.

    1991-01-01

    We have applied a rule-learning program to the problem of event classification in high energy physics. The program searches for event classifications, i.e. rules, and effectively allows an exploration of many more possible classifications than is practical by a physicist. The program, RL4, is particularly useful because it can easily explore multi-dimensional rules as well as rules that may seem non-intuitive at first to the physicist. RL4 is also contrasted with other learning programs. (orig.)

  3. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    Directory of Open Access Journals (Sweden)

    Cees van der Geld

    2010-08-01

    Full Text Available An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  4. High accuracy digital aging monitor based on PLL-VCO circuit

    International Nuclear Information System (INIS)

    Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong

    2015-01-01

As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect thanks to the characteristic of the PLL, whose frequency is unaffected by circuit aging. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology, and its area occupies 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy by 2.4% at high temperature and by 18.7% at high voltage. (semiconductor integrated circuits)

  5. High accuracy acoustic relative humidity measurement in duct flow with air.

    Science.gov (United States)

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  6. Evaluation of the accuracy of the CellaVision™ DM96 in a high HIV-prevalence population in South Africa

    Directory of Open Access Journals (Sweden)

    Jenifer L. Vaughan

    2016-03-01

Objectives: This study aimed to evaluate the accuracy of the DM96 in a South African laboratory, with emphasis on its performance on samples collected from HIV-positive patients. Methods: A total of 149 samples submitted for a routine differential white cell count in 2012 and 2013 at the Chris Hani Baragwanath Academic Hospital in Johannesburg, South Africa were included, of which 79 (53.0%) were collected from HIV-positive patients. Results of DM96 analysis pre- and post-classification were compared with a manual differential white cell count, and the impact of HIV infection and other variables of interest was assessed. Results: Pre- and post-classification accuracies were similar to those reported in developed countries. Reclassification was required in 16% of cells, with particularly high misclassification rates for eosinophils (31.7%), blasts (33.7%) and basophils (93.5%). Multivariate analysis revealed a significant relationship between the number of misclassified cells and both the white cell count (p = 0.035) and the presence of malignant cells in the blood (p = 0.049), but not with any other variables analysed, including HIV status. Conclusion: The DM96 exhibited acceptable accuracy in this South African laboratory, which was not impacted by HIV infection. However, as it does not eliminate the need for experienced morphologists, its cost may be unjustifiable in a resource-constrained setting.

  7. A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.

    Science.gov (United States)

    Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G

    2017-02-01

The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvres, and a review of British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (the so-called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvres may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  8. Administrative database concerns: accuracy of International Classification of Diseases, Ninth Revision coding is poor for preoperative anemia in patients undergoing spinal fusion.

    Science.gov (United States)

    Golinvaux, Nicholas S; Bohl, Daniel D; Basques, Bryce A; Grauer, Jonathan N

    2014-11-15

    Cross-sectional study. To objectively evaluate the ability of International Classification of Diseases, Ninth Revision (ICD-9) codes, which are used as the foundation for administratively coded national databases, to identify preoperative anemia in patients undergoing spinal fusion. National database research in spine surgery continues to rise. However, the validity of studies based on administratively coded data, such as the Nationwide Inpatient Sample, are dependent on the accuracy of ICD-9 coding. Such coding has previously been found to have poor sensitivity to conditions such as obesity and infection. A cross-sectional study was performed at an academic medical center. Hospital-reported anemia ICD-9 codes (those used for administratively coded databases) were directly compared with the chart-documented preoperative hematocrits (true laboratory values). A patient was deemed to have preoperative anemia if the preoperative hematocrit was less than the lower end of the normal range (36.0% for females and 41.0% for males). The study included 260 patients. Of these, 37 patients (14.2%) were anemic; however, only 10 patients (3.8%) received an "anemia" ICD-9 code. Of the 10 patients coded as anemic, 7 were anemic by definition, whereas 3 were not, and thus were miscoded. This equates to an ICD-9 code sensitivity of 0.19, with a specificity of 0.99, and positive and negative predictive values of 0.70 and 0.88, respectively. This study uses preoperative anemia to demonstrate the potential inaccuracies of ICD-9 coding. These results have implications for publications using databases that are compiled from ICD-9 coding data. Furthermore, the findings of the current investigation raise concerns regarding the accuracy of additional comorbidities. Although administrative databases are powerful resources that provide large sample sizes, it is crucial that we further consider the quality of the data source relative to its intended purpose.
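The sensitivity, specificity and predictive values reported above follow directly from the 2×2 table implied by the study's counts. A short Python check of that arithmetic:

```python
# Counts reported in the study: 260 patients, 37 truly anemic,
# 10 coded as anemic, of whom 7 were truly anemic.
total, anemic, coded, true_pos = 260, 37, 10, 7

false_pos = coded - true_pos           # 3 miscoded as anemic
false_neg = anemic - true_pos          # 30 anemic but not coded
true_neg = total - anemic - false_pos  # 220 correctly left uncoded

sensitivity = true_pos / (true_pos + false_neg)   # 7/37
specificity = true_neg / (true_neg + false_pos)   # 220/223
ppv = true_pos / (true_pos + false_pos)           # 7/10
npv = true_neg / (true_neg + false_neg)           # 220/250
print(sensitivity, specificity, ppv, npv)
```

The very low sensitivity (0.19) is the headline concern: most truly anemic patients never receive the anemia code, so database studies built on these codes undercount the condition even though specificity looks excellent.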

  9. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2018-03-01

    Full Text Available Considering the classification of high spatial resolution remote sensing imagery, this paper presents a novel classification method for such imagery using deep neural networks. Deep learning methods, such as a fully convolutional network (FCN model, achieve state-of-the-art performance in natural image semantic segmentation when provided with large-scale datasets and respective labels. To use data efficiently in the training stage, we first pre-segment training images and their labels into small patches as supplements of training data using graph-based segmentation and the selective search method. Subsequently, FCN with atrous convolution is used to perform pixel-wise classification. In the testing stage, post-processing with fully connected conditional random fields (CRFs is used to refine results. Extensive experiments based on the Vaihingen dataset demonstrate that our method performs better than the reference state-of-the-art networks when applied to high-resolution remote sensing imagery classification.

  10. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
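The benefit of arbitrary frequency resolution can be illustrated by evaluating the finite Fourier transform of sampled data on a user-chosen frequency grid. The sketch below uses direct O(N·M) evaluation with NumPy rather than the chirp z-transform, and simple rectangle-rule integration rather than the paper's cubic interpolation, so it is a conceptual stand-in, not the paper's method:

```python
import numpy as np

fs = 100.0                        # sample rate, Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 7.3 * t)   # tone between standard DFT bin centres

# Evaluate the finite Fourier transform on a dense grid around the tone:
# 0.01 Hz resolution, far finer than the 1 Hz spacing of a plain DFT.
freqs = np.linspace(7.0, 7.6, 61)
kernel = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])
X = kernel @ x / fs               # rectangle-rule approximation

f_peak = freqs[np.argmax(np.abs(X))]
print(f"peak at {f_peak:.2f} Hz")
```

A plain length-100 DFT would place the peak at the 7 Hz or 8 Hz bin; the dense grid localizes it at 7.30 Hz, which is the kind of detail capture the arbitrary-resolution transform provides.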

  11. High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2017-01-01

The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...

  12. A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-12-01

Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate the orthophoto fragments. An offline interpolation algorithm is used to process the location information: it processes the dead reckoning and EKF location information, and uses the result to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, which are processed separately with a weighted median algorithm. Experimental results show that the map produced with this method has high accuracy.

  13. Identification and delineation of areas flood hazard using high accuracy of DEM data

    Science.gov (United States)

    Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.

    2018-05-01

Flood incidents that often occur in Karawang regency need to be mitigated. Mitigation depends on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High accuracy DEM data used in modeling will result in better flood models. Processing high accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. TerraSAR-X high-resolution data is used to detect wetlands from the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover is analysed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land potentially subject to flooding. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
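A common formulation of the wetness index is TWI = ln(a / tan β), where a is the upslope contributing area per unit contour width and β the local slope; flat cells with large contributing areas score highest. A minimal NumPy sketch with invented cell values (the study's actual TWI computation over the TerraSAR-X-derived DEM is more involved):

```python
import numpy as np

def twi(upslope_area, slope_deg):
    """Topographic wetness index ln(a / tan(beta)); higher values mark
    flat, convergent terrain where floodwater tends to accumulate."""
    beta = np.deg2rad(np.asarray(slope_deg, dtype=float))
    a = np.asarray(upslope_area, dtype=float)
    # Clamp tan(beta) to avoid division by zero on perfectly flat cells.
    return np.log(a / np.maximum(np.tan(beta), 1e-6))

# Hypothetical cells: a flat paddy-field cell vs. a steep hillside cell.
flat_cell = twi(upslope_area=5000.0, slope_deg=0.5)
steep_cell = twi(upslope_area=50.0, slope_deg=20.0)
print(flat_cell, steep_cell)
```

The flat, high-accumulation cell gets a much larger TWI, which is why thresholding the index highlights the rice-field lowlands as flood-prone.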

  14. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    Science.gov (United States)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary, assuming an initial pericenter distance of 20 M_tot (10 M_tot).
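The Fisher matrix technique estimates parameter accuracies from derivatives of the waveform with respect to the source parameters: F_ij = Σ (∂h/∂θ_i)(∂h/∂θ_j) / σ² for white noise, and the 1-σ errors are the square roots of the diagonal of F⁻¹. A drastically simplified NumPy sketch using a monochromatic toy signal instead of an eccentric-binary waveform (all numbers are illustrative):

```python
import numpy as np

fs, T, sigma = 1024.0, 4.0, 1.0   # sample rate, duration, noise std
t = np.arange(0, T, 1 / fs)

def model(A, f):
    """Toy 'waveform' with two parameters: amplitude A and frequency f."""
    return A * np.sin(2 * np.pi * f * t)

A0, f0, eps = 1e-2, 50.0, 1e-6

# Central-difference derivatives of the waveform w.r.t. each parameter.
derivs = [
    (model(A0 + eps, f0) - model(A0 - eps, f0)) / (2 * eps),  # d/dA
    (model(A0, f0 + eps) - model(A0, f0 - eps)) / (2 * eps),  # d/df
]
F = np.array([[np.sum(di * dj) / sigma**2 for dj in derivs] for di in derivs])
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma accuracies on A, f
print(errors)
```

The same machinery scales to the full parameter set (masses, eccentricity, sky position) once `model` is replaced by the detector-network response to the eccentric waveform.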

  15. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    Directory of Open Access Journals (Sweden)

    Guanwu Zhou

    2014-07-01

    Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift and varies nonlinearly with temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed, and the hardware to implement the system is fabricated. Then, a program is developed in LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. After calibration on the computer, the implementation of the algorithm was ported to a micro-control unit (MCU). Practical pressure measurement experiments were carried out to verify the system's performance. Temperature compensation is performed over the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its better generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10⁻⁵/°C and 29.5 × 10⁻⁵/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10⁻⁵/°C and 2.1 × 10⁻⁵/°C, respectively, after compensation. The experimental results demonstrate that the proposed system meets the temperature compensation and high accuracy requirements of the sensor.
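
    What makes the ELM fast enough for batch compensation is that only the output layer is trained: hidden weights are random, and the output weights come from a single least-squares solve. A minimal generic sketch (the two-input feature layout, e.g. temperature and raw sensor output, is an assumption, not the authors' MCU implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved in one least-squares step (no iterative training)."""
    def __init__(self, n_hidden=50, n_inputs=2):
        self.W = rng.normal(size=(n_inputs, n_hidden))  # random input weights
        self.b = rng.normal(size=n_hidden)              # random biases

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

    For drift calibration, X would hold (temperature, raw output) pairs and y the reference pressure, so the trained model maps drifting readings back to compensated values.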

  16. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters; therefore, analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  17. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters; therefore, analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  18. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    To improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for an arbitrary interpolation factor and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, a compensation filter was added; the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation larger. Finally, the feasibility of the algorithm was verified on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Owing to its fast calculation, small computational load and high resolution, the algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  19. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available To improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula of the parallel algorithm for an arbitrary interpolation factor and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, a compensation filter was added; the compensated CIC filter's pass band is flatter, its transition band steeper, and its stop band attenuation larger. Finally, the feasibility of the algorithm was verified on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Owing to its fast calculation, small computational load and high resolution, the algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
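
    The interpolation CIC structure described in these two records (comb stages at the low rate, zero-stuffing, integrator stages at the high rate) can be sketched generically; this illustrates the filter class only, not the paper's 8× parallel decomposition:

```python
import numpy as np

def cic_interpolate(x, R=8, N=2):
    """N-stage CIC interpolator with rate change R: N comb (differencing)
    stages at the input rate, zero-stuffing by R, then N integrator
    (cumulative-sum) stages at the output rate. DC gain is R**(N-1)."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                          # comb stages (y[n] - y[n-1])
        y = y - np.concatenate(([0.0], y[:-1]))
    up = np.zeros(len(y) * R)
    up[::R] = y                                 # zero-stuffing upsampler
    for _ in range(N):                          # integrator stages
        up = np.cumsum(up)
    return up
```

    A compensation FIR that flattens the CIC's passband droop would follow this stage, consistent with the compensated filter described in the abstract.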

  20. High-accuracy determination of the neutron flux at n_TOF

    Energy Technology Data Exchange (ETDEWEB)

    Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. 
[Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. [Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. 
[Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)

    2013-12-15

    The neutron flux of the n_TOF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n_TOF neutron beam. This is a prerequisite for the high accuracy of cross section measurements at n_TOF. An unexpected anomaly in the neutron-induced fission cross section of ²³⁵U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)

  1. Fission product model for BWR analysis with improved accuracy in high burnup

    International Nuclear Information System (INIS)

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model for use in BWR lattice calculations has been studied. In establishing the model, two requirements, accuracy in predicting burnup reactivity and ease of practical application, were considered simultaneously. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions were carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model offers a high degree of accuracy for FP representation in BWR lattice calculations. (author)

  2. High Accuracy Attitude Control System Design for Satellite with Flexible Appendages

    Directory of Open Access Journals (Sweden)

    Wenya Zhou

    2014-01-01

    Full Text Available In order to realize high accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When a low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control system through the measurement device and degrades accuracy or even stability. To reduce the impact of appendage vibration on the attitude control system, a structural filter is designed to reject the vibration of the flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.

  3. High-accuracy numerical integration of charged particle motion – with application to ponderomotive force

    International Nuclear Information System (INIS)

    Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu

    2016-01-01

    A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. It is constructed to be time-reversal symmetric, and its order of accuracy can be increased to any order by a recurrence formula. One advantage is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing the extended phase space. Numerical tests showing the performance of the algorithm are presented: one is pure cyclotron motion over a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
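
    To give a flavor of the approach, here is a generic time-reversal-symmetric splitting with exact sub-flows for a charged particle in uniform E and B fields. This is an illustrative sketch of the idea (symmetric composition of explicitly solvable pieces), not the authors' Poisson-tensor decomposition:

```python
import numpy as np

def symmetric_step(x, v, dt, qm, E, B):
    """One time-reversal-symmetric step: half drift, half electric kick,
    exact rotation about B, half electric kick, half drift.
    qm is the charge-to-mass ratio; the equation of motion is
    dv/dt = qm*(E + v x B). Each sub-flow is exact, so the palindromic
    composition is second-order accurate and exactly reversible."""
    x = x + 0.5 * dt * v
    v = v + 0.5 * dt * qm * E
    Bmag = np.linalg.norm(B)
    if Bmag > 0.0:
        b = B / Bmag
        theta = -qm * Bmag * dt          # gyration angle over dt
        v_par = np.dot(v, b) * b
        v_perp = v - v_par
        v = v_par + np.cos(theta) * v_perp + np.sin(theta) * np.cross(b, v_perp)
    v = v + 0.5 * dt * qm * E
    x = x + 0.5 * dt * v
    return x, v
```

    Because the magnetic rotation is exact, the speed is conserved to machine precision in a pure magnetic field, and stepping with -dt exactly undoes a step with +dt.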

  4. High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique

    International Nuclear Information System (INIS)

    Atkinson, I.

    1995-01-01

    Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
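
    TOFD sizes a defect from the arrival time of the tip-diffracted wave rather than from its amplitude. For a crack tip assumed midway between a probe pair with half-separation S, the flight time is t = 2·sqrt(d² + S²)/c, which inverts directly to the depth d. A hedged sketch with illustrative values (the velocity and separation are assumptions, not the paper's CRDM setup):

```python
import math

def tofd_depth(t, c=5920.0, half_sep=0.02):
    """Depth (m) of a diffracting crack tip from the TOFD time-of-flight
    t (s), assuming the tip lies midway between the probes:
    t = 2*sqrt(d**2 + half_sep**2)/c, with longitudinal velocity c (m/s,
    ~5920 m/s is typical for steel)."""
    return math.sqrt((c * t / 2.0) ** 2 - half_sep ** 2)
```

    The accuracy advantage comes from this geometric relation: a small timing error maps to a small depth error, independent of signal amplitude.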

  5. High accuracy of family history of melanoma in Danish melanoma cases

    DEFF Research Database (Denmark)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-01-01

    The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...

  6. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    Science.gov (United States)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  7. Ship Classification with High Resolution TerraSAR-X Imagery Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhi Zhao

    2013-01-01

    Full Text Available Ship surveillance using space-borne synthetic aperture radar (SAR), taking advantage of high resolution over wide swaths and all-weather working capability, has attracted worldwide attention. Recent activity in this field has concentrated mainly on ship detection, while classification remains largely open. In this paper, we propose a novel ship classification scheme based on the analytic hierarchy process (AHP) in order to achieve better performance. The main idea is to apply AHP to both feature selection and the classification decision. On one hand, AHP-based feature selection constructs a selection decision problem from several feature evaluation measures (e.g., discriminability, stability, and information measure) and provides objective criteria for making comprehensive, quantitative decisions about their combinations. On the other hand, we take the selected feature sets as the input of KNN classifiers and fuse the multiple classification results based on AHP, taking the feature sets' confidence into account when the AHP-based classification decision is made. We analyze the proposed classification scheme and demonstrate its results on a ship dataset derived from TerraSAR-X SAR images.
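
    At the core of AHP is extracting a priority (weight) vector from a pairwise-comparison matrix, conventionally via the principal eigenvector. A minimal sketch of that one step (generic, not the paper's feature-evaluation hierarchy):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    (entry [i, j] = how strongly criterion i outranks criterion j):
    the principal right eigenvector, normalized to sum to 1."""
    M = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)          # dominant eigenvalue
    w = np.abs(vecs[:, k].real)       # eigenvectors are defined up to sign
    return w / w.sum()
```

    For a perfectly consistent matrix (entries w_i/w_j) this recovers the underlying weights exactly; in practice one would also check the consistency ratio before trusting the result.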

  8. High resolution esophageal manometry--the switch from "intuitive" visual interpretation to Chicago classification.

    Science.gov (United States)

    Srinivas, M; Balakumaran, T A; Palaniappan, S; Srinivasan, Vijaya; Batcha, M; Venkataraman, Jayanthi

    2014-03-01

    High resolution esophageal manometry (HREM) has all along been interpreted by visual inspection of color plots, until the recent introduction of the Chicago classification, which categorizes HREM using objective measurements. This study compares HREM diagnoses of esophageal motor disorders by visual interpretation and by the Chicago classification. Using software Trace 1.2v, 77 consecutive tracings diagnosed by visual interpretation were re-analyzed by the Chicago classification, and findings were compared for concordance between the two systems of interpretation. The kappa agreement rate between the two observations was determined. There were 57 males (74%), and the cohort median age was 41 years (range: 14-83 years). The majority of referrals were for gastroesophageal reflux disease, dysphagia and achalasia. By "intuitive" visual interpretation, the tracings were reported as normal in 45 (58.4%), achalasia in 14 (18.2%), ineffective esophageal motility in 3 (3.9%), nutcracker esophagus in 11 (14.3%) and nonspecific motility changes in 4 (5.2%). By the Chicago classification, there was 100% agreement (kappa 1) for achalasia (type 1: 9; type 2: 5) and ineffective esophageal motility ("failed peristalsis" on visual interpretation). Normal esophageal motility, nutcracker esophagus and nonspecific motility disorder on visual interpretation were reclassified as rapid contraction and esophagogastric junction (EGJ) outflow obstruction by the Chicago classification. The Chicago classification identified distinct clinical phenotypes, including EGJ outflow obstruction, not identified by visual interpretation. A significant number of HREM tracings unclassified by visual interpretation were also classified by it.

  9. Conjugate-Gradient Neural Networks in Classification of Multisource and Very-High-Dimensional Remote Sensing Data

    Science.gov (United States)

    Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.

    1993-01-01

    Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.

  10. High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry

    Directory of Open Access Journals (Sweden)

    Guy Jean-Pierre Schumann

    2016-01-01

    Full Text Available Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services, as well as for many environmental process models. At present, on the one hand, globally available DEMs only meet basic requirements and, for many services and modeling studies, are not of high enough spatial resolution and lack vertical accuracy. On the other hand, LiDAR DEMs offer very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be a DEM technology that allows larger spatial coverage than LiDAR without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m at an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis of floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower-resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of instrument and data processing.

  11. Accuracy of cell calculation methods used for analysis of high conversion light water reactor lattice

    International Nuclear Information System (INIS)

    Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi

    1990-01-01

    Validation tests were made of the accuracy of cell calculation methods used in analyses of the tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for lattices taken from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations, and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (a hexagonal versus cylindrical cell) and the boundary condition (mirror versus white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous-energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation deteriorates as the neutron spectrum becomes harder. It was also concluded that the cylindrical cell model with the white boundary condition is not as suitable for MOX-fuelled lattices as it is for UO2-fuelled lattices. (author)

  12. Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.

    Science.gov (United States)

    Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E

    2016-02-01

    Lacerations to the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, there has been no publication establishing the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created, and 27 extensor tendons were sharply transected; the remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and all controls were identified correctly with dynamic imaging, as either injury models with a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. Clinically suspected cases of acute extensor tendon injury scanned by high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and
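
    The reported sensitivity, specificity, and accuracy follow directly from the confusion counts against the dissection "truth". A sketch using the dynamic-imaging case; the 41 sham controls are inferred from the 68 lacerations minus 27 transections, an assumption since the split is not stated explicitly:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and overall accuracy from confusion counts
    (true/false positives and negatives against a reference standard)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

    For dynamic imaging, all 27 transections and all controls were correctly classified, so every metric evaluates to 1.0.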

  13. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    Science.gov (United States)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution and digitization accuracy of an ultrasound based micro-scanning system remain critical parameters, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound based tooth digitization can be an alternative to optical impression-taking.

  14. Technical accuracy of a neuronavigation system measured with a high-precision mechanical micromanipulator.

    Science.gov (United States)

    Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R

    1997-12-01

    This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.

  15. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    International Nuclear Information System (INIS)

    Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A

    2013-01-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. 
It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now be made over the broader mHz to kHz frequency range.

  16. An angle encoder for super-high resolution and super-high accuracy using SelfA

    Science.gov (United States)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-06-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al 1998 Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for rotary encoders have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies to the development of a rotary encoder, not only to meet the requirement of super-high accuracy but also that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period.
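    The quoted signal period can be verified with a one-line computation (a worked check, not taken from the paper):

```python
# Dividing a full rotation (360 deg = 1,296,000 arcsec) into 2**21 steps
# gives the ~0.62 arcsec signal period stated in the abstract.
steps = 2 ** 21
arcsec_per_rev = 360 * 3600
period = arcsec_per_rev / steps
print(f"{steps} steps -> {period:.4f} arcsec per signal period")
```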

  17. An angle encoder for super-high resolution and super-high accuracy using SelfA

    International Nuclear Information System (INIS)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-01-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period.

  18. Ultra-high accuracy optical testing: creating diffraction-limitedshort-wavelength optical systems

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, Keith H.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-08-03

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  19. Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems

    International Nuclear Information System (INIS)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, Keith H.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-01-01

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  20. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    The proposed angular difference features (ADFs) can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).

  1. The use of high accuracy NAA for the certification of NIST botanical standard reference materials

    International Nuclear Information System (INIS)

    Becker, D.A.; Greenberg, R.R.; Stone, S.F.

    1992-01-01

    Neutron activation analysis is one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). NAA competes favorably with all other techniques because of its unique capabilities for high accuracy, even at very low concentrations, for many elements. In this paper, instrumental and radiochemical NAA results are described for 25 elements in two new NIST SRMs, SRM 1515 (Apple Leaves) and SRM 1547 (Peach Leaves), and are compared to the certified values for 19 elements in these two new botanical reference materials. (author) 7 refs.; 4 tabs

  2. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson's hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ - 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, to at least 4 significant digits.

  3. High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer

    CERN Multimedia

    2002-01-01

    The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nucleosynthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency $\nu_c = qB/(2\pi m)$.

  4. A variational nodal diffusion method of high accuracy; Varijaciona nodalna difuziona metoda visoke tachnosti

    Energy Technology Data Exchange (ETDEWEB)

    Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using Legendre expansion in the transverse coordinates, higher-order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first-order approximation yields the same order of accuracy as standard nodal methods with quadratic leakage approximation, while the second order reproduces the reference solution. (author)

  5. Normed kernel function-based fuzzy possibilistic C-means (NKFPCM) algorithm for high-dimensional breast cancer database classification with feature selection based on Laplacian Score

    Science.gov (United States)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become the focus of world attention, as this disease is one of the primary leading causes of death for women. Therefore, correct prevention and treatment are necessary. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify high-dimensional breast cancer data using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in the classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection, where the Laplacian Score is used. The results show a comparison of the accuracy and running time of FPCM and NKFPCM with and without feature selection.
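    A minimal sketch of Laplacian Score feature selection as used above, assuming the standard heat-kernel k-nearest-neighbour formulation (He et al.); the parameter names are illustrative, and lower scores indicate features that better preserve local structure:

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian Score per feature (lower = better locality preservation).

    X is (n_samples, n_features); a kNN graph with heat-kernel weights
    exp(-||xi - xj||^2 / t) defines the graph Laplacian L = D - W.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]             # kNN, excluding self
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)                               # symmetrise the graph
    D = W.sum(1)
    L = np.diag(D) - W
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - f @ D / D.sum()          # remove the trivial constant component
        scores[r] = (f @ L @ f) / max(f @ np.diag(D) @ f, 1e-12)
    return scores
```

    Features would then be ranked by score and the lowest-scoring candidates retained before running FPCM or NKFPCM.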

  6. A new ultra-high-accuracy angle generator: current status and future direction

    Science.gov (United States)

    Guertin, Christian F.; Geckeler, Ralf D.

    2017-09-01

    The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state-of-the-art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.

  7. Highly Accurate Classification of Watson-Crick Basepairs on Termini of Single DNA Molecules

    Science.gov (United States)

    Winters-Hilt, Stephen; Vercoutere, Wenonah; DeGuzman, Veronica S.; Deamer, David; Akeson, Mark; Haussler, David

    2003-01-01

    We introduce a computational method for classification of individual DNA molecules measured by an α-hemolysin channel detector. We show classification with better than 99% accuracy for DNA hairpin molecules that differ only in their terminal Watson-Crick basepairs. Signal classification was done in silico to establish performance metrics (i.e., where train and test data were of known type, via single-species data files). It was then performed in solution to assay real mixtures of DNA hairpins. Hidden Markov Models (HMMs) were used with Expectation/Maximization for denoising and for associating a feature vector with the ionic current blockade of the DNA molecule. Support Vector Machines (SVMs) were used as discriminators, and were the focus of off-line training. A multiclass SVM architecture was designed to place less discriminatory load on weaker discriminators, and novel SVM kernels were used to boost discrimination strength. The tuning on HMMs and SVMs enabled biophysical analysis of the captured molecule states and state transitions; structure revealed in the biophysical analysis was used for better feature selection. PMID:12547778

  8. Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA

    Directory of Open Access Journals (Sweden)

    Ming-gang Du

    2009-01-01

    Motivation. Independent Component Analysis (ICA) maximizes the statistical independence of the representational components of a training gene expression profile (GEP) ensemble, but it cannot distinguish relations between different factors or modes, and it is not applicable to high-order GEP data mining. In order to generalize ICA, we introduce Multilinear-ICA and apply it to tumor classification using high-order GEP. Firstly, we introduce the basic concepts and operations of tensors, and describe the Support Vector Machine (SVM) classifier and Multilinear-ICA. Secondly, the higher-scoring genes of the original high-order GEP are selected using t-statistics and tabulated as tensors. Thirdly, the tensors are processed by Multilinear-ICA. Finally, the SVM is used to classify the tumor subtypes. Results. To show the validity of the proposed method, we apply it to tumor classification using high-order GEP. Though we only use three datasets, the experimental results show that the method is effective and feasible. Through this survey, we hope to gain some insight into the problem of high-order GEP tumor classification, in aid of further developing more effective tumor classification algorithms.
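    The t-statistic gene pre-selection step can be sketched as follows; the function and array names are illustrative assumptions, and a Welch-type two-sample statistic is used:

```python
import numpy as np

def top_genes_by_tstat(X_a, X_b, n_top=10):
    """Rank genes by |two-sample t-statistic| between classes a and b.

    X_a, X_b are (samples, genes) expression matrices for the two
    classes; returns the indices of the n_top most discriminative genes
    and the full vector of t-statistics.
    """
    m_a, m_b = X_a.mean(0), X_b.mean(0)
    v_a, v_b = X_a.var(0, ddof=1), X_b.var(0, ddof=1)
    n_a, n_b = len(X_a), len(X_b)
    t = (m_a - m_b) / np.sqrt(v_a / n_a + v_b / n_b)   # Welch t-statistic
    order = np.argsort(-np.abs(t))
    return order[:n_top], t
```

    The selected genes would then be arranged into tensors for the Multilinear-ICA stage.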

  9. High Accuracy, Miniature Pressure Sensor for Very High Temperatures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SiWave proposes to develop a compact, low-cost MEMS-based pressure sensor for very high temperatures and low pressures in hypersonic wind tunnels. Most currently...

  10. Study on the feed-forward neural network used for the classification of high energy particles

    International Nuclear Information System (INIS)

    Luo Guangxuan; Dai Guiliang

    1997-01-01

    Neural networks have been applied in the field of high energy physics experiments for the classification of particles and have yielded good results. The author emphasizes the systematic analysis of the fundamental principle of the feed-forward neural network and discusses the problems and solution methods encountered in application

  11. Inter- and intrarater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings

    NARCIS (Netherlands)

    Singendonk, M. M. J.; Smits, M. J.; Heijting, I. E.; van Wijk, M. P.; Nurko, S.; Rosen, R.; Weijenborg, P. W.; Abu-Assi, R.; Hoekman, D. R.; Kuizenga-Wessel, S.; Seiboth, G.; Benninga, M. A.; Omari, T. I.; Kritas, S.

    2015-01-01

    The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. The applicability of this adult-based algorithm to the pediatric population is unknown. We therefore assessed the intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort.

  12. High-level fusion of depth and intensity for pedestrian classification

    NARCIS (Netherlands)

    Rohrbach, M.; Enzweiler, M.; Gavrila, D.M.

    2009-01-01

    This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local

  13. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  14. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Science.gov (United States)

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  15. GLOBAL LAND COVER CLASSIFICATION USING MODIS SURFACE REFLECTANCE PRODUCTS

    Directory of Open Access Journals (Sweden)

    K. Fukue

    2016-06-01

    The objective of this study is to develop a high-accuracy land cover classification algorithm for the global scale by using multi-temporal MODIS land reflectance products. In this study, the time-domain co-occurrence matrix was introduced as a classification feature which provides the time-series signature of land covers. Further, a non-parametric minimum distance classifier was introduced for the time-domain co-occurrence matrix, which performs multi-dimensional pattern matching between the time-domain co-occurrence matrices of a classification target pixel and those of each class. Global land cover classification experiments were conducted by applying the proposed classification method using 46 multi-temporal (one-year) SR (Surface Reflectance) and NBAR (Nadir BRDF-Adjusted Reflectance) products, respectively. The IGBP 17 land cover categories were used in the classification experiments. As a result, the SR and NBAR products showed similar classification accuracies of 99%.
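    The exact construction of the time-domain co-occurrence matrix is not given in the abstract; a plausible minimal sketch, assuming reflectance values normalized to [0, 1) that are quantized and counted as transitions between consecutive dates, might look like:

```python
import numpy as np

def time_cooccurrence(series, levels=8):
    """Time-domain co-occurrence matrix of a 1-D seasonal signature.

    Quantize the time series into `levels` bins and count transitions
    between consecutive time steps, normalized to sum to 1.
    """
    q = np.clip((series * levels).astype(int), 0, levels - 1)
    M = np.zeros((levels, levels))
    for a, b in zip(q[:-1], q[1:]):
        M[a, b] += 1
    return M / M.sum()

def classify_min_distance(target, class_matrices):
    """Non-parametric minimum-distance match over co-occurrence matrices."""
    dists = {c: np.abs(target - M).sum() for c, M in class_matrices.items()}
    return min(dists, key=dists.get)
```

    A target pixel's 46-date signature would be converted to such a matrix and assigned to the class whose reference matrix is closest.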

  16. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
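    For context, the classical stochastic EnKF analysis step that the gPC surrogate accelerates can be sketched as follows; this is a generic textbook form with perturbed observations, not the paper's gPC implementation:

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step.

    ensemble: (n_members, n_state) forecast states; H: (n_obs, n_state)
    linear observation operator; y: (n_obs,) observation; R: (n_obs,
    n_obs) observation noise covariance. In the paper, such ensembles
    are sampled cheaply from a gPC surrogate instead of full model runs.
    """
    X = ensemble
    A = X - X.mean(0)
    P = A.T @ A / (len(X) - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # one perturbed observation per member
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(X))
    return X + (Y - X @ H.T) @ K.T
```

    With a small ensemble this update is dominated by sampling error, which is exactly the issue the gPC expansion addresses by allowing arbitrarily many surrogate samples at negligible cost.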

  17. A high accuracy algorithm of displacement measurement for a micro-positioning stage

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    2017-05-01

    A high-accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations was conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability; the resolution can theoretically attain 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross-correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) was built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
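    The two-stage matching idea (integer-pixel normalized cross-correlation search followed by subpixel refinement) can be sketched as follows; this brute-force version illustrates the principle, not the authors' accelerated algorithm:

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template at every valid offset.

    Brute-force integer-pixel search; the peak of the returned map is
    the integer displacement estimate.
    """
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            w = image[i:i + th, j:j + tw]
            ws = w.std()
            out[i, j] = 0.0 if ws == 0 else ((w - w.mean()) / ws * t).mean()
    return out

def subpixel_peak_1d(c_left, c_peak, c_right):
    """Parabolic interpolation of the correlation peak (per axis)."""
    denom = c_left - 2 * c_peak + c_right
    return 0.0 if denom == 0 else 0.5 * (c_left - c_right) / denom
```

    Applying the parabolic fit along each axis around the integer peak yields the subpixel displacement offset.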

  18. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    Science.gov (United States)

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.

  19. High Accuracy Mass Measurement of the Very Short-Lived Halo Nuclide $^{11}$Li

    CERN Multimedia

    Le scornet, G

    2002-01-01

    The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with $\textit{ab initio}$ calculations require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suiting the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.

  20. Computer modeling of oil spill trajectories with a high accuracy method

    International Nuclear Information System (INIS)

    Garcia-Martinez, Reinaldo; Flores-Tovar, Henry

    1999-01-01

    This paper proposes a high accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation for calculating oil trajectories, which minimises these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite the lack of adequate field information, model results compare well with observations in the impacted area. (Author)
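
The difference between Euler and fourth-order Runge-Kutta particle tracking can be illustrated with a short sketch. This is a generic tracer-advection example, not the OilTrack code; the rotational test field below is our assumption:

```python
import numpy as np

def euler_step(pos, velocity, dt):
    """First-order Euler advection step of a passive tracer."""
    return pos + dt * velocity(pos)

def rk4_step(pos, velocity, dt):
    """Fourth-order Runge-Kutta advection step: four velocity samples
    per step instead of one, giving fourth-order accuracy."""
    k1 = velocity(pos)
    k2 = velocity(pos + 0.5 * dt * k1)
    k3 = velocity(pos + 0.5 * dt * k2)
    k4 = velocity(pos + dt * k3)
    return pos + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

In a rotational eddy v = (-y, x) an exact trajectory stays on a circle; Euler particles spiral outward (the artificial dispersion mentioned in the abstract), while RK4 particles remain on the circle to high accuracy.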

  1. Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments

    International Nuclear Information System (INIS)

    Sutherland, B.; Hanlon, P.; Charles, P.

    2011-01-01

    Full text: Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy Linear Accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High resolution QA was done for all fields using the MatriXX ion chamber array, MapCHECK2 diode array shifted, and the EPID to determine a procedure for commissioning new treatment sites. Basic spine simulations found the TPS overestimated absorbed dose to bone; however, within the spinal cord there was good agreement. High resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 shifted and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to a probable recommendation of the EPID for future IMRT commissioning, due to its high resolution and the minimal setup required.
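
Gamma analysis, used above to compare simulated and planned dose distributions, combines a dose-difference criterion with a distance-to-agreement criterion. The following is a simplified 1-D sketch of a global 3%/3 mm gamma pass rate (names are ours; clinical tools work in 2-D/3-D with interpolation and thresholding):

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """1-D gamma index: a measured point passes if some reference point
    lies within the combined dose-difference / distance-to-agreement
    ellipse (gamma <= 1). Returns the pass rate in percent."""
    x = np.arange(len(ref)) * dx_mm
    dmax = ref.max()                      # global dose normalisation
    gammas = np.empty(len(meas))
    for i, d in enumerate(meas):
        dose_term = (ref - d) / (dose_tol * dmax)
        dist_term = (x - x[i]) / dist_tol_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)
```

A uniform 1% dose scaling passes everywhere under a 3% criterion, whereas a 50% error fails near the peak.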

  2. High Accuracy, High Energy He-ERD Analysis of H, D, and T

    International Nuclear Information System (INIS)

    Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.

    1999-01-01

    A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems extending to depths of several μm is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV ⁴He²⁺ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of ⁴He²⁺ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for a laboratory recoil angle of 30°.

  3. PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components

    CERN Document Server

    Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia

    2016-01-01

    The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned up to 10 μm w.r.t. a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools addressing several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...

  4. High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be

    CERN Multimedia

    2002-01-01

    State-of-the-art, three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...

  5. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    Science.gov (United States)

    Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana

    2015-01-01

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of ETref, which is essential for regional-scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy, using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS-modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to NLDAS-modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.

  6. High Accuracy Beam Current Monitor System for CEBAF'S Experimental Hall A

    International Nuclear Information System (INIS)

    J. Denard; A. Saha; G. Lavessiere

    2001-01-01

    CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental halls. In Hall A, all experiments require continuous, non-invasive current measurements, and a few experiments require an absolute accuracy of 0.2% in the current range from 1 to 180 μA. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 μA/V, but its offset drifts at the μA level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 μA of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described.

  7. Medication adherence assessment: high accuracy of the new Ingestible Sensor System in kidney transplants.

    Science.gov (United States)

    Eisenberger, Ute; Wüthrich, Rudolf P; Bock, Andreas; Ambühl, Patrice; Steiger, Jürg; Intondi, Allison; Kuranoff, Susan; Maier, Thomas; Green, Damian; DiCarlo, Lorenzo; Feutren, Gilles; De Geest, Sabina

    2013-08-15

    This open-label single-arm exploratory study evaluated the accuracy of the Ingestible Sensor System (ISS), a novel technology for directly assessing the ingestion of oral medications and treatment adherence. ISS consists of an ingestible event marker (IEM), a microsensor that becomes activated in gastric fluid, and an adhesive personal monitor (APM) that detects IEM activation. In this study, the IEM was combined with enteric-coated mycophenolate sodium (ECMPS). Twenty stable adult kidney transplant recipients received IEM-ECMPS for a mean of 9.2 weeks, totaling 1227 cumulative days. Eight patients prematurely discontinued treatment due to ECMPS gastrointestinal symptoms (n=2), skin intolerance to the APM (n=2), and insufficient system usability (n=4). Rash or erythema due to the APM was reported in 7 (37%) patients, all during the first month of use. No serious or severe adverse events and no rejection episodes were reported. IEM detection accuracy was 100% over 34 directly observed ingestions; Taking Adherence was 99.4% over a total of 2824 prescribed IEM-ECMPS ingestions. ISS could accurately detect the ingestion of two IEM-ECMPS capsules taken at the same time (detection rate of 99.3%, n=2376). ISS is a promising new technology that provides highly reliable measurements of the intake, and timing of intake, of drugs that are combined with the IEM.

  8. Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography

    Directory of Open Access Journals (Sweden)

    Jonathan Jones

    2018-03-01

    Full Text Available This paper describes the advantages and enhanced accuracy that thermography provides to high temperature mechanical testing. The technique is used not only to monitor but also to control test specimen temperatures, where the infra-red technique enables accurate non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are employed over a 200–800 °C temperature range on pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures that thermography can deliver, previously unobtainable by conventional thermocouples or even more modern pyrometers. As a result, the speed and accuracy of thermal profiling, thermal gradient measurements and cold/hot spot identification have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional, previously unknown effects such as thermocouple shadowing, preferential crack-tip heating within an induction coil, and the fundamental response times of individual measurement techniques, which are investigated further.

  9. An output amplitude configurable wideband automatic gain control with high gain step accuracy

    International Nuclear Information System (INIS)

    He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan

    2012-01-01

    An output amplitude configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The output amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, formed by the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. It shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB, provides a 3 dB bandwidth larger than 80 MHz, draws less than 1.8 mA, and occupies a die area of 800 × 300 μm². (semiconductor integrated circuits)

  10. High accuracy of family history of melanoma in Danish melanoma cases.

    Science.gov (United States)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-12-01

    The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first degree relatives. In 181 probands we validated the negative family history of melanoma in 748 first degree relatives and found only 1 case of melanoma which was not reported, in a 3-case melanoma family. Melanoma patients in Denmark report family history of melanoma in first and second degree relatives with a high level of accuracy, with a true positive predictive value between 77 and 87%. In 99% of probands reporting a negative family history of melanoma in first degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives be verified where possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and the assessment of melanoma risk in the family.

  11. High accuracy Primary Reference gas Mixtures for high-impact greenhouse gases

    Science.gov (United States)

    Nieuwenkamp, Gerard; Zalewska, Ewelina; Pearce-Hill, Ruth; Brewer, Paul; Resner, Kate; Mace, Tatiana; Tarhan, Tanil; Zellweger, Christophe; Mohn, Joachim

    2017-04-01

    Climate change, due to increased man-made emissions of greenhouse gases, poses one of the greatest risks to society worldwide. High-impact greenhouse gases (CO2, CH4 and N2O) and indirect drivers of global warming (e.g. CO) are measured by the global monitoring stations for greenhouse gases, operated and organized by the World Meteorological Organization (WMO). Reference gases for the calibration of analyzers have to meet a very challenging low level of measurement uncertainty to comply with the Data Quality Objectives (DQOs) set by the WMO. Within the framework of the European Metrology Research Programme (EMRP), a project to improve the metrology for high-impact greenhouse gases was granted (HIGHGAS, June 2014–May 2017). As a result of the HIGHGAS project, primary reference gas mixtures in cylinders for ambient levels of CO2, CH4, N2O and CO in air have been prepared with unprecedentedly low uncertainties, typically 3–10 times lower than previously achieved by the NMIs. To accomplish these low uncertainties in the reference standards, a number of preparation and analysis steps have been studied and improved. The purity analysis of the parent gases had to be performed with lower detection limits than previously achievable. For example, to achieve an uncertainty of 2·10⁻⁹ mol/mol (absolute) on the amount fraction for N2O, the detection limit for the N2O analysis in the parent gases has to be in the sub-nmol/mol domain. Results of an OPO-CRDS analyzer set-up in the 5 µm wavelength domain, with a 200·10⁻¹² mol/mol detection limit for N2O, will be presented. The adsorption effects of greenhouse gas components at cylinder surfaces are critical, and have been studied for different cylinder passivation techniques. Results of a two-year stability study will be presented. The fitness for purpose of the reference materials was studied with respect to possible variations in isotopic composition between the reference material and the sample. Measurement results for a suite of CO2 in air

  12. A method to incorporate uncertainty in the classification of remote sensing images

    OpenAIRE

    Gonçalves, Luísa M. S.; Fonte, Cidália C.; Júlio, Eduardo N. B. S.; Caetano, Mario

    2009-01-01

    The aim of this paper is to investigate if the incorporation of the uncertainty associated with the classification of surface elements into the classification of landscape units (LUs) increases the results accuracy. To this end, a hybrid classification method is developed, including uncertainty information in the classification of very high spatial resolution multi-spectral satellite images, to obtain a map of LUs. The developed classification methodology includes the following...

  13. Accuracy optimization of high-speed AFM measurements using Design of Experiments

    DEFF Research Database (Denmark)

    Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard

    2010-01-01

    Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement results are influenced by a number of scan settings parameters defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of information from collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied...

  14. Recent high-accuracy measurements of the 1S0 neutron-neutron scattering length

    International Nuclear Information System (INIS)

    Howell, C.R.; Chen, Q.; Gonzalez Trotter, D.E.; Salinas, F.; Crowell, A.S.; Roper, C.D.; Tornow, W.; Walter, R.L.; Carman, T.S.; Hussein, A.; Gibbs, W.R.; Gibson, B.F.; Morris, C.; Obst, A.; Sterbenz, S.; Whitton, M.; Mertens, G.; Moore, C.F.; Whiteley, C.R.; Pasyuk, E.; Slaus, I.; Tang, H.; Zhou, Z.; Gloeckle, W.; Witala, H.

    2000-01-01

    This paper reports two recent high-accuracy determinations of the ¹S₀ neutron-neutron scattering length, aₙₙ. One was done at the Los Alamos National Laboratory using the π⁻d capture reaction to produce two neutrons with low relative momentum. The neutron-deuteron (nd) breakup reaction was used in the other measurement, which was conducted at the Triangle Universities Nuclear Laboratory. The results from the two determinations were consistent with each other and with previous values obtained using the π⁻d capture reaction. The value obtained from the nd breakup measurements is aₙₙ = -18.7 ± 0.1 (statistical) ± 0.6 (systematic) fm, and the value from the π⁻d capture experiment is aₙₙ = -18.50 ± 0.05 ± 0.53 fm. The recommended value is aₙₙ = -18.5 ± 0.3 fm. (author)
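
Combining such determinations is commonly done with an inverse-variance weighted mean, with statistical and systematic errors added in quadrature for each input. The sketch below applies that textbook recipe to the two quoted values; it assumes independent uncertainties, so the abstract's recommended value of -18.5 ± 0.3 fm, which reflects the authors' fuller error treatment, need not coincide with this naive combination:

```python
import math

def weighted_mean(measurements):
    """Inverse-variance weighted mean of independent measurements given
    as (value, statistical error, systematic error); the two error
    components are combined in quadrature for each input."""
    weights = [1.0 / (math.hypot(stat, sys) ** 2) for _, stat, sys in measurements]
    values = [v for v, _, _ in measurements]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, 1.0 / math.sqrt(wsum)

# The two a_nn determinations quoted in the abstract (fm):
a_nn, err = weighted_mean([(-18.7, 0.1, 0.6), (-18.50, 0.05, 0.53)])
```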

  15. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

    In the digital low level RF (LLRF) system of a circular (particle) accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs among LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency or algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which meets the high accuracy requirements of modern LLRF. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibrations. (authors)
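
Digital detection of IF amplitude and phase is typically done by IQ demodulation: the sampled signal is multiplied by a numerical quadrature reference at the IF and averaged. The following is a minimal sketch of that generic step, not the firmware of the system described; names are ours, and it assumes the record spans an integer number of IF periods:

```python
import numpy as np

def amplitude_phase(samples, f_if, fs):
    """Digital IQ detection: mix the sampled IF signal with a complex
    reference at frequency f_if and average over the record. The factor
    of 2 restores the cosine's amplitude after mixing."""
    n = np.arange(len(samples))
    reference = np.exp(-2j * np.pi * (f_if / fs) * n)
    iq = 2.0 * np.mean(samples * reference)
    return np.abs(iq), np.angle(iq)
```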

  16. High-accuracy biodistribution analysis of adeno-associated virus variants by double barcode sequencing.

    Science.gov (United States)

    Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei

    2015-01-01

    Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency.

  17. Accuracy and high-speed technique for autoprocessing of Young's fringes

    Science.gov (United States)

    Chen, Wenyi; Tan, Yushan

    1991-12-01

    In this paper, an accurate and high-speed method for auto-processing of Young's fringes is proposed. Groups of 1-D intensity samples are taken along three or more different directions across the Young's fringes, and the fringe spacing along each direction is obtained by a 1-D FFT. The two directions with the smallest fringe spacings are then selected. Accurate fringe spacings along these two directions are obtained using the orthogonal coherent phase detection (OCPD) technique. The actual spacing and angle of the Young's fringes can then be calculated. The principle of OCPD is introduced in detail, and the accuracy of the method is evaluated theoretically and experimentally.
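
The first stage described, estimating fringe spacing from a 1-D intensity profile by FFT, can be sketched as follows (the OCPD refinement step is not reproduced; names are ours):

```python
import numpy as np

def fringe_spacing(profile, dx=1.0):
    """Estimate fringe spacing from a 1-D intensity profile: remove the
    mean, take the FFT magnitude, and read off the dominant nonzero
    spatial-frequency bin. dx is the sample spacing."""
    n = len(profile)
    spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return (n * dx) / k
```

The FFT estimate is quantized to the bin spacing, which is why the method follows it with a phase-detection refinement along the two best directions.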

  18. DEVELOPMENT OF COMPLEXITY, ACCURACY, AND FLUENCY IN HIGH SCHOOL STUDENTS’ WRITTEN FOREIGN LANGUAGE PRODUCTION

    Directory of Open Access Journals (Sweden)

    Bouchaib Benzehaf

    2016-11-01

    Full Text Available The present study aims to longitudinally depict the dynamic and interactive development of Complexity, Accuracy, and Fluency (CAF) in multilingual learners’ L2 and L3 writing. The data sources include free writing tasks written in L2 French and L3 English by 45 high school participants over a period of four semesters. CAF dimensions are measured using a variation of Hunt’s T-units (1964). Analysis of the quantitative data obtained suggests that CAF measures develop differently for learners’ L2 French and L3 English. They increase more persistently in L3 English, and they display the characteristics of a dynamic, non-linear system characterized by ups and downs, particularly in L2 French. In light of the results, we suggest collecting more and denser longitudinal data to explore the nature of the interactions between these dimensions in foreign language development, particularly at the individual level.

  19. Accuracy of thick-walled hollows during piercing on three-high mill

    International Nuclear Information System (INIS)

    Potapov, I.N.; Romantsev, B.A.; Shamanaev, V.I.; Popov, V.A.; Kharitonov, E.A.

    1975-01-01

    The results of investigations are presented concerning the dimensional accuracy of thick-walled sleeves produced by piercing on the 100-ton MISiS three-high screw rolling mill with three schemes of fixing and centering the rod. The use of a spherical thrust journal for the rod and of a long centering bushing makes it possible to reduce the wall-thickness non-uniformity of the sleeves by 30-50%. It is established that thick-walled sleeves with accurate geometrical dimensions (wall-thickness non-uniformity below 10%) can be produced if the sleeve-mandrel-rod system is highly rigid and the rod has a two- to three-fold stability margin over a length equal to that of the sleeve being pierced. Piercing is best carried out at increased feed angles (14-16°). The blanks were made from 12Kh1MF steel.

  20. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  1. Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems

    Science.gov (United States)

    Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.

    2015-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm, with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, providing potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, demonstrating that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
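
In lidar bathymetry of this kind, depth follows from the interval between the surface and bottom returns of a pulse, with the in-water speed of light reduced by the refractive index. The sketch below shows the basic geometry under the usual assumptions (n ≈ 1.33, near-vertical incidence); it is an illustration, not the instrument's processing chain:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # refractive index of water (assumed)

def water_depth_m(t_surface_s, t_bottom_s):
    """Depth from the interval between the surface and bottom returns
    of a single lidar pulse: half the round-trip optical path,
    travelled at the reduced in-water speed c/n."""
    dt = t_bottom_s - t_surface_s
    return (C_VACUUM / N_WATER) * dt / 2.0
```

A 1 cm depth resolution corresponds to resolving return intervals of roughly 90 ps, which is why separating the surface and bottom returns unambiguously is the hard part.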

  2. Innovative Technique for High-Accuracy Remote Monitoring of Surface Water

    Science.gov (United States)

    Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.

    2016-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm, with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, providing potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, demonstrating that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  3. High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) during BARCA

    Science.gov (United States)

    Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.

    2009-12-01

High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and pressure-broadening effects that resulted from the differing composition of synthetic vs. ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with a mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS is sufficiently stable to be used in flight without drying the air or in-flight calibration, and that the water vapor corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
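The dilution and pressure-broadening correction described above is commonly expressed as an empirical quadratic in the reported water vapor. A hedged sketch of that form, with placeholder coefficients (the actual values are fit in the laboratory experiments):

```python
# Hedged sketch of a CRDS water-vapor correction: the measured (wet) mole
# fraction relates to the dry mole fraction through an empirical quadratic in
# the reported H2O. The coefficients below are illustrative placeholders, not
# the values derived in the study above.
A, B = -0.012, -2.7e-4   # illustrative dilution + pressure-broadening terms

def co2_dry(co2_wet_ppm: float, h2o_percent: float) -> float:
    """Recover the dry-air CO2 mole fraction (ppm) from the wet measurement."""
    return co2_wet_ppm / (1.0 + A * h2o_percent + B * h2o_percent ** 2)
```

With negative coefficients, the wet reading underestimates the dry mole fraction, so the correction raises it; at zero water vapor the measurement passes through unchanged.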

  4. High-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects.

    Science.gov (United States)

    Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian

    2015-06-29

A simple and high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when a dual-frequency laser beam reflects numerous times in a Fabry-Perot cavity and then returns to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer has the advantage of higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step was conducted, which shows great potential in nanometrology.

  5. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    Science.gov (United States)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
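The storage saving of the SADH approach comes from replacing the G×G co-occurrence matrix with two length-(2G−1) histograms of pixel sums and differences. A minimal sketch of the idea (not the authors' implementation):

```python
# Minimal sketch of sum-and-difference histograms (SADH): for a displacement
# (dx, dy), accumulate histograms of pixel sums and differences instead of the
# full G x G gray-level co-occurrence matrix, cutting storage from O(G^2) to
# O(G). Illustrative only.
import numpy as np

def sadh(img: np.ndarray, dx: int, dy: int, levels: int = 16):
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx]            # reference pixels
    b = img[dy:h, dx:w]                    # neighbours displaced by (dx, dy)
    s = (a + b).ravel()                    # sums in [0, 2*(levels-1)]
    d = (a - b).ravel() + (levels - 1)     # differences, shifted non-negative
    hist_sum = np.bincount(s, minlength=2 * levels - 1)
    hist_diff = np.bincount(d, minlength=2 * levels - 1)
    return hist_sum, hist_diff
```

Texture statistics (mean, contrast, entropy, and so on) are then computed from these two marginal histograms rather than the full matrix.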

  6. Object-based methods for individual tree identification and tree species classification from high-spatial resolution imagery

    Science.gov (United States)

    Wang, Le

    2003-10-01

textures occurring due to branches and twigs. As a result of the inverse wavelet transform, the tree crown boundary is enhanced while the unwanted textures are suppressed. Based on the enhanced image, an improvement is achieved when applying the two-stage methods to a high-resolution aerial photograph. To improve tree species classification, we develop a new method to choose the optimal scale parameter with the aid of the Bhattacharyya Distance (BD), a well-known index of class separability in traditional pixel-based classification. The optimal scale parameter is then fed into a region-growing-based segmentation as a break-off value. Our object classification achieves better accuracy in separating tree species than the conventional Maximum Likelihood Classification (MLC). In summary, we develop two object-based methods for identifying individual trees and classifying tree species from high-spatial-resolution imagery. Both methods achieve promising results and will promote the integration of Remote Sensing and GIS in forest applications.
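The class-separability index mentioned above can be sketched for the one-dimensional Gaussian case; this is a simplification, since the multiband form used in practice generalizes it with mean vectors and covariance matrices:

```python
# Bhattacharyya distance between two 1-D Gaussian class distributions: the
# first term penalizes mean separation relative to spread, the second
# penalizes variance mismatch. Larger values mean more separable classes.
import math

def bhattacharyya_1d(m1: float, v1: float, m2: float, v2: float) -> float:
    """m1, m2: class means; v1, v2: class variances (both > 0)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))
```

Identical distributions give a distance of zero; scanning this index over candidate scale parameters picks the scale at which the target classes separate best.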

  7. Efficacy of the Kyoto Classification of Gastritis in Identifying Patients at High Risk for Gastric Cancer.

    Science.gov (United States)

    Sugimoto, Mitsushige; Ban, Hiromitsu; Ichikawa, Hitomi; Sahara, Shu; Otsuka, Taketo; Inatomi, Osamu; Bamba, Shigeki; Furuta, Takahisa; Andoh, Akira

    2017-01-01

Objective The Kyoto gastritis classification categorizes the endoscopic characteristics of Helicobacter pylori (H. pylori) infection-associated gastritis and identifies patterns associated with a high risk of gastric cancer. We investigated its efficacy by comparing scores in patients with H. pylori-associated gastritis and with gastric cancer. Methods A total of 1,200 patients with H. pylori-positive gastritis alone (n=932), early-stage H. pylori-positive gastric cancer (n=189), or successfully treated H. pylori-negative cancer (n=79) were endoscopically graded according to the Kyoto gastritis classification for atrophy, intestinal metaplasia, fold hypertrophy, nodularity, and diffuse redness. Results The prevalence of O-II/O-III-type atrophy according to the Kimura-Takemoto classification in the early-stage H. pylori-positive gastric cancer and successfully treated H. pylori-negative cancer groups was 45.1%, significantly higher than in subjects with gastritis alone (12.7%). The Kyoto gastritis scores for atrophy and intestinal metaplasia in the H. pylori-positive cancer group were likewise significantly higher than in subjects with gastritis alone. Conclusion The Kyoto gastritis classification may thus be useful for detecting these patients.

  8. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    Science.gov (United States)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, defining a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning, bioinformatics and, more generally, any field requiring an evaluation of the performance of a learning algorithm (e.g. geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method overcomes the most significant drawbacks of the commonly used Hold-Out Validation (HOV) method, which is based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact, the HOV is generally not reliable and is not applicable when only a small number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
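The LOOCV procedure itself is generic: each known ground point takes one turn as the lone check point while the rest act as control points. A schematic sketch (not the SISAR routine), with user-supplied fit/predict callables standing in for the orientation-orthorectification model:

```python
# Schematic leave-one-out cross-validation: fit on all points but one, measure
# the residual at the held-out point, and aggregate the residuals into an RMSE.
# The fit/predict callables are placeholders for the orientation model.
import math

def loocv_rmse(points, fit, predict):
    """points: list of (input, target) pairs; returns the LOOCV RMSE."""
    residuals = []
    for i in range(len(points)):
        train = points[:i] + points[i + 1:]   # every point except the i-th
        model = fit(train)                    # "orient" using the remaining GCPs
        x, y = points[i]
        residuals.append(predict(model, x) - y)
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

Unlike Hold-Out Validation, every known point is used both for fitting and, exactly once, for checking, which is what keeps the estimate usable when few ground points are available.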

  9. Application of Object Based Classification and High Resolution Satellite Imagery for Savanna Ecosystem Analysis

    Directory of Open Access Journals (Sweden)

    Jane Southworth

    2010-12-01

Full Text Available Savanna ecosystems are an important component of dryland regions and yet are exceedingly difficult to study using satellite imagery. Savannas are composed of varying amounts of trees, shrubs and grasses, and traditional classification schemes or vegetation indices typically cannot differentiate across class types. This research utilizes object-based classification (OBC) for a region in Namibia, using IKONOS imagery, to help differentiate tree canopies, and therefore woodland savanna, from shrub or grasslands. The methodology involved the identification and isolation of tree canopies within the imagery and the creation of tree polygon layers, which had an overall accuracy of 84%. In addition, the results were scaled up to a corresponding Landsat image of the same region, and the OBC results were compared to corresponding pixel values of NDVI. The results were not compelling, indicating once more the problems of these traditional image analysis techniques for savanna ecosystems. Overall, the use of OBC holds great promise for this ecosystem and could be utilized more frequently in studies of vegetation structure.

  10. High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2

    Science.gov (United States)

    Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.

    2016-01-01

Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution (1.3 × 2.3 km²) and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.

  11. A new device for liver cancer biomarker detection with high accuracy

    Directory of Open Access Journals (Sweden)

    Shuaipeng Wang

    2015-06-01

Full Text Available A novel cantilever-array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced stiffness (k) variation is dramatically reduced compared with that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass-detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained with a fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.
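For context, the textbook resonant mass-detection relation that such models refine infers added mass from the resonance-frequency shift at fixed stiffness. A sketch of that baseline (the paper's analytical model additionally corrects for the adsorption-induced stiffness change, which this simple form ignores):

```python
# Baseline resonant mass sensing: for a resonator with effective stiffness k
# and resonance frequency f, m_eff = k / (4 * pi^2 * f^2). Added mass is the
# difference of effective masses before (f0) and after (f1) adsorption.
# Illustrative sketch only; assumes k is unchanged by adsorption.
import math

def added_mass(k: float, f0: float, f1: float) -> float:
    """k: effective stiffness (N/m); f0, f1: resonance before/after (Hz)."""
    return k / (4.0 * math.pi ** 2) * (1.0 / f1 ** 2 - 1.0 / f0 ** 2)
```

Adsorbed mass lowers the resonance frequency, so f1 < f0 yields a positive mass; when adsorption also changes k, this simple relation misattributes the stiffness effect to mass, which is exactly the error the paper's model removes.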

  12. Design and Performance Evaluation of Real-time Endovascular Interventional Surgical Robotic System with High Accuracy.

    Science.gov (United States)

    Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan

    2018-05-15

Endovascular interventional surgery (EIS) is performed in a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce radiation to surgeons, and the physical stress imposed by lead aprons, during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allows the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons and two twist switches (to control the catheter manipulators) are presented. The entire robotic system was built on a master-slave control structure communicating over a CAN (Controller Area Network) bus; the slave side of the robotic system showed highly accurate control over velocity and displacement with a PID control method. The robotic system passed in vitro and animal experiments. Through functionality evaluation, the manipulators were able to complete interventional surgical motions both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system is able to imitate the movements of surgeons and to accomplish axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.

  13. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.

    Science.gov (United States)

    Qi, Jun; Liu, Guo-Ping

    2017-11-06

This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between nodes, with accuracy up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments with a moving robot, when the UIPS works on the line-of-sight (LOS) signal.
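The two core geometric steps of such a system can be sketched generically: range from time-of-flight, then position by least-squares trilateration against the fixed beacons. This is a minimal sketch of the standard linearized approach, not the paper's envelope-detection pipeline; beacon coordinates and the speed of sound are illustrative:

```python
# Range from TOF, then 2D position by linearized least-squares trilateration:
# subtracting the first beacon's range equation from the others cancels the
# quadratic terms, leaving a linear system in the target coordinates.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C (illustrative)

def tof_to_range(tof_s: float) -> float:
    return SPEED_OF_SOUND * tof_s

def trilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """beacons: (N, 2) known positions; ranges: (N,) measured distances."""
    x0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

With more than three beacons the least-squares solution averages out individual pseudo-range errors, which is how millimetre-level location accuracy can follow from sub-millimetre ranging precision.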

  14. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slant optical axis (misalignment of the camera's optical axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimen. These introduce measurement errors into the 2D DIC results, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track its motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method yields good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
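The quoted 10 με accuracy at a 400-pixel extensometer length corresponds to resolving a relative endpoint displacement of only 0.004 pixels. The underlying virtual-extensometer arithmetic is simply engineering strain over the gauge section (a sketch with illustrative numbers, not the paper's compensation pipeline):

```python
# Optical (virtual) extensometer: engineering strain over a gauge section from
# the DIC-measured displacements of its two endpoints. Values illustrative.
def extensometer_strain(x1: float, x2: float, u1: float, u2: float) -> float:
    """x1, x2: endpoint positions (px); u1, u2: their displacements (px)."""
    gauge = x2 - x1
    return (u2 - u1) / gauge

# A 400 px gauge whose far end moves 0.004 px more than the near end
# corresponds to 1e-5 strain, i.e. 10 microstrain.
strain = extensometer_strain(0.0, 400.0, 0.0, 0.004)
```
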

  15. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Qi

    2017-11-01

Full Text Available This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between nodes, with accuracy up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments with a moving robot, when the UIPS works on the line-of-sight (LOS) signal.

  16. High accuracy electromagnetic field solvers for cylindrical waveguides and axisymmetric structures using the finite element method

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1993-12-01

Some two-dimensional finite element electromagnetic field solvers are described and tested. For TE and TM modes in homogeneous cylindrical waveguides and monopole modes in homogeneous axisymmetric structures, the solvers find approximate solutions to a weak formulation of the wave equation. Second-order isoparametric Lagrangian triangular elements represent the field. For multipole modes in axisymmetric structures, the solver finds approximate solutions to a weak form of the curl-curl formulation of Maxwell's equations. Second-order triangular edge elements represent the radial (ρ) and axial (z) components of the field, while a second-order Lagrangian basis represents the azimuthal (φ) component of the field weighted by the radius ρ. A reduced set of basis functions is employed for elements touching the axis. With this basis the spurious modes of the curl-curl formulation have zero frequency, so spurious modes are easily distinguished from non-static physical modes. Tests on an annular ring, a pillbox and a sphere indicate that the solutions converge rapidly as the mesh is refined. Computed eigenvalues with relative errors of less than a few parts per million are obtained. Boundary conditions for symmetric, periodic and symmetric-periodic structures are discussed and included in the field solver. Boundary conditions for structures with inversion symmetry are also discussed. Special corner elements are described and employed to improve the accuracy of cylindrical waveguide and monopole modes with singular fields at sharp corners. The field solver is applied to three problems: (1) cross-field amplifier slow-wave circuits, (2) a detuned disk-loaded waveguide linear accelerator structure and (3) a 90° overmoded waveguide bend. The detuned accelerator structure is a critical application of this high-accuracy field solver: to maintain low long-range wakefields, tight design and manufacturing tolerances are required.

  17. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    Science.gov (United States)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

The registration of terrestrial laser point clouds with close-range images is key to high-precision 3D reconstruction of cultural-relic objects. Given the current demand for high texture resolution in the cultural-relic field, the registration of point cloud and image data during object reconstruction must match one point cloud to multiple images. In current commercial software, this two-way registration is achieved by manually segmenting the point cloud data, manually matching point cloud and image data, and manually selecting corresponding 2D image points and point cloud points; this process not only greatly reduces working efficiency but also limits the registration accuracy and causes texture seams in the colored point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses matching techniques to establish the one-to-multiple correspondence between the point cloud and the images automatically. Matching of the point cloud's center-projection reflectance-intensity image against the optical images yields corresponding feature points automatically, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection achieves high-accuracy automatic registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural-relic objects, and has both scientific research value and practical significance.
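The rotation and similarity-transform machinery referenced above can be sketched as a rotation matrix built from Rodrigues' formula, applied inside a seven-parameter spatial similarity transform (scale, rotation, translation). This is an illustrative sketch only; the paper additionally estimates the parameters with iterative weight selection, which is omitted here:

```python
# Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2, with K the
# skew-symmetric cross-product matrix of the (unit) rotation axis. The
# similarity transform then maps points as x' = s * R x + t.
import numpy as np

def rodrigues(axis, angle: float) -> np.ndarray:
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)                 # normalize the rotation axis
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])        # cross-product (skew) matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def similarity_transform(points: np.ndarray, scale: float,
                         R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply x' = s * R x + t to an (N, 3) array of points."""
    return scale * points @ R.T + t
```

Given matched feature points in both coordinate frames, the scale, rotation and translation are what the registration solves for.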

  18. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    OpenAIRE

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes a...

  19. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
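The second scenario rests on turning dense convolutional activations into a single global descriptor that a linear classifier can consume. An illustrative sketch of that pooling step (not the paper's feature-coding pipeline; the random array stands in for real CNN activations):

```python
# Pooling dense conv-layer activations into one global image descriptor:
# average over the spatial grid of an (H, W, C) activation map, then
# L2-normalize so descriptors are comparable across images. The random
# input is a placeholder for activations from a pre-trained CNN.
import numpy as np

def global_descriptor(conv_features: np.ndarray) -> np.ndarray:
    """Average-pool (H, W, C) activations to a C-dim, L2-normalized vector."""
    v = conv_features.mean(axis=(0, 1))       # collapse the spatial grid
    return v / np.linalg.norm(v)              # unit length for cosine matching

feat = np.random.rand(7, 7, 512)              # stand-in last-conv activations
desc = global_descriptor(feat)                # 512-dim image feature
```

In the paper, richer encodings than plain average pooling are used at this step, but the shape of the problem (dense local features in, one fixed-length vector out) is the same.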

  20. Accuracy of Administrative Codes for Distinguishing Positive Pressure Ventilation from High-Flow Nasal Cannula.

    Science.gov (United States)

    Good, Ryan J; Leroue, Matthew K; Czaja, Angela S

    2018-06-07

Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. Therefore, we sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort (n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
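The likelihood ratios quoted above follow directly from the sensitivity and specificity point estimates, and recomputing them makes the relationship explicit:

```python
# Likelihood ratios from test characteristics: LR+ = sens / (1 - spec) is how
# much a positive code multiplies the odds of true NIPPV use; LR- = (1 - sens)
# / spec is the corresponding factor for a negative code.
def likelihood_ratios(sensitivity: float, specificity: float):
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Point estimates reported for the NIPPV procedure code above.
lr_pos, lr_neg = likelihood_ratios(0.911, 0.576)
```

Plugging in 91.1% and 57.6% reproduces the reported LR+ of 2.15 and LR− of 0.15.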

  1. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    DimitrisG. Stavrakoudis

    2012-04-01

Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results on a hyperspectral remote sensing classification task, as well as on 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  2. High-resolution CT of nontuberculous mycobacterium infection in adult CF patients: diagnostic accuracy

    International Nuclear Information System (INIS)

    McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D.; DeJong, Pim A.; Loeve, Martine; Tiddens, Harm A.W.M.; McKone, Edward; Gallagher, Charles G.

    2012-01-01

To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operator curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)

  3. High-resolution CT of nontuberculous mycobacterium infection in adult CF patients: diagnostic accuracy

    Energy Technology Data Exchange (ETDEWEB)

    McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D. [St. Vincent's University Hospital, Department of Radiology, Dublin (Ireland); DeJong, Pim A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Loeve, Martine; Tiddens, Harm A.W.M. [Erasmus MC-Sophia Children's Hospital, Department of Radiology, Department of Pediatric Pulmonology and Allergology, Rotterdam (Netherlands); McKone, Edward; Gallagher, Charles G. [St. Vincent's University Hospital, Department of Respiratory Medicine and National Referral Centre for Adult Cystic Fibrosis, Dublin (Ireland)

    2012-12-15

    To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operating characteristic (ROC) curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)

  4. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

    Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  5. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
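    A minimal sketch of the spline-PSAC idea: each segment of one sine period is replaced by a cubic Hermite polynomial matched to sin/cos at the knots. The segment count and test resolution below are illustrative, not the paper's configuration:

```python
import numpy as np

SEGMENTS = 64
knots = np.linspace(0.0, 2 * np.pi, SEGMENTS + 1)

def psac(phase):
    """Evaluate a piecewise-cubic approximation of sin(phase)."""
    phase = np.asarray(phase) % (2 * np.pi)
    i = np.minimum((phase / (2 * np.pi) * SEGMENTS).astype(int), SEGMENTS - 1)
    h = knots[1] - knots[0]
    t = (phase - knots[i]) / h                     # local coordinate in [0, 1)
    y0, y1 = np.sin(knots[i]), np.sin(knots[i + 1])
    d0, d1 = h * np.cos(knots[i]), h * np.cos(knots[i + 1])
    # Cubic Hermite basis: matches value and derivative at both knots.
    return ((2 * t**3 - 3 * t**2 + 1) * y0 + (t**3 - 2 * t**2 + t) * d0
            + (-2 * t**3 + 3 * t**2) * y1 + (t**3 - t**2) * d1)

phases = np.linspace(0.0, 2 * np.pi, 100001)
err = np.max(np.abs(psac(phases) - np.sin(phases)))
print(f"max abs error with {SEGMENTS} segments: {err:.2e}")
```

With 64 segments the maximum error is bounded by h⁴/384 · max|sin⁗| ≈ 2.4 × 10⁻⁷, which is why a modest table plus cubic coefficients can reach very high SFDR.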

  6. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level radioactive waste (HLW) as: (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel....that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission....determines....requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph (B). The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). A basic waste classification scheme results from the quantitative definitions

  7. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications in which this problem arises. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML results is the use of heuristic hand-crafted or engineered features with shallow feature-learning architectures. The problem lies in the possibility of not finding the most appropriate features, which would give high classification accuracy on this ECG problem. One proposed solution is to use deep learning architectures, in which the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
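    A minimal numpy sketch of the described architecture: 1D convolutional feature extractors followed by a fully-connected decision layer. Shapes, filter counts and the random (untrained) weights are illustrative; a real model would be trained on labelled ECG data:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """x: (length, in_ch); kernels: (k, in_ch, out_ch) -> 'valid' conv + ReLU."""
    k, _, out_ch = kernels.shape
    length = x.shape[0] - k + 1
    out = np.empty((length, out_ch))
    for i in range(length):
        out[i] = np.tensordot(x[i:i + k], kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)                    # ReLU

def ecg_forward(signal, n_classes=4):
    x = signal[:, None]                            # (length, 1 channel)
    x = conv1d(x, rng.normal(size=(7, 1, 8)))      # conv feature extractor 1
    x = conv1d(x, rng.normal(size=(5, 8, 16)))     # conv feature extractor 2
    pooled = x.mean(axis=0)                        # global average pooling
    w = rng.normal(size=(16, n_classes))           # fully-connected layer
    logits = pooled @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # softmax class probabilities

probs = ecg_forward(np.sin(np.linspace(0.0, 20.0, 300)))
print(probs)
```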

  8. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which, in comparison to a fixed grid search, finds a global optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
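    The SCAD penalty referred to above has a standard closed form (the a = 3.7 default is conventional); a sketch, with a ridge term added in the spirit of the Elastic SCAD idea:

```python
import numpy as np

def scad_penalty(w, lam, a=3.7):
    """Standard SCAD penalty, evaluated elementwise."""
    w = np.atleast_1d(np.abs(np.asarray(w, dtype=float)))
    small = w <= lam                                  # LASSO-like region
    mid = (w > lam) & (w <= a * lam)                  # quadratic transition
    out = np.empty_like(w)
    out[small] = lam * w[small]
    out[mid] = (2 * a * lam * w[mid] - w[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[~small & ~mid] = lam ** 2 * (a + 1) / 2       # flat tail: no extra shrinkage
    return out

def elastic_scad_penalty(w, lam1, lam2, a=3.7):
    """SCAD plus a ridge term, in the spirit of Elastic SCAD."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    return scad_penalty(w, lam1, a) + lam2 * w ** 2

w = np.linspace(-6.0, 6.0, 121)
print(scad_penalty(w, lam=1.0).max())   # tail is constant: large weights are not over-penalised
```

Unlike LASSO, whose penalty grows without bound, SCAD flattens out beyond a·λ, which is what avoids over-shrinking large, genuinely informative coefficients.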

  9. Creating high-resolution time series land-cover classifications in rapidly changing forested areas with BULC-U in Google Earth Engine

    Science.gov (United States)

    Cardille, J. A.; Lee, J.

    2017-12-01

    With the opening of the Landsat archive, there is a dramatically increased potential for creating high-quality time series of land use/land-cover (LULC) classifications derived from remote sensing. Although LULC time series are appealing, their creation is typically challenging in two fundamental ways. First, there is a need to create maximally correct LULC maps for consideration at each time step; and second, there is a need to have the elements of the time series be consistent with each other, without pixels that flip improbably between covers due only to unavoidable, stray classification errors. We have developed the Bayesian Updating of Land Cover - Unsupervised (BULC-U) algorithm to address these challenges simultaneously, and introduce and apply it here for two related but distinct purposes. First, with minimal human intervention, we produced an internally consistent, high-accuracy LULC time series in rapidly changing Mato Grosso, Brazil for a time interval (1986-2000) in which cropland area more than doubled. The spatial and temporal resolution of the 59 LULC snapshots allows users to witness the establishment of towns and farms at the expense of forest. The new time series could be used by policy-makers and analysts to unravel important considerations for conservation and management, including the timing and location of past development, the rate and nature of changes in forest connectivity, the connection with road infrastructure, and more. The second application of BULC-U is to sharpen the well-known GlobCover 2009 classification from 300m to 30m, while improving accuracy measures for every class. The greatly improved resolution and accuracy permits a better representation of the true LULC proportions, the use of this map in models, and quantification of the potential impacts of changes. Given that there may easily be thousands and potentially millions of images available to harvest for an LULC time series, it is imperative to build useful algorithms
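    A toy sketch of the Bayesian-updating idea at the core of BULC: each pixel keeps a probability vector over land-cover classes, and every new noisy classification updates it through an assumed confusion matrix. The classes and matrix values are illustrative assumptions, not taken from the study:

```python
import numpy as np

classes = ["forest", "cropland", "urban"]
# Assumed classifier quality: rows are the true class, columns give the
# probability that one noisy classification reports each class.
confusion = np.array([
    [0.85, 0.10, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

def bayes_update(prior, observed_idx):
    """One update: posterior ∝ prior × P(observed class | true class)."""
    posterior = prior * confusion[:, observed_idx]
    return posterior / posterior.sum()

pixel = np.full(3, 1.0 / 3.0)                 # uninformative prior
for obs in ["cropland", "cropland", "forest", "cropland"]:
    pixel = bayes_update(pixel, classes.index(obs))

print(dict(zip(classes, pixel.round(3))))
```

Because evidence accumulates across the time series, a single stray classification (the lone "forest" above) barely moves the posterior, which is how the method suppresses pixels that would otherwise flip improbably between covers.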

  10. High-accuracy detection of early Parkinson's Disease using multiple characteristics of finger movement while typing.

    Directory of Open Access Journals (Sweden)

    Warwick R Adams

    Full Text Available Parkinson's Disease (PD) is a progressive neurodegenerative movement disorder affecting over 6 million people worldwide. Loss of dopamine-producing neurons results in a range of both motor and non-motor symptoms; however, there is currently no definitive test for PD by non-specialist clinicians, especially in the early disease stages where the symptoms may be subtle and poorly characterised. This results in a high misdiagnosis rate (up to 25% by non-specialists), and people can have the disease for many years before diagnosis. There is a need for a more accurate, objective means of early detection, ideally one which can be used by individuals in their home setting. In this investigation, keystroke timing information from 103 subjects (comprising 32 with mild PD severity and the remainder non-PD controls) was captured as they typed on a computer keyboard over an extended period; analysis showed that PD affects various characteristics of hand and finger movement and that these can be detected. A novel methodology was used to classify the subjects' disease status, by utilising a combination of many keystroke features which were analysed by an ensemble of machine learning classification models. When applied to two separate participant groups, this approach was able to successfully discriminate between early-PD subjects and controls with 96% sensitivity, 97% specificity and an AUC of 0.98. The technique does not require any specialised equipment or medical supervision, and does not rely on the experience and skill of the practitioner. Regarding more general application, it currently does not incorporate a second cardinal disease symptom, so it may not differentiate PD from similar movement-related disorders.

  11. Diagnosing multibacillary leprosy: A comparative evaluation of diagnostic accuracy of slit-skin smear, bacterial index of granuloma and WHO operational classification

    Directory of Open Access Journals (Sweden)

    Bhushan Premanshu

    2008-01-01

    Full Text Available Background: In view of the relatively poor performance of skin smears, WHO adopted a purely clinical operational classification; however, the poor specificity of the operational classification leads to overdiagnosis and unwarranted overtreatment, while its poor sensitivity leads to underdiagnosis of multibacillary (MB) cases with inadequate treatment. Bacilli are more frequently and abundantly demonstrated in tissue sections. Aims and Methods: We compared WHO classification, slit-skin smears (SSS) and demonstration of bacilli in biopsies (bacterial index of granuloma, or BIG) with regard to their efficacy in correctly identifying multibacillary cases. The tests were done on 141 patients and were evaluated for their ability to diagnose true MB leprosy using detailed statistical analysis. Results: A total of 76 patients were truly MB, with either positive smears, BIG positivity or a typical histology of BB, BL or LL. Amongst these 76 true-MB patients, WHO operational classification correctly identified multibacillary status in 56 (73.68%), and SSS in 43 (56.58%), while BIG correctly identified 65 (85.53%) true-MB cases. Conclusion: BIG was the most sensitive and effective of the three methods, especially in paucilesional patients. We suggest adding estimation of the bacterial index of granuloma to the diagnostic workup of paucilesional patients.
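    The reported sensitivities follow directly from the stated counts; a quick check:

```python
# Of the 76 true multibacillary (MB) patients, each method correctly
# identified the following numbers of cases.
true_mb = 76
correct = {"WHO operational classification": 56,
           "slit-skin smear (SSS)": 43,
           "bacterial index of granuloma (BIG)": 65}

for method, n in correct.items():
    sensitivity = 100 * n / true_mb
    print(f"{method}: {n}/{true_mb} = {sensitivity:.2f}%")
```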

  12. Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets

    International Nuclear Information System (INIS)

    Fathizadeh, M.

    1991-01-01

    The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and a tracking error of ±5×10⁻⁴ were needed. 3 refs., 5 figs

  13. High accuracy line positions of the ν₁ fundamental band of ¹⁴N₂¹⁶O

    KAUST Repository

    Alsaif, Bidoor

    2018-03-08

    The ν₁ fundamental band of N₂O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν₁ band of N₂O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹ or 144 kHz). The ν₁ transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν₃ and ν₃−ν₁ transitions.
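    The quoted rms is given in both wavenumber and frequency units; the conversion ν = cσ reproduces it:

```python
c_cm_per_s = 2.99792458e10   # speed of light in cm/s
rms_wavenumber = 4.8e-6      # cm^-1, rms of the line-center fit
rms_hz = rms_wavenumber * c_cm_per_s
print(f"{rms_hz / 1e3:.0f} kHz")   # matches the quoted ~144 kHz
```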

  14. Coronary CT angiography using prospective ECG triggering. High diagnostic accuracy with low radiation dose

    International Nuclear Information System (INIS)

    Arnoldi, E.; Ramos-Duran, L.; Abro, J.A.; Costello, P.; Zwerner, P.L.; Schoepf, U.J.; Nikolaou, K.; Reiser, M.F.

    2010-01-01

    The purpose of this study was to evaluate the diagnostic performance of coronary CT angiography (coronary CTA) using prospective ECG triggering (PT) for the detection of significant coronary artery stenosis compared to invasive coronary angiography (ICA). A total of 20 patients underwent coronary CTA with PT using a 128-slice CT scanner (Definition™ AS+, Siemens) and ICA. All coronary CTA studies were evaluated for significant coronary artery stenoses (≥50% luminal narrowing) by 2 observers in consensus using the AHA 15-segment model. Findings in CTA were compared to those in ICA. On per-segment and per-patient analysis, respectively, coronary CTA using PT had 88% and 100% sensitivity, 95% and 88% specificity, 80% and 92% positive predictive value, and 97% and 100% negative predictive value for diagnosing significant coronary artery stenosis. Mean effective radiation dose-equivalent of CTA was 2.6±1 mSv. Coronary CTA using PT enables non-invasive diagnosis of significant coronary artery stenosis with high diagnostic accuracy in comparison to ICA and is associated with comparably low radiation exposure. (orig.) [de

  15. High accuracy line positions of the ν₁ fundamental band of ¹⁴N₂¹⁶O

    Science.gov (United States)

    AlSaif, Bidoor; Lamperti, Marco; Gatti, Davide; Laporta, Paolo; Fermann, Martin; Farooq, Aamir; Lyulin, Oleg; Campargue, Alain; Marangoni, Marco

    2018-05-01

    The ν₁ fundamental band of N₂O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν₁ band of N₂O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹ or 144 kHz). The ν₁ transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν₃ and ν₃−ν₁ transitions.

  16. On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.

    Science.gov (United States)

    Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis

    2016-07-01

    To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient-specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient-specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2%, with a general TG-43 overestimation except for the spine. These differences are consistent, resulting in statistically significant differences in dose volume histogram and radiobiology indices. Absolute differences in these indices are, however, too small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear, however, to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets

    International Nuclear Information System (INIS)

    Fathizadeh, M.

    1991-01-01

    The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and a tracking error of ±5×10⁻⁴ were needed

  18. Quantitative accuracy of serotonergic neurotransmission imaging with high-resolution 123I SPECT

    International Nuclear Information System (INIS)

    Kuikka, J.T.

    2004-01-01

    Aim: Serotonin transporter (SERT) imaging can be used to study the role of regional abnormalities of neurotransmitter release in various mental disorders and to study the mechanism of action of therapeutic drugs or drugs of abuse. We examine the quantitative accuracy and reproducibility that can be achieved with high-resolution SPECT of serotonergic neurotransmission. Method: Binding potential (BP) of a 123I-labeled tracer specific for midbrain SERT was assessed in 20 healthy persons. The effects of scatter, attenuation, partial volume, misregistration and statistical noise were estimated using phantom and human studies. Results: Without any correction, BP was underestimated by 73%. The partial volume error was the major component of this underestimation, whereas the most critical error for reproducibility was misplacement of the region of interest (ROI). Conclusion: Proper ROI registration and the use of a multiple-head gamma camera with transmission-based scatter correction yield more reliable results. However, due to the small dimensions of the midbrain SERT structures and the poor spatial resolution of SPECT, the improvement without partial volume correction is not great enough to restore the estimate of BP to the true value. (orig.) [de

  19. High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking

    Science.gov (United States)

    Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Christopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel

    2018-01-01

    Accurate astrometry is crucial for determining the orbits of near-Earth asteroids (NEAs). Further, the future of deep space high-data-rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short-exposure images. These images can then be combined in post-processing to track both the asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
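    A toy one-dimensional illustration of the shift-and-add idea behind synthetic tracking (all numbers are illustrative): each short exposure is shifted back along the assumed motion before co-adding, so the source builds up coherently while the noise averages down as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_frames, velocity = 200, 25, 3        # pixels, frames, pixels per frame
start = 40                                    # source position in frame 0

frames = []
for t in range(n_frames):
    frame = rng.normal(0.0, 1.0, n_pix)       # background noise, sigma = 1
    frame[start + velocity * t] += 2.0        # faint moving point source
    frames.append(frame)

# Shift each short exposure back along the assumed motion, then average.
stack = np.mean([np.roll(f, -velocity * t) for t, f in enumerate(frames)], axis=0)

print("source recovered at pixel", int(np.argmax(stack)))
```

The same source would trail across pixels (and drown in noise) in a single long exposure; post-processing the shifts is what lets both the asteroid and the reference stars be tracked sharply.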

  20. Optimal design of a high accuracy photoelectric auto-collimator based on position sensitive detector

    Science.gov (United States)

    Yan, Pei-pei; Yang, Yong-qing; She, Wen-ji; Liu, Kai; Jiang, Kai; Duan, Jing; Shan, Qiusha

    2018-02-01

    A high-accuracy photoelectric autocollimator based on a PSD was designed. Its integral structure comprises a light source, an optical lens group, a Position Sensitive Detector (PSD) sensor, and the associated hardware and software processing system. A telephoto objective type was chosen during the design process, which effectively reduces the length, weight and volume of the optical system; simulation-based design and analysis of the autocollimator optical system were also carried out. The technical indicators presented in this paper are: measuring resolution better than 0.05″; field of view 2ω = 0.4° × 0.4°; measuring range ±5′; whole-range measurement error less than 0.2″; measuring distance 10 m. These are applicable to small-angle precision measurement environments. Aberration analysis indicates that the MTF is close to the diffraction limit and that the spot in the spot diagram is much smaller than the Airy disk. Through optimization of the opto-mechanical structure, the total length of the telephoto lens is only 450 mm, so the autocollimator is compact while image quality is guaranteed.
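    In a PSD-based autocollimator, a mirror tilt θ deflects the return beam by 2θ, so the spot on the detector moves d = 2fθ. The focal length below is an assumed value for illustration only (the abstract gives the lens length, not the effective focal length):

```python
import math

f_mm = 1000.0                       # assumed effective focal length (illustrative)
arcsec = math.pi / (180 * 3600)     # one arcsecond in radians

# Spot displacement on the PSD for the quoted resolution and error figures.
for theta_arcsec in (0.05, 0.2):
    d_um = 2 * f_mm * (theta_arcsec * arcsec) * 1000.0
    print(f'{theta_arcsec}" tilt -> spot shift of {d_um:.3f} um')
```

A 0.05″ resolution thus corresponds to resolving sub-micrometre spot motion, which is why PSD position noise and focal length dominate the achievable accuracy.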

  1. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    International Nuclear Information System (INIS)

    Bailey, David

    2005-01-01

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the

  2. Performance of the majority voting rule in solving the density classification problem in high dimensions

    International Nuclear Information System (INIS)

    Gomez Soto, Jose Manuel; Fuks, Henryk

    2011-01-01

    The density classification problem (DCP) is one of the most widely studied problems in the theory of cellular automata. After it was shown that the DCP cannot be solved perfectly, the research in this area has been focused on finding better rules that could solve the DCP approximately. In this paper, we argue that the majority voting rule in high dimensions can achieve high performance in solving the DCP, and that its performance increases with dimension. We support this conjecture with arguments based on the mean-field approximation and direct computer simulations. (paper)
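    A minimal sketch of the majority voting rule in one dimension (the paper's claim concerns higher dimensions, where performance improves; this 1D toy only illustrates the rule and one of its known failure modes):

```python
# Majority voting cellular automaton on a ring: each cell adopts the
# majority state of its (2*radius + 1)-cell neighbourhood.

def majority_step(cells, radius=1):
    n = len(cells)
    out = []
    for i in range(n):
        neighbourhood = [cells[(i + d) % n] for d in range(-radius, radius + 1)]
        out.append(1 if sum(neighbourhood) * 2 > len(neighbourhood) else 0)
    return out

def run(cells, steps=20, radius=1):
    for _ in range(steps):
        nxt = majority_step(cells, radius)
        if nxt == cells:                      # fixed point reached
            break
        cells = nxt
    return cells

# A lone dissenting cell is corrected in one step...
print(run([1] * 10 + [0] + [1] * 10))
# ...but a stable minority block survives, showing why the rule cannot
# solve the DCP perfectly in 1D:
print(run([0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]))
```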

  3. Performance of the majority voting rule in solving the density classification problem in high dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Soto, Jose Manuel [Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Calzada Solidaridad entronque Paseo a la Bufa, Zacatecas, Zac. (Mexico); Fuks, Henryk, E-mail: jmgomezgoo@gmail.com, E-mail: hfuks@brocku.ca [Department of Mathematics, Brock University, St. Catharines, ON (Canada)

    2011-11-04

    The density classification problem (DCP) is one of the most widely studied problems in the theory of cellular automata. After it was shown that the DCP cannot be solved perfectly, the research in this area has been focused on finding better rules that could solve the DCP approximately. In this paper, we argue that the majority voting rule in high dimensions can achieve high performance in solving the DCP, and that its performance increases with dimension. We support this conjecture with arguments based on the mean-field approximation and direct computer simulations. (paper)

  4. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack

    2013-09-18


  5. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.

    2013-01-01

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. 
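The blocked scheme described in the abstract, a pivoted panel factorization followed by a compute-bound trailing-submatrix update, can be sketched in NumPy. This is a minimal serial illustration of the standard algorithm, not the authors' recursive, QUARK-scheduled tile implementation; the block size is arbitrary.

```python
import numpy as np

def lu_partial_pivot(A, block=2):
    """Blocked right-looking LU with partial pivoting (illustrative sketch).

    Returns P, L, U with P @ A == L @ U. The panel is factored column by
    column (a stand-in for the paper's recursive panel kernel); the trailing
    submatrix update is the compute-bound, highly parallel part.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k0 in range(0, n, block):
        k1 = min(k0 + block, n)
        # --- panel factorization (memory-bound, atomic pivot selection) ---
        for j in range(k0, k1):
            p = j + np.argmax(np.abs(A[j:, j]))
            if p != j:
                A[[j, p], :] = A[[p, j], :]   # swap full rows, incl. L part
                piv[[j, p]] = piv[[p, j]]
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:k1] -= np.outer(A[j+1:, j], A[j, j+1:k1])
        # --- trailing submatrix update (compute-bound, highly parallel) ---
        if k1 < n:
            L11 = np.tril(A[k0:k1, k0:k1], -1) + np.eye(k1 - k0)
            A[k0:k1, k1:] = np.linalg.solve(L11, A[k0:k1, k1:])
            A[k1:, k1:] -= A[k1:, k0:k1] @ A[k0:k1, k1:]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[piv]
    return P, L, U
```

Because pivoting swaps entire rows (including multipliers already stored left of the panel), the result satisfies P A = L U exactly as in the standard partial-pivoting algorithm, which is the numerical quality the paper sets out to preserve.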

  6. Factors Determining the Inter-observer Variability and Diagnostic Accuracy of High-resolution Manometry for Esophageal Motility Disorders.

    Science.gov (United States)

    Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung

    2018-01-30

    Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and 8 trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at 3 institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode in both high-resolution pressure topography (HRPT) and conventional line tracing formats were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the κ value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8%, and the rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than distal esophageal spasm and jackhammer esophagus.
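The κ statistics above are Cohen's kappa, inter-observer agreement corrected for chance. A minimal two-rater implementation is sketched below (the study pooled many raters, which calls for a multi-rater variant such as Fleiss' kappa; by the common Landis-Koch scale, κ ≈ 0.40 is "fair" and κ ≈ 0.58 "moderate" agreement):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement and p_e the agreement expected by chance from
    each rater's marginal label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```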

  7. Maximizing the Diversity of Ensemble Random Forests for Tree Genera Classification Using High Density LiDAR Data

    Directory of Open Access Journals (Sweden)

    Connie Ko

    2016-08-01

    Full Text Available Recent research into improving the effectiveness of forest inventory management using airborne LiDAR data has focused on developing advanced theories in data analytics. Furthermore, supervised learning as a predictive model for classifying tree genera (and species, where possible) has been gaining popularity in order to minimize this labor-intensive task. However, bottlenecks remain that hinder the immediate adoption of supervised learning methods. With supervised classification, training samples are required for learning the parameters that govern the performance of a classifier, yet the selection of training data is often subjective and the quality of such samples is critically important. For LiDAR scanning in forest environments, the quantification of data quality is somewhat abstract, normally referring to some metric related to the completeness of individual tree crowns; however, this is not an issue that has received much attention in the literature. Intuitively, the choice of training samples of varying quality will affect classification accuracy. In this paper a Diversity Index (DI) is proposed that characterizes the diversity of data quality (Qi) among selected training samples required for constructing a classification model of tree genera. The training sample is diversified in terms of data quality as opposed to the number of samples per class. The diversified training sample allows the classifier to better learn the positive and negative instances and, therefore, has a higher classification accuracy in discriminating the “unknown” class samples from the “known” samples. Our algorithm is implemented within the Random Forests base classifiers with six derived geometric features from LiDAR data. The training sample contains three tree genera (pine, poplar, and maple) and the validation samples contain four labels (pine, poplar, maple, and “unknown”). Classification accuracy improved from 72.8% when training samples were
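The abstract does not give the formula for the Diversity Index, so the sketch below is only one plausible reading: normalised Shannon entropy over binned per-sample quality scores Qi. It is 1 when the quality levels of a training set are evenly spread and 0 when all samples share one quality level; the bin count and the [0, 1] quality scale are assumptions, not the paper's actual DI.

```python
import math

def diversity_index(quality_scores, bins=4):
    """Hypothetical Diversity Index: normalised Shannon entropy of the
    per-sample quality scores Qi (assumed scaled to [0, 1]) over equal-width
    bins. 1.0 = quality levels evenly spread; 0.0 = all samples alike.
    The paper's exact DI formulation may differ."""
    counts = [0] * bins
    for q in quality_scores:
        counts[min(int(q * bins), bins - 1)] += 1
    n = len(quality_scores)
    h = -sum(c / n * math.log(c / n) for c in counts if c)
    return h / math.log(bins)
```

A training set drawn only from complete, well-scanned crowns would score near 0; mixing in partially occluded crowns raises the index, which is the diversification the paper argues improves discrimination of the "unknown" class.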

  8. DIRECT GEOREFERENCING: A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2012-07-01

    Full Text Available Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method requires neither ground control points (GCP) nor aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the older method, it has three main advantages: faster data processing, a simpler workflow and a less expensive project, at the same accuracy. Direct georeferencing uses two devices, GPS and IMU: the GPS records the camera coordinates (X, Y, Z), and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the Exterior Orientation (EO) parameters, which are required for the next steps in photogrammetric projects, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, while the GPS/IMU was the IGI AeroControl. Nineteen independent check points (ICP) were used to determine the accuracy. Horizontal accuracy is 0.356 meters and vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map scale project.
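The reported horizontal and vertical accuracies are root-mean-square errors over the independent check points. That computation can be sketched as below; the residual values in the test are invented for illustration, not the project's data.

```python
import math

def check_point_accuracy(residuals):
    """RMSE accuracy from independent check points (ICP).

    residuals: list of (dx, dy, dz) differences in metres between
    photogrammetrically derived and surveyed ICP coordinates."""
    n = len(residuals)
    horizontal = math.sqrt(sum(dx * dx + dy * dy for dx, dy, _ in residuals) / n)
    vertical = math.sqrt(sum(dz * dz for _, _, dz in residuals) / n)
    return horizontal, vertical
```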

  9. Sparse Bayesian classification and feature selection for biological expression data with high correlations.

    Directory of Open Access Journals (Sweden)

    Xian Yang

    Full Text Available Classification models built on biological expression data are increasingly used to predict distinct disease subtypes. Selected features that separate sample groups can be candidate biomarkers, helping us to discover biological functions/pathways. However, three challenges are associated with building a robust classification and feature selection model: (1) the number of significant biomarkers is much smaller than the number of measured features, over which the search will be exhaustive; (2) current biological expression data are large in both sample size and feature size, which worsens the scalability of any search algorithm; and (3) expression profiles of certain features are typically highly correlated, which may prevent distinguishing the predominant features. Unfortunately, most existing algorithms address only some of these challenges, not all of them together. In this paper, we propose a unified framework to address the above challenges. The classification and feature selection problem is first formulated as a nonconvex optimisation problem. Then the problem is relaxed and solved iteratively by a sequence of convex optimisation procedures, which can be computed in a distributed fashion and therefore allow efficient implementation on advanced infrastructures. To illustrate the competence of our method over others, we first analyse a randomly generated simulation dataset under various conditions. We then analyse a real gene expression dataset on embryonal tumour. Further downstream analyses, such as functional annotation and pathway analysis, are performed on the selected features, which elucidate several biological findings.
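As a concrete miniature of the relax-then-iterate idea, the sketch below solves one convex surrogate, L1-regularised logistic regression, by proximal gradient descent (ISTA). The paper's actual framework iterates over a sequence of such convex problems and distributes the computation; the function names and hyperparameters here are illustrative only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_logistic(X, y, lam=0.05, step=0.5, iters=2000):
    """L1-regularised logistic regression fitted with proximal gradient
    descent (ISTA). The sparse weights double as feature selection:
    zeroed coordinates are discarded features.
    X: (n, d) feature matrix; y: (n,) labels in {0, 1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        grad = X.T @ (p - y) / n             # gradient of the logistic loss
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

Each iteration is a matrix-vector product plus an elementwise shrinkage, so the update distributes naturally across feature blocks, which is the scalability property the paper exploits.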

  10. Towards Building Reliable, High-Accuracy Solar Irradiance Database For Arid Climates

    Science.gov (United States)

    Munawwar, S.; Ghedira, H.

    2012-12-01

    Middle East's growing interest in renewable energy has led to increased activity in solar technology development with the recent commissioning of several utility-scale solar power projects and many other commercial installations across the Arabian Peninsula. The region, lying in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2, is also one of the most promising candidates for solar energy deployment. However, it is not the availability of resource, but its characterization and reasonably accurate assessment that determines the application potential. Solar irradiance, magnitude and variability inclusive, is the key input in assessing the economic feasibility of a solar system. The accuracy of such data is of critical importance for realistic on-site performance estimates. This contribution aims to identify the key stages in developing a robust solar database for desert climate by focusing on the challenges that an arid environment presents to parameterization of solar irradiance attenuating factors. Adjustments are proposed based on the currently available resource assessment tools to produce high quality data for assessing bankability. Establishing and maintaining ground solar irradiance measurements is an expensive affair and fairly limited in time (recently operational) and space (fewer sites) in the Gulf region. Developers within solar technology industry, therefore, rely on solar radiation models and satellite-derived data for prompt resource assessment needs. It is imperative that such estimation tools are as accurate as possible. While purely empirical models have been widely researched and validated in the Arabian Peninsula's solar modeling history, they are known to be intrinsically site-specific. A primal step to modeling is an in-depth understanding of the region's climate, identifying the key players attenuating radiation and their appropriate characterization to determine solar irradiance. 

  11. The effects of high-frequency oscillations in hippocampal electrical activities on the classification of epileptiform events using artificial neural networks

    Science.gov (United States)

    Chiu, Alan W. L.; Jahromi, Shokrollah S.; Khosravani, Houman; Carlen, Peter L.; Bardakjian, Berj L.

    2006-03-01

    The existence of hippocampal high-frequency electrical activities (greater than 100 Hz) during the progression of seizure episodes in both human and animal experimental models of epilepsy has been well documented (Bragin A, Engel J, Wilson C L, Fried I and Buzsáki G 1999 Hippocampus 9 137-42; Khosravani H, Pinnegar C R, Mitchell J R, Bardakjian B L, Federico P and Carlen P L 2005 Epilepsia 46 1-10). However, this information has not been studied between successive seizure episodes or utilized in the application of seizure classification. In this study, we examine the dynamical changes of an in vitro low Mg2+ rat hippocampal slice model of epilepsy at different frequency bands using wavelet transforms and artificial neural networks. By dividing the time-frequency spectrum of each seizure-like event (SLE) into frequency bins, we can analyze their burst-to-burst variations within individual SLEs as well as between successive SLE episodes. Wavelet energy and wavelet entropy are estimated for intracellular and extracellular electrical recordings using sufficiently high sampling rates (10 kHz). We demonstrate that the activities of high-frequency oscillations in the 100-400 Hz range increase as the slice approaches SLE onsets and in later episodes of SLEs. Utilizing the time-dependent relationship between different frequency bands, we can achieve frequency-dependent state classification. We demonstrate that activities in the frequency range 100-400 Hz are critical for the accurate classification of the different states of electrographic seizure-like episodes (containing interictal, preictal and ictal states) in brain slices undergoing recurrent spontaneous SLEs. While preictal activities can be classified with an average accuracy of 77.4 ± 6.7% utilizing the frequency spectrum in the range 0-400 Hz, we can also achieve a similar level of accuracy by using a nonlinear relationship between 100-400 Hz and <4 Hz frequency bands only.
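Splitting a recording into frequency bins and tracking per-band energy and entropy can be illustrated with an FFT periodogram. The study itself uses wavelet transforms, so treat the quantities below as simplified stand-ins for its wavelet energy and wavelet entropy; the band edges match those discussed above.

```python
import numpy as np

def band_energy_entropy(signal, fs, bands):
    """Per-band spectral energy plus a normalised entropy over those bands.

    An FFT-periodogram stand-in for wavelet energy / wavelet entropy.
    fs: sampling rate in Hz; bands: list of (lo_hz, hi_hz) tuples."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    energies = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in bands])
    p = energies / energies.sum()      # energy distribution across bands
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p)) / np.log(len(bands))
    return energies, entropy
```

Rising 100-400 Hz band energy relative to the low-frequency bands, tracked over successive bursts, is the kind of feature the classifier in the study feeds on.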

  12. The research of digital circuit system for high accuracy CCD of portable Raman spectrometer

    Science.gov (United States)

    Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin

    2013-08-01

    Raman spectroscopy is widely used because it can identify various types of molecular structures and materials. The portable Raman spectrometer has become a hot direction of spectrometer development nowadays for its convenience in handheld operation and real-time detection, which is superior to the traditional Raman spectrometer with its heavy weight and bulky size. But there is still a gap in measurement sensitivity between portable and traditional devices. However, a portable Raman spectrometer with Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) technology can enhance the Raman signal significantly, by several orders of magnitude, giving consideration to both measurement sensitivity and mobility. This paper proposes a design and implementation of the driver and digital circuit for a high accuracy CCD sensor, the core part of a portable spectrometer. The main target of the whole design is to reduce the dark current generation rate and increase signal sensitivity during long integration times and in weak signal environments. In this case, we use a back-thinned CCD image sensor from Hamamatsu Corporation with high sensitivity, low noise and large dynamic range. In order to maximize this CCD sensor's performance and simultaneously minimize the whole size of the device to achieve the project indicators, we delicately designed a peripheral circuit for the CCD sensor. The design is mainly composed of multi-voltage circuits, sequential generation circuits, driving circuits and A/D conversion parts. As the most important part, the multi-voltage power supply circuit, with 12 independent voltages, is designed around a reference power supply IC, with each output set to its specified voltage by an amplifier configured as a low-pass filter, which allows the user to obtain highly stable and accurate voltages with low noise. What's more, to make our design easy to debug, a CPLD is selected to generate the sequential signals. The A/D converter chip consists of a correlated

  13. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only a few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly

  14. Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.

    Directory of Open Access Journals (Sweden)

    Christopher Y Park

    Full Text Available A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate as determined by genomic data and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform an in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics

  15. High accuracy subwavelength distance measurements: A variable-angle standing-wave total-internal-reflection optical microscope

    International Nuclear Information System (INIS)

    Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.

    2009-01-01

    We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.
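The angle dependence exploited here follows from the standard evanescent-wave relation: the 1/e intensity penetration depth is d = λ / (4π·√(n1²·sin²θ − n2²)), which shrinks as the incident angle moves past the critical angle. A sketch with illustrative glass/water values, not the paper's experimental parameters:

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e intensity penetration depth of an evanescent wave under total
    internal reflection: d = wavelength / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2)).
    n1: incidence-side index; n2: transmission-side index."""
    s = n1 * math.sin(math.radians(theta_deg))
    if s <= n2:
        raise ValueError("below the critical angle: no total internal reflection")
    return wavelength_nm / (4 * math.pi * math.sqrt(s * s - n2 * n2))
```

Sweeping the incident angle therefore tunes both the standing-wave node spacing in the plane and, as the formula shows, the vertical penetration depth.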

  16. CLASSIFICATION OF THE MGR DEFENSE HIGH-LEVEL WASTE DISPOSAL CONTAINER SYSTEM

    International Nuclear Information System (INIS)

    J.A. Ziegler

    1999-01-01

    The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) defense high-level waste disposal container system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (QARD) (DOE 1998).

  17. High-accuracy dosimetry study for intensity-modulated radiation therapy (IMRT) commissioning

    International Nuclear Information System (INIS)

    Jeong, Hae Sun

    2010-02-01

    Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, disequilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, the LEGO-type multi-purpose dosimetry phantom was developed and used for the studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different phantom designs as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 x 1 cm² in muscle and bone, and less than 3 x 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 x 1 cm²), 10% to 1% (0.7 x 0.7 cm²), and 42% to 7% (0.5 x 0

  18. High-accuracy dosimetry study for intensity-modulated radiation therapy (IMRT) commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hae Sun

    2010-02-15

    Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, disequilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, the LEGO-type multi-purpose dosimetry phantom was developed and used for the studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different phantom designs as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 x 1 cm² in muscle and bone, and less than 3 x 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 x 1 cm²), 10% to 1% (0.7 x 0.7 cm²), and 42

  19. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level (radioactive) waste (HLW) as (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph B. The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). The basic waste classification scheme that results from the quantitative definitions of ''highly radioactive'' and ''requires permanent isolation'' is depicted. The concentrations of radionuclides that correspond to these two boundaries, and that may be used to classify radioactive wastes, are given.
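The structure of the scheme, two independent boundaries yielding three classes, can be sketched as a decision rule. The actual boundary concentrations are radionuclide-specific and are not reproduced here; the booleans below stand in for comparisons against those limits.

```python
def classify_waste(highly_radioactive, requires_permanent_isolation):
    """Sketch of the two-boundary classification: each flag records whether
    a waste's radionuclide concentrations exceed the corresponding boundary
    ('highly radioactive' / 'requires permanent isolation')."""
    if highly_radioactive and requires_permanent_isolation:
        return "HLW"   # exceeds both boundaries
    if requires_permanent_isolation:
        return "TRU"   # long-lived but not highly radioactive
    return "LLW"       # requires neither boundary's level of isolation
```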

  20. Automatic Classification of High Resolution Satellite Imagery - a Case Study for Urban Areas in the Kingdom of Saudi Arabia

    Science.gov (United States)

    Maas, A.; Alrajhi, M.; Alobeid, A.; Heipke, C.

    2017-05-01

    Updating topographic geospatial databases is often performed based on current remotely sensed images. To automatically extract the object information (labels) from the images, supervised classifiers are being employed. Decisions to be taken in this process concern the definition of the classes which should be recognised, the features to describe each class and the training data necessary in the learning part of classification. With a view to large scale topographic databases for fast developing urban areas in the Kingdom of Saudi Arabia we conducted a case study, which investigated the following two questions: (a) which set of features is best suited for the classification? (b) what is the added value of height information, e.g. derived from stereo imagery? Using stereoscopic GeoEye and Ikonos satellite data we investigate these two questions based on our research on label tolerant classification using logistic regression and partly incorrect training data. We show that between five and ten features can be recommended to obtain a stable solution, that height information consistently yields an improved overall classification accuracy of about 5%, and that label noise can be successfully modelled and thus only marginally influences the classification results.

  1. River reach classification for the Greater Mekong Region at high spatial resolution

    Science.gov (United States)

    Ouellet Dallaire, C.; Lehner, B.

    2014-12-01

    River classifications have been used in river health and ecological assessments as coarse proxies to represent aquatic biodiversity when comprehensive biological and/or species data is unavailable. Currently there are no river classifications or biological data available in a consistent format for the extent of the Greater Mekong Region (GMR; including the Irrawaddy, the Salween, the Chao Praya, the Mekong and the Red River basins). The current project proposes a new river habitat classification for the region, facilitated by the HydroSHEDS (HYDROlogical SHuttle Elevation Derivatives at multiple Scales) database at 500m pixel resolution. The classification project is based on the Global River Classification framework relying on the creation of multiple sub-classifications based on different disciplines. The resulting classes from the sub-classifications are later combined into final classes to create a holistic river reach classification. For the GMR, a final habitat classification was created based on three sub-classifications: a hydrological sub-classification based only on discharge indices (river size and flow variability); a physio-climatic sub-classification based on large scale indices of climate and elevation (biomes, ecoregions and elevation); and a geomorphological sub-classification based on local morphology (presence of floodplains, reach gradient and sand transport). Key variables and thresholds were identified in collaboration with local experts to ensure that regional knowledge was included. The final classification is composed of 54 unique final classes based on three sub-classifications with fewer than 15 classes each. The resulting classifications are driven by abiotic variables and do not include biological data, but they represent a state-of-the-art product based on the best available data (mostly global data). The most common river habitat type is the "dry broadleaf, low gradient, very small river".
These classifications could be applied in a wide range of
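The combination of sub-classifications into holistic final classes can be sketched as below; the thresholds and label vocabulary are invented for illustration, not taken from the study.

```python
def classify_reach(discharge_m3s, flow_variability, biome, elevation_m,
                   gradient, has_floodplain):
    """Toy version of the three sub-classifications (hydrological,
    physio-climatic, geomorphological) combined into one final river reach
    class. All thresholds and labels are illustrative assumptions."""
    # hydrological sub-classification: river size and flow variability
    size = ("very small" if discharge_m3s < 10
            else "small" if discharge_m3s < 100 else "large")
    flow = "variable" if flow_variability > 1.0 else "stable"
    # physio-climatic sub-classification: biome and elevation
    relief = "lowland" if elevation_m < 200 else "upland"
    # geomorphological sub-classification: gradient and floodplain presence
    slope = "low gradient" if gradient < 0.001 else "high gradient"
    form = "floodplain" if has_floodplain else "confined"
    return f"{biome}, {relief}, {slope}, {form}, {flow}, {size} river"
```

Combining, say, 6 hydrological, 9 physio-climatic and a handful of geomorphological classes in this way is how a few small sub-classifications multiply out into the study's 54 unique final classes.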

  2. Analysis of the plasmodium falciparum proteome by high-accuracy mass spectrometry

    DEFF Research Database (Denmark)

    Lasonder, Edwin; Ishihama, Yasushi; Andersen, Jens S

    2002-01-01

    -accuracy (average deviation less than 0.02 Da at 1,000 Da) mass spectrometric proteome analysis of selected stages of the human malaria parasite Plasmodium falciparum. The analysis revealed 1,289 proteins of which 714 proteins were identified in asexual blood stages, 931 in gametocytes and 645 in gametes. The last...

  3. High-accuracy interferometric measurements of flatness and parallelism of a step gauge

    CSIR Research Space (South Africa)

    Kruger, OA

    2001-01-01

    Full Text Available The most commonly used method in the calibration of step gauges is the coordinate measuring machine (CMM), equipped with a laser interferometer for the highest accuracy. This paper describes a modification to a length-bar measuring machine...

  4. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    Science.gov (United States)

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  5. UAS-SfM for Coastal Research: Geomorphic Feature Extraction and Land Cover Classification from High-Resolution Elevation and Optical Imagery

    Directory of Open Access Journals (Sweden)

    Emily J. Sturdivant

    2017-10-01

    Full Text Available The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  6. [Accuracy of placenta accreta prenatal diagnosis by ultrasound and MRI in a high-risk population].

    Science.gov (United States)

    Daney de Marcillac, F; Molière, S; Pinton, A; Weingertner, A-S; Fritz, G; Viville, B; Roedlich, M-N; Gaudineau, A; Sananes, N; Favre, R; Nisand, I; Langer, B

    2016-02-01

    Main objective was to compare accuracy of ultrasonography and MRI for antenatal diagnosis of placenta accreta. Secondary objectives were to specify the most common sonographic and MRI signs associated with diagnosis of placenta accreta. This retrospective study used data collected from all potential cases of placenta accreta (patients with an anterior placenta praevia with history of scarred uterus) admitted from 01/2010 to 12/2014 in a level III maternity unit in Strasbourg, France. High-risk patients underwent antenatal ultrasonography and MRI. Sonographic signs registered were: abnormal placental lacunae, increased vascularity on color Doppler, absence of the retroplacental clear space, interrupted bladder line. MRI signs registered were: abnormal uterine bulging, intraplacental bands of low signal intensity on T2-weighted images, increased vascularity, heterogeneous signal of the placenta on T2-weighted images, interrupted bladder line, protrusion of the placenta into the cervix. Diagnosis of placenta accreta was confirmed histologically after hysterectomy or clinically in case of successful conservative treatment. Twenty-two potential cases of placenta accreta were referred to our center and underwent both ultrasonography and MRI. All cases of placenta accreta had a placenta praevia associated with history of scarred uterus. Sensitivity and specificity for ultrasonography were, respectively, 0.92 and 0.67, for MRI 0.84 and 0.78, without significant difference (p>0.05). The most relevant signs associated with diagnosis of placenta accreta on ultrasonography were increased vascularity on color Doppler (sensitivity 0.85/specificity 0.78), abnormal placental lacunae (sensitivity 0.92/specificity 0.55) and loss of the retroplacental clear space (sensitivity 0.76/specificity 1.0). The most relevant signs on MRI were: abnormal uterine bulging (sensitivity 0.92/specificity 0.89), dark intraplacental bands on T2-weighted images (sensitivity 0.83/specificity 0.80) or

  7. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    International Nuclear Information System (INIS)

    Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo

    2014-01-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
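The sensitivity, specificity, PPV and NPV figures reported above all derive from a 2x2 contingency table. A minimal sketch; the counts below are illustrative reconstructions chosen to reproduce the reported CRM figures for the 73-patient cohort (the abstract reports rates, not raw counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts: 19 CRM-involved and 54 uninvolved margins (73 total),
# chosen to reproduce the reported 89.5% / 96.3% / 89.5% / 96.3%
m = diagnostic_metrics(tp=17, fp=2, fn=2, tn=52)
print({k: round(v, 3) for k, v in m.items()})
# {'sensitivity': 0.895, 'specificity': 0.963, 'ppv': 0.895, 'npv': 0.963}
```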

  8. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.

  9. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods are not directly applicable to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
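The ensemble mechanics described above (one classifier per feature subset, combined by majority vote) can be sketched in a few lines. This swaps the paper's SVM base learners and redundancy-driven partitioning for a nearest-centroid classifier and a fixed partition, so it illustrates only the voting structure, not the actual method:

```python
from collections import Counter

def centroid_classifier(X, y, cols):
    """Nearest-centroid classifier trained on a subset of feature columns."""
    cents = {}
    for lab in set(y):
        rows = [x for x, t in zip(X, y) if t == lab]
        cents[lab] = [sum(r[c] for r in rows) / len(rows) for c in cols]
    def predict(x):
        # pick the class whose centroid (over this feature subset) is closest
        return min(cents, key=lambda lab: sum((x[c] - v) ** 2
                                              for c, v in zip(cols, cents[lab])))
    return predict

def ensemble_predict(classifiers, x):
    """Majority vote over the per-subset classifiers."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Toy data: 4 features partitioned into two disjoint subsets
X = [[0, 0, 1, 1], [0, 1, 1, 0], [5, 5, 6, 6], [5, 6, 6, 5]]
y = [0, 0, 1, 1]
subsets = [(0, 1), (2, 3)]
clfs = [centroid_classifier(X, y, s) for s in subsets]
print(ensemble_predict(clfs, [5, 5, 5, 5]))  # 1
```

In the paper, the partition itself is derived from feature redundancy rather than fixed in advance.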

  10. SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation

    Directory of Open Access Journals (Sweden)

    Euiho Kim

    2017-09-01

    Full Text Available In the Federal Aviation Administration's (FAA) performance-based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. However, the main drawback of the DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance-measuring accuracy of over 100 m. The paper introduces a method to improve DME distance-measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed using Genetic Algorithms and is less susceptible to multipath effects, so that the ranging error is reduced by 36.0–77.3% compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.
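DME receivers timestamp a pulse at the half-amplitude point of its rising edge, which is why the pulse shape drives ranging accuracy. A sketch of that thresholding with an illustrative Gaussian pulse (the SFOL pulse itself is not reproduced here; the peak time t0 and width sigma are arbitrary example values):

```python
import math

def half_amplitude_time(ts, vs):
    """Time where the rising edge first crosses 50% of the peak amplitude,
    using linear interpolation between samples."""
    half = max(vs) / 2.0
    for i in range(1, len(vs)):
        if vs[i - 1] < half <= vs[i]:
            frac = (half - vs[i - 1]) / (vs[i] - vs[i - 1])
            return ts[i - 1] + frac * (ts[i] - ts[i - 1])
    raise ValueError("no half-amplitude crossing found")

# Illustrative Gaussian pulse: peak at t0 = 12 us, sigma = 2.5 us
t0, sigma = 12.0, 2.5
ts = [k * 0.05 for k in range(600)]  # 0..30 us in 50 ns steps
vs = [math.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) for t in ts]

toa = half_amplitude_time(ts, vs)
# analytic half-amplitude point: t0 - sigma*sqrt(2*ln 2) ~= 9.06 us
print(round(toa, 2))  # 9.06
```

Multipath replicas distort the rising edge and shift this crossing, which is the error the SFOL pulse shape is designed to suppress.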

  11. Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation

    Directory of Open Access Journals (Sweden)

    Xishan Wen

    2017-04-01

    Full Text Available Inrush current simulation plays an important role in many tasks of the power system, such as power transformer protection. However, the accuracy of the inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed and results show that the entire inrush current curve can be well featured by the crest value of the first two cycles. With comprehensive consideration of both of the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm can obtain more reasonable J–A parameters, which improve the accuracy of simulation. Experimental results have verified the efficiency of the proposed algorithm.

  12. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    Science.gov (United States)

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for 6D couches. The method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  13. Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications

    OpenAIRE

    Van-Tang PHAM; Dinh-Chinh NGUYEN; Quang-Huy TRAN; Duc-Trinh CHU; Duc-Tan TRAN

    2015-01-01

    Magnetic compass sensors used for angle measurement have a wide range of applications, such as positioning, robotics, and landslide monitoring. However, one of the main phenomena affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor so that it adapts to variations in temperature. The se...

  14. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.

  15. New perspectives for high accuracy SLR with second generation geodesic satellites

    Science.gov (United States)

    Lund, Glenn

    1993-01-01

    This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performances by means of alternative satellite concepts and an optimized 2-color system tradeoff. Long distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short pulse (less than 50 ps) dual wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors) the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several tens of centimeters in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse thus results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot-to-shot basis, be accurately and unambiguously correlated to the satellite center of mass.
This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return

  16. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which restricts their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
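Multiplication by digital convolution, mentioned above, rests on the fact that the product of two numbers is the convolution of their digit sequences followed by carry propagation, which is what lets an optical processor trade dynamic range for accuracy. A minimal numeric sketch:

```python
def digits(n, base=10):
    """Least-significant-first digit list of a non-negative integer."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out or [0]

def convolve_multiply(a, b, base=10):
    """Multiply by convolving digit sequences, then resolving carries."""
    da, db = digits(a, base), digits(b, base)
    conv = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):        # raw convolution: sums of partial products
        for j, y in enumerate(db):
            conv[i + j] += x * y
    # carry propagation turns the mixed-radix sums into an ordinary number
    result, carry = 0, 0
    for k, c in enumerate(conv):
        carry += c
        result += (carry % base) * base ** k
        carry //= base
    return result + carry * base ** len(conv)

print(convolve_multiply(739, 48))  # 35472
```

The optical hardware performs the convolution step; only the final carry resolution needs full digital precision.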

  17. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, conventional frequency-to-power mapping technique is developed by performing a...... 10−3 relative error. This high accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....

  18. THE EFFECT OF MODERATE AND HIGH-INTENSITY FATIGUE ON GROUNDSTROKE ACCURACY IN EXPERT AND NON-EXPERT TENNIS PLAYERS

    Directory of Open Access Journals (Sweden)

    Mark Lyons

    2013-06-01

    Full Text Available Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender, interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on

  19. Can high-energy proton events in solar wind be predicted via classification of precursory structures?

    Energy Technology Data Exchange (ETDEWEB)

    Hallerberg, Sarah [Chemnitz University of Technology (Germany); Ruzmaikin, Alexander; Feynman, Joan [Jet Propulsion Laboratory, California Institute of Technology (United States)

    2011-07-01

    Shock waves in the solar wind associated with solar coronal mass ejections produce fluxes of high-energy protons and ions with energies larger than 10 MeV. These fluxes present a danger to humans and electronic equipment in space, and also endanger passengers of over-pole air flights. The approaches that have been exploited for the prediction of high-energy particle events so far consist of training artificial neural networks on catalogues of events. Our approach towards this task is based on the identification of precursory structures in the fluxes of particles. In contrast to artificial neural networks, which function as a "black box" transforming data into predictions, this classification approach can additionally provide information on relevant precursory events and thus might help to improve the understanding of underlying mechanisms of particle acceleration.

  20. High accuracy prediction of beta-turns and their types using propensities and multiple alignments.

    Science.gov (United States)

    Fuchs, Patrick F J; Alix, Alain J P

    2005-06-01

    We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way to use them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. The evaluation of a score is then done by weighting these propensities by the use of position-specific score matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the flanking residues around the tetrapeptide, improved the accuracy greatly. The latter, evaluated on a database of 426 reference proteins (previously used in other studies) by a sevenfold cross-validation, gave very good results with a Matthews Correlation Coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural-network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV) in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly.
All of this was implemented into a Web server called COUDES (French acronym
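The core scoring step described above, modifying a residue propensity by the residues present in the alignment column, can be sketched as a column-averaged propensity over a tetrapeptide window. The propensity values below are hypothetical placeholders, not the published beta-turn propensities, and the PSSM weighting and secondary-structure terms are omitted:

```python
# Hypothetical propensity placeholders (not the published beta-turn values)
PROPENSITY = {"G": 1.6, "P": 1.5, "N": 1.4, "D": 1.3, "S": 1.2,
              "A": 0.9, "K": 1.0, "L": 0.6, "V": 0.5, "I": 0.5}

def window_score(alignment, start):
    """Score a tetrapeptide window: each of the 4 positions contributes the
    propensity averaged over the whole alignment column at that position."""
    score = 0.0
    for pos in range(start, start + 4):
        column = [seq[pos] for seq in alignment]
        score += sum(PROPENSITY.get(r, 1.0) for r in column) / len(column)
    return score / 4

alignment = ["GPNDAL", "GANDAV", "SPNDAK"]  # toy multiple alignment
print(round(window_score(alignment, 0), 3))  # 1.367
```

A window whose averaged score exceeds a chosen threshold would then be flagged as a candidate beta-turn.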

  1. A High Throughput Ambient Mass Spectrometric Approach to Species Identification and Classification from Chemical Fingerprint Signatures

    Science.gov (United States)

    Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.

    2015-01-01

    A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes are instantaneously detected. The result is identification and species level classification based on the entire DART-MS spectrum. Here, we illustrate how the method can be used to: (1) distinguish between endangered woods regulated by the Convention for the International Trade of Endangered Flora and Fauna (CITES) treaty; (2) assess the origin and by extension the properties of biodiesel feedstocks; (3) determine insect species from analysis of puparial casings; (4) distinguish between psychoactive plant products; and (5) differentiate between Eucalyptus species. An advantage of the hierarchical clustering approach to processing of the DART-MS derived fingerprint is that it shows both similarities and differences between species based on their chemotypes. Furthermore, full knowledge of the identities of the constituents contained within the small molecule profile of analyzed samples is not required. PMID:26156000

  2. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which show that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
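The degree of linear polarization (DoLP) maps used for the metal/dielectric classification are computed per pixel from the linear Stokes components. A sketch using the standard four-analyzer-angle estimate (the paper's LCTF capture pipeline is not reproduced; the intensity values are illustrative):

```python
import math

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensities measured behind a
    linear polarizer at 0, 45, 90 and 135 degrees (Stokes estimates)."""
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # diagonal components
    return math.sqrt(s1 ** 2 + s2 ** 2) / s0

# Strongly polarized highlight vs. nearly unpolarized diffuse region
print(round(dolp(0.9, 0.5, 0.1, 0.5), 3))    # 0.8
print(round(dolp(0.52, 0.5, 0.48, 0.5), 3))  # 0.04
```

Dielectric highlights tend to show high DoLP while metallic ones depolarize less distinctly, which is the contrast the classifier exploits.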

  3. Satellite Remote Sensing of Cropland Characteristics in 30m Resolution: The First North American Continental-Scale Classification on High Performance Computing Platforms

    Science.gov (United States)

    Massey, Richard

    Cropland characteristics and accurate maps of their spatial distribution are required to develop strategies for global food security by continental-scale assessments and agricultural land use policies. North America is the major producer and exporter of coarse grains, wheat, and other crops. While cropland characteristics such as crop types are available at country scale in North America, continental-scale cropland products at sufficiently fine resolution, such as 30m, are lacking. Additionally, automated, open, and rapid methods to map cropland characteristics over large areas without the need for ground samples are needed on efficient high-performance computing platforms for timely and long-term cropland monitoring. In this study, I developed novel, automated, and open methods to map cropland extent, crop intensity, and crop types in the North American continent using large remote sensing datasets on high-performance computing platforms. First, a novel method was developed in this study to fuse pixel-based classification of continental-scale Landsat data using the Random Forest algorithm available on the Google Earth Engine cloud computing platform with an object-based classification approach, recursive hierarchical segmentation (RHSeg), to map cropland extent at continental scale. Using the fusion method, a continental-scale cropland extent map for North America at 30m spatial resolution for the nominal year 2010 was produced. In this map, the total cropland area for North America was estimated at 275.2 million hectares (Mha). This map was assessed for accuracy using randomly distributed samples derived from the United States Department of Agriculture (USDA) cropland data layer (CDL), the Agriculture and Agri-Food Canada (AAFC) annual crop inventory (ACI), Servicio de Informacion Agroalimentaria y Pesquera (SIAP) agricultural boundaries for Mexico, and photo-interpretation of high-resolution imagery. The overall accuracies of the map are 93.4% with a
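Accuracy assessment of a map like this reduces to a confusion matrix between reference samples and mapped classes. A sketch computing overall, producer's and user's accuracy; the counts are invented, chosen only to reproduce the 93.4% overall accuracy quoted above:

```python
def accuracy_report(cm):
    """Overall, producer's (per reference class) and user's (per mapped class)
    accuracy from a square confusion matrix cm[reference][mapped]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    producers = [cm[i][i] / sum(cm[i]) for i in range(n)]
    users = [cm[i][i] / sum(cm[r][i] for r in range(n)) for i in range(n)]
    return correct / total, producers, users

# Invented 2-class sample table (cropland vs non-cropland)
cm = [[450, 30],   # reference cropland
      [36, 484]]   # reference non-cropland
overall, prod, user = accuracy_report(cm)
print(round(overall, 3))  # 0.934
```

Producer's accuracy measures omission error (reference pixels missed), user's accuracy measures commission error (mapped pixels that are wrong); both matter alongside the overall figure.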

  4. Accuracy of applicator tip reconstruction in MRI-guided interstitial 192Ir-high-dose-rate brachytherapy of liver tumors

    International Nuclear Information System (INIS)

    Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens

    2015-01-01

    Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192-Ir high-dose-rate (HDR) BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192-Ir-HDR-BT treatment planning datasets of 45 patients employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviation between the MRI and CT acquisitions that was not time-dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of reconstruction errors. In the follow-up data, the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo.

  5. Horizontal Positional Accuracy of Google Earth’s High-Resolution Imagery Archive

    Directory of Open Access Journals (Sweden)

    David Potere

    2008-12-01

    Full Text Available Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth's landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world's peri-urban areas.
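    The RMSE figures reported above follow from the standard definition: the square root of the mean squared horizontal offset between matched control points. A minimal sketch:

```python
import math

def horizontal_rmse(offsets_m):
    """Root-mean-squared error of horizontal offsets (in meters)
    between matched control-point pairs."""
    return math.sqrt(sum(d * d for d in offsets_m) / len(offsets_m))
```

Because the offsets are squared before averaging, a few large errors (such as the 171.6 m maximum quoted above) dominate the statistic.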

  6. High accuracy mapping with cartographic assessment for a fixed-wing remotely piloted aircraft system

    Science.gov (United States)

    Alves Júnior, Leomar Rufino; Ferreira, Manuel Eduardo; Côrtes, João Batista Ramos; de Castro Jorge, Lúcio André

    2018-01-01

    The lack of updated maps at large representation scales has encouraged the use of remotely piloted aircraft systems (RPAS) to generate maps for a wide range of professionals. However, some questions arise: do the orthomosaics generated by these systems have the cartographic precision required to use them? Which problems can be identified when stitching orthophotos to generate orthomosaics? To answer these questions, an aerophotogrammetric survey was conducted in an environmental conservation unit in the city of Goiânia. The flight plan was set up using the E-motion software provided by Sensefly, the Swiss manufacturer of the RPAS Swinglet CAM used in this work. The camera installed in the RPAS was the Canon IXUS 220 HS, with a 12.1-megapixel complementary metal oxide semiconductor (CMOS) sensor of 1/2.3 in. (4000 × 3000 pixels) and horizontal and vertical pixel sizes of 1.54 μm. Using the orthophotos, four orthomosaics were generated in the Pix4D mapper software. The first orthomosaic was generated without using control points. The other three mosaics were generated using 4, 8, and 16 premarked ground control points. To check the precision and accuracy of the orthomosaics, 46 premarked targets were uniformly distributed in the block. The three-dimensional (3-D) coordinates of the premarked targets were read on the orthomosaic and compared with the coordinates obtained by a geodetic survey using the real-time kinematic positioning method with global navigation satellite system receiver signals. The cartographic accuracy standard was evaluated by the discrepancies between these coordinates. Bias was analyzed by Student's t test and accuracy by the chi-square probability, considering the orthomosaic at a scale of 1:250, at which 90% of the points tested must fall within the allowed planimetric error; for the orthomosaic generated without ground control points, the achievable scale was 10-fold smaller (1:3000).

  7. High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller

    Science.gov (United States)

    Li, Yaoling; Wu, Zhong

    2018-03-01

    The problem of resolver-to-digital conversion (RDC) is transformed into a problem of angle tracking control, and a phase-locked loop (PLL) method based on a PID controller is proposed in this paper. The controller comprises a typical PI controller plus an incomplete differential, which avoids the amplification of higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a type-III system, and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
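    As a hedged illustration of the idea (gains, filter coefficient, and time step below are made up, not taken from the paper), a PID controller whose derivative term is low-pass filtered, i.e. an "incomplete differential", can be written as:

```python
class FilteredPID:
    """PI controller plus an incomplete (low-pass filtered) derivative.

    alpha in (0, 1] is the first-order filter coefficient applied to the
    raw derivative; alpha = 1 recovers an ordinary PID. All gains here
    are illustrative placeholders.
    """

    def __init__(self, kp, ki, kd, alpha, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha, self.dt = alpha, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0

    def update(self, err):
        self.integral += err * self.dt
        raw_d = (err - self.prev_err) / self.dt
        # Low-pass filter the derivative so higher-frequency noise in the
        # phase detection error is not amplified.
        self.d_filt += self.alpha * (raw_d - self.d_filt)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * self.d_filt
```

In the PLL context, `err` would be the phase detection error and the controller output would drive the tracked angle estimate.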

  8. KLEIN: Coulomb functions for real lambda and positive energy to high accuracy

    International Nuclear Information System (INIS)

    Barnett, A.R.

    1981-01-01

    KLEIN computes relativistic Schroedinger (Klein-Gordon) equation solutions, i.e. Coulomb functions for real lambda > -1: Fsub(lambda)(eta,x), Gsub(lambda)(eta,x), F'sub(lambda)(eta,x) and G'sub(lambda)(eta,x) for real kappa > 0 and real eta in the range -10^4 to 10^4. Hence it is also suitable for Bessel and spherical Bessel functions. Accuracies are in the range 10^-14 to 10^-16 in the oscillating region, and approximately 10^-30 on an extended-precision compiler. The program is suitable for generating Klein-Gordon wavefunctions for matching in pion and kaon physics. (orig.)

  9. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    Science.gov (United States)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to obtain N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to extract the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
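    The SMD focus measure used to rank the reconstructed slices can be sketched as below; this is a common formulation (sum of absolute differences between neighboring pixels), and the paper's exact variant may differ:

```python
def smd_blur_metric(img):
    """Sum-modulus-difference focus measure for a 2-D grayscale image
    given as a list of rows; sharper images score higher."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x > 0:  # horizontal neighbor difference
                total += abs(img[y][x] - img[y][x - 1])
            if y > 0:  # vertical neighbor difference
                total += abs(img[y][x] - img[y - 1][x])
    return total
```

In a depth-extraction loop, the slice whose metric peaks at a given region is taken as the in-focus depth for that region.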

  10. Usefulness of High-Frequency Ultrasound in the Classification of Histologic Subtypes of Primary Basal Cell Carcinoma.

    Science.gov (United States)

    Hernández-Ibáñez, C; Blazquez-Sánchez, N; Aguilar-Bernier, M; Fúnez-Liébana, R; Rivas-Ruiz, F; de Troya-Martín, M

    Incisional biopsy may not always provide a correct classification of histologic subtypes of basal cell carcinoma (BCC). High-frequency ultrasound (HFUS) imaging of the skin is useful for the diagnosis and management of this tumor. The main aim of this study was to compare the diagnostic value of HFUS with that of punch biopsy for the correct classification of histologic subtypes of primary BCC. We also analyzed the influence of tumor size and histologic subtype (single subtype vs. mixed) on the diagnostic yield of HFUS and punch biopsy. Retrospective observational study of primary BCCs treated by the Dermatology Department of Hospital Costa del Sol in Marbella, Spain, between October 2013 and May 2014. Surgical excision was preceded by HFUS imaging (Dermascan C©, 20-MHz linear probe) and a punch biopsy in all cases. We compared the overall diagnostic yield and accuracy (sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) of HFUS and punch biopsy against the gold standard (excisional biopsy with serial sections) for overall and subgroup results. We studied 156 cases. The overall diagnostic yield was 73.7% for HFUS (sensitivity, 74.5%; specificity, 73%) and 79.9% for punch biopsy (sensitivity, 76%; specificity, 82%). In the subgroup analyses, HFUS had a PPV of 93.3% for superficial BCC (vs. 92% for punch biopsy). In the analysis by tumor size, HFUS achieved an overall diagnostic yield of 70.4% for tumors measuring 40 mm² or less and 77.3% for larger tumors; the NPV was 82% in both size groups. Punch biopsy performed better in the diagnosis of small lesions (overall diagnostic yield of 86.4% for lesions ≤40 mm² vs. 72.6% for lesions >40 mm²). HFUS imaging was particularly useful for ruling out infiltrating BCCs, diagnosing simple, superficial BCCs, and correctly classifying BCCs larger than 40 mm². Copyright © 2016 AEDV. Published by Elsevier España, S.L.U. All rights reserved.

  11. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular Tetrahedral Grids

    Science.gov (United States)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of a gradient computation involving a triangular element hinges on the value of its shape factor Γ := sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. 
    Also a brief discussion on the extension of the current mathematical framework to the
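    The shape factor named in the abstract is easy to evaluate from side lengths via the law of cosines. The illustrative sketch below confirms that an equilateral triangle attains the maximal value 9/4, a highly obtuse triangle falls well below 1, and a thin right triangle (high aspect ratio but not obtuse) stays comparatively close to the maximum:

```python
import math

def shape_factor(a, b, c):
    """Shape factor Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) of a
    triangle with side lengths a, b, c, with internal angles recovered
    from the law of cosines."""
    def angle(opposite, s1, s2):
        return math.acos((s1 * s1 + s2 * s2 - opposite * opposite)
                         / (2.0 * s1 * s2))
    angles = [angle(a, b, c), angle(b, a, c), angle(c, a, b)]
    return sum(math.sin(t) ** 2 for t in angles)
```

For example, sides (1, 1, 1) give Γ = 9/4, sides (1, 1, 1.99) give a highly obtuse triangle with Γ well below 1, and legs (1, 10) with hypotenuse √101 give Γ ≈ 2, supporting the abstract's point that aspect ratio alone does not force a small shape factor.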

  12. Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Gangchen Hua

    2017-01-01

    Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrectly matched visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; no 3-D measurement is necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method can effectively correct the typical false-positive localizations of previous methods, thus achieving better recall ratios and better precision.

  13. The use of high accuracy NAA for the certification of NIST Standard Reference Materials

    International Nuclear Information System (INIS)

    Becker, D.A.; Greenberg, R.R.; Stone, S.

    1991-01-01

    Neutron activation analysis (NAA) is only one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). We compete daily against all of the other available analytical techniques in terms of accuracy, precision, and the cost required to obtain that requisite accuracy and precision. Over the years, the authors have found that NAA can and does compete favorably with these other techniques because of its unique capabilities for redundancy and quality assurance. Good examples are the two new NIST leaf SRMs, Apple Leaves (SRM 1515) and Peach Leaves (SRM 1547). INAA was used to measure the homogeneity of 12 elements in 15 samples of each material at the 100 mg sample size. In addition, instrumental and radiochemical NAA combined for 27 elemental determinations, out of a total of 54 elemental determinations made on each material with all NIST techniques combined. This paper describes the NIST NAA procedures used in these analyses, the quality assurance techniques employed, and the analytical results for the 24 elements determined by NAA in these new botanical SRMs. The NAA results are also compared to the final certified values for these SRMs.

  14. High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".

    Science.gov (United States)

    Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo

    2004-03-01

    Three-dimensional digital modeling of heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance, which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from project planning to the solution of the various problems due to range camera calibration and to non-optically-cooperative material. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. This control has demonstrated how, in some situations, ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry, and a non-invasive procedure to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.
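    As context for the ICP discussion above, the inner least-squares step of a 2-D ICP iteration (best rigid rotation plus translation for already-paired points) can be sketched as follows; this is a generic textbook formulation, not the authors' fusion procedure:

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rotation angle and translation mapping paired 2-D
    points src onto dst: the inner step of an ICP iteration."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy  # centered source point
        bx, by = u - cdx, v - cdy  # centered destination point
        sxx += ax * bx + ay * by   # sum of dot products
        sxy += ax * by - ay * bx   # sum of 2-D cross products
    theta = math.atan2(sxy, sxx)
    # Translation carries the rotated source centroid onto dst's centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

A full ICP loop re-pairs each source point with its nearest destination point, solves this fit, applies it, and repeats until convergence, which is where the incorrect-alignment failure mode mentioned in the abstract (convergence to a wrong local minimum) can arise.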

  15. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
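    The convex-hull definition of the loaf perimeter used above can be illustrated with a generic monotone-chain hull; this is an assumption for illustration, since the paper's production algorithm is not given in the abstract:

```python
import math

def convex_hull_perimeter(points):
    """Perimeter of the convex hull of a set of 2-D points, computed
    with Andrew's monotone chain algorithm."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0  # degenerate cross-section: no closed hull

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]  # counter-clockwise hull vertices
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```

Interior points from the segmented laser line (e.g. crevices in the crust) do not affect the result, which is what makes the convex hull a robust perimeter definition for an approximately cylindrical loaf.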

  16. CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY

    DEFF Research Database (Denmark)

    Elenius Madsen, Daniel

    2012-01-01

    Title: CLASSIFICATION OF ORTHOGNATHIC SURGERY PATIENTS INTO LOW AND HIGH BLEEDING RISK GROUPS USING THROMBELASTOGRAPHY Objectives: Orthognathic surgery involves surgical manipulation of jaw and face skeletal structure. A subgroup of patients undergoing orthognathic surgery suffers from excessive...... into account the complex interplay between coagulation factors, blood platelets and components of the fibrinolytic system. Patients undergoing orthognathic surgery were included in this prospective study, and their preoperative thrombelastographic data were collected and compared to their intraoperative blood...... predictive values. An α angleex above 67° predicted, with 95% certainty, a blood loss below 400 mL, and a receiver-operating characteristic (ROC) curve showed an area under the curve (AUC) of 0.8. Conclusion: By means of the α angleex it is possible to separate orthognathic surgery patients according...

  17. Association between traditional clinical high-risk features and gene expression profile classification in uveal melanoma.

    Science.gov (United States)

    Nguyen, Brandon T; Kim, Ryan S; Bretana, Maria E; Kegley, Eric; Schefler, Amy C

    2018-02-01

    To evaluate the association between traditional clinical high-risk features of uveal melanoma patients and gene expression profile (GEP). This was a retrospective, single-center case series of patients with uveal melanoma. Eighty-three patients met inclusion criteria for the study. Patients were examined for the following clinical risk factors: drusen/retinal pigment epithelium (RPE) changes, vascularity on B-scan, internal reflectivity on A-scan, subretinal fluid (SRF), orange pigment, apical tumor height/thickness, and largest basal dimension (LBD). A novel point system was created to grade the high-risk clinical features of each tumor. Further analyses were performed to assess the degree of association between GEP and each individual risk factor, total clinical risk score, vascularity, internal reflectivity, American Joint Committee on Cancer (AJCC) tumor stage classification, apical tumor height/thickness, and LBD. Of the 83 total patients, 41 were classified as GEP class 1A, 17 as class 1B, and 25 as class 2. The presence of orange pigment, SRF, low internal reflectivity and vascularity on ultrasound, and apical tumor height/thickness ≥ 2 mm were not statistically significantly associated with GEP class. Lack of drusen/RPE changes demonstrated a trend toward association with GEP class 2 compared to class 1A/1B. LBD and advancing AJCC stage were statistically associated with higher GEP class. In this cohort, AJCC stage classification and LBD were the only clinical features statistically associated with GEP class. Clinicians should use caution when inferring the growth potential of melanocytic lesions solely from traditional funduscopic and ultrasonographic risk factors without GEP data.

  18. Interobserver Variability and Accuracy of High-Definition Endoscopic Diagnosis for Gastric Intestinal Metaplasia among Experienced and Inexperienced Endoscopists

    Science.gov (United States)

    Hyun, Yil Sik; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-01-01

    Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing it. The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases imaged with HD endoscopy were sent to five experienced and five inexperienced endoscopists for diagnosis of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement was "poor" among the experienced endoscopists (κ = 0.38) and also "poor" among the inexperienced endoscopists (κ = 0.33). The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable, all areas suspicious for gastric IM should be biopsied. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy for gastric IM. PMID:23678267

  19. Interobserver variability and accuracy of high-definition endoscopic diagnosis for gastric intestinal metaplasia among experienced and inexperienced endoscopists.

    Science.gov (United States)

    Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-05-01

    Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing it. The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases imaged with HD endoscopy were sent to five experienced and five inexperienced endoscopists for diagnosis of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement was "poor" among the experienced endoscopists (κ = 0.38) and also "poor" among the inexperienced endoscopists (κ = 0.33). The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable, all areas suspicious for gastric IM should be biopsied. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy for gastric IM.
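    The κ values quoted in the two records above are Cohen's kappa, a chance-corrected measure of agreement between two raters. A minimal sketch of the computation:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' labels over the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal label frequencies.
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)
```

Values around 0.33-0.38, as reported here, are conventionally read as only fair-to-poor agreement, which is the basis of the records' conclusion.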

  20. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied in the preprocessing stage of automatic video summarization or the retrieval and classification of broadcast video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision-tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with high accuracy and that the classification performance with multimodal features is superior to that with unimodal features.
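    A CART-trained decision tree of the kind mentioned above grows by repeatedly choosing the split that minimizes weighted Gini impurity. A toy single-feature version (feature values and genre labels below are hypothetical, not from the paper's MPEG-7 descriptors):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_threshold(values, labels):
    """Exhaustive CART-style search for the binary split (value <= t)
    of one numeric feature that minimizes weighted Gini impurity.
    Returns (threshold, weighted_impurity)."""
    n = len(values)
    best = (None, float('inf'))
    for t in sorted(set(values))[:-1]:  # candidate thresholds
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (t, score)
    return best
```

A full CART tree applies this search to every feature at every node and recurses on the two resulting subsets until a stopping criterion is met.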

  1. CLASSIFIER FUSION OF HIGH-RESOLUTION OPTICAL AND SYNTHETIC APERTURE RADAR (SAR) SATELLITE IMAGERY FOR CLASSIFICATION IN URBAN AREA

    Directory of Open Access Journals (Sweden)

    T. Alipour Fard

    2014-10-01

    Full Text Available This study is concerned with the fusion of synthetic aperture radar (SAR) and optical satellite imagery. Due to the difference in the underlying sensor technology, data from SAR and optical sensors refer to different properties of the observed scene, and it is believed that when they are fused together they complement each other to improve the performance of a particular application. In this paper, two categories of features are generated, and six classifier fusion operators are implemented and evaluated. Implementation results show significant improvement in the classification accuracy.

  2. A proposed classification system for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1987-06-01

    This report presents a proposal for quantitative and generally applicable risk-based definitions of high-level and other radioactive wastes. On the basis of historical descriptions and definitions of high-level waste (HLW), in which HLW has been defined in terms of its source as waste from reprocessing of spent nuclear fuel, we propose a more general definition based on the concept that HLW has two distinct attributes: HLW is (1) highly radioactive and (2) requires permanent isolation. This concept leads to a two-dimensional waste classification system in which one axis, related to ''requires permanent isolation,'' is associated with long-term risks from waste disposal and the other axis, related to ''highly radioactive,'' is associated with shorter-term risks due to high levels of decay heat and external radiation. We define wastes that require permanent isolation as wastes with concentrations of radionuclides exceeding the Class-C limits that are generally acceptable for near-surface land disposal, as specified in the US Nuclear Regulatory Commission's rulemaking 10 CFR Part 61 and its supporting documentation. HLW then is waste requiring permanent isolation that also is highly radioactive, and we define ''highly radioactive'' as a decay heat (power density) in the waste greater than 50 W/m³ or an external radiation dose rate at a distance of 1 m from the waste greater than 100 rem/h (1 Sv/h), whichever is the more restrictive. This proposal also results in a definition of Transuranic (TRU) Waste and Equivalent as waste that requires permanent isolation but is not highly radioactive, and a definition of low-level waste (LLW) as waste that does not require permanent isolation, without regard to whether or not it is highly radioactive.

  3. Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications

    Directory of Open Access Journals (Sweden)

    Van-Tang PHAM

    2015-12-01

    Full Text Available Using magnetic compass sensors for angle measurement has a wide area of application, such as positioning, robotics, landslide monitoring, etc. However, one of the main phenomena affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to the variation of the temperature. The second scheme improves both the temperature working range and the steady-state error performance of the sensor. In this scheme, we try to keep the temperature of the sensor stable at a certain value (e.g., 25 °C) by using a PID (proportional-integral-derivative) controller and a heating/cooling generator. Many experimental scenarios have been implemented to confirm the effectiveness of these solutions.

  4. Hyperbolic Method for Dispersive PDEs: Same High-Order of Accuracy for Solution, Gradient, and Hessian

    Science.gov (United States)

    Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki

    2016-01-01

    In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in a very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.

  5. High-accuracy energy formulas for the attractive two-site Bose-Hubbard model

    Science.gov (United States)

    Ermakov, Igor; Byrnes, Tim; Bogoliubov, Nikolay

    2018-02-01

    The attractive two-site Bose-Hubbard model is studied within the framework of the analytical solution obtained by the application of the quantum inverse scattering method. The structure of the ground and excited states is analyzed in terms of solutions of Bethe equations, and an approximate solution for the Bethe roots is given. This yields approximate formulas for the ground-state energy and for the first excited-state energy. The obtained formulas work with remarkable precision for a wide range of parameters of the model, and are confirmed numerically. An expansion of the Bethe state vectors into a Fock space is also provided for evaluation of expectation values, although this does not have accuracy similar to that of the energies.

  6. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Directory of Open Access Journals (Sweden)

    Majda Paweł

    2017-11-01

    Full Text Available This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe’s numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.

  7. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Science.gov (United States)

    Majda, Paweł; Powałka, Bartosz

    2017-11-01

    This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe's numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.

  8. A method of high accuracy clock synchronization by frequency following with VCXO

    International Nuclear Information System (INIS)

    Ma Yichao; Wu Jie; Zhang Jie; Song Hongzhi; Kong Yang

    2011-01-01

    In this paper, the principle of the IEEE 1588 synchronization protocol is analyzed, and the factors that affect the accuracy of synchronization are summarized. Using the hardware timer in a microcontroller, we capture the exact time when a packet is sent or received, so the synchronization of distributed clocks can reach 1 μs in this way. Another method to improve the precision of synchronization is to replace the traditional fixed-frequency crystal of the slave device, which needs to follow the master clock, with an adjustable VCXO. This makes it possible to fine-tune the frequency of the distributed clocks and reduce clock drift, which greatly benefits clock synchronization. A test measurement shows that the synchronization of distributed clocks can be better than 10 ns using this method, which is more accurate than the software-only approach. (authors)
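The IEEE 1588 exchange underlying the scheme above yields four timestamps, from which offset and path delay are computed under a symmetric-delay assumption. A minimal sketch (the timestamp values are made up for illustration):

```python
# IEEE 1588 delay request-response arithmetic.
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Slave clock offset and one-way path delay, assuming the
    path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 5 us ahead; one-way path delay is 2 us.
offset, delay = ptp_offset_and_delay(t1=100.0, t2=107.0, t3=120.0, t4=117.0)
print(offset, delay)  # 5.0 2.0
```

The paper's contribution is orthogonal to this arithmetic: hardware timestamping makes t1..t4 accurate, and the VCXO lets the slave correct its frequency rather than repeatedly stepping its time.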

  9. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers...

  10. The influence of different classification standards of age groups on prognosis in high-grade hemispheric glioma patients.

    Science.gov (United States)

    Chen, Jian-Wu; Zhou, Chang-Fu; Lin, Zhi-Xiong

    2015-09-15

    Although age is thought to correlate with the prognosis of glioma patients, the most appropriate age-group classification standard for evaluating prognosis has not been fully studied. This study aimed to investigate the influence of age-group classification standards on the prognosis of patients with high-grade hemispheric glioma (HGG). This retrospective study of 125 HGG patients used three different classification standards of age-groups (≤ 50 and > 50 years old; ≤ 60 and > 60 years old; ≤ 45, 45-65, and ≥ 65 years old) to evaluate the impact of age on prognosis. The primary end-point was overall survival (OS). The Kaplan-Meier method was applied for univariate analysis and the Cox proportional hazards model for multivariate analysis. Univariate analysis showed a significant correlation between OS and all three classification standards of age-groups, as well as between OS and pathological grade, gender, location of glioma, and regular chemotherapy and radiotherapy treatment. Multivariate analysis showed that the only independent predictors of OS were the classification standard of age-groups ≤ 50 and > 50 years old, pathological grade and regular chemotherapy. In summary, the most appropriate classification standard of age-groups as an independent prognostic factor was ≤ 50 and > 50 years old. Pathological grade and chemotherapy were also independent predictors of OS in post-operative HGG patients. Copyright © 2015. Published by Elsevier B.V.

  11. STTR Phase I: Low-Cost, High-Accuracy, Whole-Building Carbon Dioxide Monitoring for Demand Control Ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Hallstrom, Jason; Ni, Zheng Richard

    2018-05-15

    This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. Early exploration of a bridge (or “gateway”) to direct digital control services was also conducted. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft – meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.

  12. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods for simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of the convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near-wall resolution (NWR), allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are of sufficiently high accuracy.

  13. A proposed classification system for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1989-01-01

    On the basis of the definition of high-level wastes (HLW) in the Nuclear Waste Policy Act of 1982 and previous descriptions of reprocessing wastes, a definition is proposed based on the concept that HLW is any waste which is highly radioactive and requires permanent isolation. This conceptual definition of HLW leads to a two-dimensional waste classification system in which one axis, related to 'highly radioactive', is associated with shorter-term risks from waste management and disposal due to high levels of decay heat and external radiation, and the other axis, related to 'requires permanent isolation', is associated with longer-term risks from waste disposal. Wastes that are highly radioactive are defined quantitatively as wastes with a decay heat (power density) greater than 50 W/m³ or an external dose-equivalent rate greater than 100 rem/h (1 Sv/h) at a distance of 1 m from the waste, whichever is more restrictive. Wastes that require permanent isolation are defined quantitatively as wastes with concentrations of radionuclides greater than the Class-C limits that are generally acceptable for near-surface land disposal, as obtained from the Nuclear Regulatory Commission's 10 CFR Part 61 and its associated methodology. This proposal leads to similar definitions of two other waste classes: transuranic (TRU) and equivalent waste is any waste that requires permanent isolation but is not highly radioactive; and low-level waste (LLW) is any waste that does not require permanent isolation, without regard to whether or not it is highly radioactive. 31 refs.; 3 figs.; 4 tabs

  14. Radiometric inter-sensor cross-calibration uncertainty using a traceable high accuracy reference hyperspectral imager

    Science.gov (United States)

    Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig

    2017-08-01

    Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite-derived information, and is particularly critical for long time-base studies for climate change and for enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue by re-calibrating itself directly to a primary standard of the international system of units (SI) in orbit, and then extending this SI-traceability to other sensors through in-flight cross-calibration using a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allow, this will result in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies that have been developed and used to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration using a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as TRUTHS. In this study, the Multi-Spectral Imager (MSI) of Sentinel-2 and the Landsat-8 Operational Land Imager (OLI) are evaluated as examples; however, the analysis is readily translatable to larger-footprint sensors such as the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Visible Infrared Imaging Radiometer Suite (VIIRS). This study considers the criticality of the instrumental and observational characteristics on pixel-level reflectance factors, within a defined spatial region of interest (ROI) within the target site. It quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool

  15. Molecular classification of fatty liver by high-throughput profiling of protein post-translational modifications.

    Science.gov (United States)

    Urasaki, Yasuyo; Fiscus, Ronald R; Le, Thuc T

    2016-04-01

    We describe an alternative approach to classifying fatty liver by profiling protein post-translational modifications (PTMs) with high-throughput capillary isoelectric focusing (cIEF) immunoassays. Four strains of mice were studied, with fatty livers induced by different causes, such as ageing, genetic mutation, acute drug usage, and high-fat diet. Nutrient-sensitive PTMs of a panel of 12 liver metabolic and signalling proteins were simultaneously evaluated with cIEF immunoassays, using nanograms of total cellular protein per assay. Changes to liver protein acetylation, phosphorylation, and O-N-acetylglucosamine glycosylation were quantified and compared between normal and diseased states. Fatty liver tissues could be distinguished from one another by distinctive protein PTM profiles. Fatty liver is currently classified by morphological assessment of lipid droplets, without identifying the underlying molecular causes. In contrast, high-throughput profiling of protein PTMs has the potential to provide molecular classification of fatty liver. Copyright © 2016 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  16. The Generalized Gamma-DBN for High-Resolution SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Zhiqiang Zhao

    2018-06-01

    Full Text Available With the increase of resolution, effective characterization of synthetic aperture radar (SAR) images becomes one of the most critical problems in many earth observation applications. Inspired by deep learning and probability mixture models, a generalized Gamma deep belief network (gΓ-DBN) is proposed for SAR image statistical modeling and land-cover classification in this work. Specifically, a generalized Gamma-Bernoulli restricted Boltzmann machine (gΓB-RBM) is proposed to capture high-order statistical characteristics from SAR images after introducing the generalized Gamma distribution. After stacking the gΓB-RBM and several standard binary RBMs in a hierarchical manner, a gΓ-DBN is constructed to learn high-level representations of different SAR land-covers. Finally, a discriminative neural network is constructed by adding a prediction layer for the different land-covers on top of the deep structure. Performance of the proposed approach is evaluated via several experiments on high-resolution SAR image patch sets and on two large-scale scenes captured by the ALOS PALSAR-2 and COSMO-SkyMed satellites, respectively.
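The building block being stacked here is the RBM trained layer by layer. As a generic illustration only (a plain Bernoulli RBM with contrastive divergence, not the paper's generalized Gamma-Bernoulli variant; sizes and learning rate are arbitrary), one CD-1 update looks like:

```python
# One contrastive-divergence (CD-1) update for a small Bernoulli RBM.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # one training vector

# Positive phase: hidden probabilities given the data, then a sample.
p_h0 = sigmoid(v0 @ W + b_h)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# Negative phase: one Gibbs step back down to the visibles and up again.
p_v1 = sigmoid(h0 @ W.T + b_v)
p_h1 = sigmoid(p_v1 @ W + b_h)

# Gradient step: data statistics minus one-step reconstruction statistics.
W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
b_v += lr * (v0 - p_v1)
b_h += lr * (p_h0 - p_h1)
print(W.shape)  # (6, 4)
```

In a DBN such as the paper's, the hidden activations of one trained RBM become the visible data for the next; the gΓB-RBM replaces the Bernoulli visible units with generalized-Gamma-distributed ones to match SAR amplitude statistics.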

  17. The high accuracy data processing system of laser interferometry signals based on MSP430

    Science.gov (United States)

    Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong

    2009-07-01

    Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually suffer from three errors: zero-offset error, unequal-amplitude error, and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved based on the Heydemann error-compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the calculation time by exploiting the fact that only one item of data changes in each fitting iteration. A real-time, dynamic compensation circuit was then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by the AD7862. After processing with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit. The pulses from this circuit are counted by the microcontroller's timer. From the pulse count and software subdivision, the final result is displayed on an LED. The algorithm and circuit were used to test a laser interferometer with 8-times optical path difference, and a measuring accuracy of 12-14 nm was achieved.
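Once the three error parameters have been fitted, the Heydemann-style correction itself is a short closed-form mapping from the distorted quadrature pair back to an ideal sine/cosine pair. A sketch (the error parameter values are invented for illustration; the fitting step is assumed done):

```python
# Correcting quadrature signals with known zero offsets (p, q),
# amplitude ratio (r), and quadrature phase error (alpha).
import math

p, q, r, alpha = 0.12, -0.08, 1.15, math.radians(4.0)  # fitted errors

def distort(theta, amplitude=1.0):
    """Model of the imperfect interferometer outputs."""
    u1 = amplitude * math.cos(theta) + p
    u2 = (amplitude / r) * math.sin(theta - alpha) + q
    return u1, u2

def heydemann_correct(u1, u2):
    """Invert the offsets, gain mismatch, and quadrature phase error."""
    x = u1 - p
    y = (x * math.sin(alpha) + r * (u2 - q)) / math.cos(alpha)
    return math.atan2(y, x)  # recovered interference phase

theta_true = 1.2345
phase = heydemann_correct(*distort(theta_true))
print(abs(phase - theta_true) < 1e-9)  # True
```

The subdivision precision quoted in the abstract depends directly on this step: uncorrected offset, amplitude, and phase errors turn the ideal circular Lissajous figure into a shifted ellipse, which biases the arctangent-based phase.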

  18. A new phase-shift microscope designed for high accuracy stitching interferometry

    International Nuclear Information System (INIS)

    Thomasset, Muriel; Idir, Mourad; Polack, François; Bray, Michael; Servant, Jean-Jacques

    2013-01-01

    Characterizing nanofocusing X-ray mirrors for the coming nano-imaging beamlines of synchrotron light sources motivates the development of new instruments with improved performance. The sensitivity and accuracy goal is now fixed well under the nm level and, at the same time, the spatial frequency range of the measurement should be pushed toward 50 mm⁻¹. The SOLEIL synchrotron facility has therefore undertaken to equip itself with an interferential microscope suitable for stitching interferometry at this performance level. In order to keep control of the whole metrology chain, it was decided to build a custom instrument in partnership with two small optics companies, EOTECH and MBO. The new instrument is a Michelson micro-interferometer equipped with a custom-designed telecentric objective. It achieves the large depth of focus needed for performing reliable calibrations and measurements. The concept was validated with a pre-development set-up, delivered in July 2010, which showed a static repeatability below 1 nm PV despite a non-thermally-stabilized environment. The final instrument was delivered early this year and was installed inside SOLEIL's controlled environment facility, where thorough characterization tests are under way. The latest test results and first stitching measurements are presented

  19. Experimental study of very low permeability rocks using a high accuracy permeameter

    International Nuclear Information System (INIS)

    Larive, Elodie

    2002-01-01

    The measurement of fluid flow through 'tight' rocks is important for a better understanding of the physical processes involved in several industrial and natural problems. These include deep nuclear waste repositories, management of aquifers, gas, petroleum or geothermal reservoirs, and earthquake prevention. The major part of this work consisted of the design, construction and use of an elaborate experimental apparatus allowing laboratory permeability (fluid flow) measurements of very low permeability rocks, on samples at a centimetric scale, to constrain their hydraulic behaviour at realistic in-situ conditions. The high-accuracy permeameter allows the use of several measurement methods: the steady-state flow method, the transient pulse method, and the sinusoidal pore-pressure oscillation method. Measurements were made with the pore-pressure oscillation method, using different waveform periods, at several pore and confining pressure conditions, on different materials. The permeability of one natural standard, Westerly granite, and an artificial one, a micro-porous cement, were measured, and the results agreed with previous measurements made on these materials, showing the reliability of the permeameter. A study of a Yorkshire sandstone shows a relationship between rock microstructure, permeability anisotropy and thermal cracking. Microstructure, porosity and permeability concepts, and laboratory permeability measurement specifications are presented, the permeameter is described, and then the permeability results obtained on the investigated materials are reported [fr]

  20. Demonstrating High-Accuracy Orbital Access Using Open-Source Tools

    Science.gov (United States)

    Gilbertson, Christian; Welch, Bryan

    2017-01-01

    Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility, while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform 2- and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for intermediate oblate spheroid body blocking, geometric restrictions of the antenna field-of-view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.
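At its core, the 2-body propagation mentioned above is the numerical integration of r'' = -μ r / |r|³. A minimal planar RK4 sketch (this is a generic illustration, not the SCENIC/GSFC toolbox code; the circular-orbit initial state is an arbitrary choice, units km and s):

```python
# Two-body orbit propagation with a fixed-step RK4 integrator.
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def deriv(state):
    """Time derivative of (x, y, vx, vy) under point-mass gravity."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -MU_EARTH * x / r3, -MU_EARTH * y / r3)

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Circular orbit at 7000 km radius: v = sqrt(mu / r).
r0 = 7000.0
state = (r0, 0.0, 0.0, math.sqrt(MU_EARTH / r0))
for _ in range(1000):  # propagate 10000 s in 10 s steps
    state = rk4_step(state, 10.0)

radius = math.hypot(state[0], state[1])
print(round(radius, 3))  # radius stays near 7000 km on a circular orbit
```

LOS access analysis then reduces to geometry on the propagated positions, e.g. testing whether the segment between two satellites intersects the occluding body.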

  1. A study for high accuracy measurement of residual stress by deep hole drilling technique

    Science.gov (United States)

    Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito

    2012-08-01

    The deep hole drilling technique (DHD) received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems occur when residual stress evaluation is performed by the DHD technique. One of the reasons is that the traditional DHD evaluation formula applies to the plane stress condition. The second is that the effects of the plastic deformation produced in the drilling process and the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula, which is applied to the plane strain condition, is proposed. In addition, a new procedure is proposed which can consider the effects of the deformation produced in the DHD process by investigating the effects in detail by finite element (FE) analysis. Then, the evaluation results obtained by the new procedure are compared with that obtained by traditional DHD procedure by FE analysis. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measuring object is thick enough that the stress condition can be assumed as the plane strain condition as in the model used in this study.

  2. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-01-01

    This work deals with the critical aspects related to cost reduction of a Tomo PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution is camera orientation dependent. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particles motion. Guidelines for the implementation and the application of the present method are proposed. The performances are assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV. (paper)

  3. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

    Full Text Available Electroencephalogram (EEG) recordings provide an important means of brain-computer communication, but the accuracy of their classification is severely limited by unforeseeable signal variations relating to artifacts. In this paper, we propose a classification method for time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject’s mental tasks by extracting useful information from the highly multivariate non-invasive recordings of brain activity. To test the improvement in EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). The results showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate was 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.
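The training mechanism behind such a classifier is the backpropagation loop: a forward pass, error gradients propagated layer by layer, then a gradient-descent weight update. As a generic illustration on a toy task (network size, learning rate, and the XOR data are arbitrary choices, not the paper's EEG configuration):

```python
# A tiny two-layer network trained by backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8))
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradients through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Full-batch gradient descent update.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

accuracy = ((out > 0.5) == y).mean()
print(accuracy)  # training accuracy on the four XOR patterns
```

For EEG-P300 the same loop applies, with the input layer fed by features extracted from the multichannel recordings and the output indicating P300 presence.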

  4. High-resolution MR imaging of talar osteochondral lesions with new classification

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, James Francis; Lau, Domily Ting Yi; Yeung, David Ka Wai [Prince of Wales Hospital, Chinese University of Hong Kong, Department of Imaging and Interventional Radiology, Shatin, NT (China); Wong, Margaret Wan Nar [Prince of Wales Hospital, Chinese University of Hong Kong, Department of Orthopaedics and Traumatology, Shatin (China)

    2012-04-15

    Retrospective review of high-resolution MR imaging features of talar dome osteochondral lesions and development of new classification system based on these features. Over the past 7 years, 70 osteochondral lesions of the talar dome from 70 patients (49 males, 21 females, mean age 42 years, range 15-62 years) underwent high-resolution MR imaging with a microscopy coil at 1.5 T. Sixty-one (87%) of 70 lesions were located on the medial central aspect and ten (13%) lesions were located on the lateral central aspect of the talar dome. Features evaluated included cartilage fracture, osteochondral junction separation, subchondral bone collapse, bone:bone separation, and marrow change. Based on these findings, a new five-part grading system was developed. Signal-to-noise characteristics of microscopy coil imaging at 1.5 T were compared to dedicated ankle coil imaging at 3 T. Microscopy coil imaging at 1.5 T yielded 20% better signal-to-noise characteristics than ankle coil imaging at 3 T. High-resolution MR revealed that osteochondral junction separation, due to focal collapse of the subchondral bone, was a common feature, being present in 28 (45%) of 61 medial central osteochondral lesions. Reparative cartilage hypertrophy and bone:bone separation in the absence of cartilage fracture were also common findings. Complete osteochondral separation was uncommon. A new five-part grading system incorporating features revealed by high-resolution MR imaging was developed. High-resolution MRI reveals clinically pertinent features of talar osteochondral lesions, which should help comprehension of symptomatology and enhance clinical decision-making. These features were incorporated in a new MR-based grading system. Whenever possible, symptomatic talar osteochondral lesions should be assessed by high-resolution MR imaging. (orig.)

  5. High-resolution MR imaging of talar osteochondral lesions with new classification

    International Nuclear Information System (INIS)

    Griffith, James Francis; Lau, Domily Ting Yi; Yeung, David Ka Wai; Wong, Margaret Wan Nar

    2012-01-01

    Retrospective review of high-resolution MR imaging features of talar dome osteochondral lesions and development of new classification system based on these features. Over the past 7 years, 70 osteochondral lesions of the talar dome from 70 patients (49 males, 21 females, mean age 42 years, range 15-62 years) underwent high-resolution MR imaging with a microscopy coil at 1.5 T. Sixty-one (87%) of 70 lesions were located on the medial central aspect and ten (13%) lesions were located on the lateral central aspect of the talar dome. Features evaluated included cartilage fracture, osteochondral junction separation, subchondral bone collapse, bone:bone separation, and marrow change. Based on these findings, a new five-part grading system was developed. Signal-to-noise characteristics of microscopy coil imaging at 1.5 T were compared to dedicated ankle coil imaging at 3 T. Microscopy coil imaging at 1.5 T yielded 20% better signal-to-noise characteristics than ankle coil imaging at 3 T. High-resolution MR revealed that osteochondral junction separation, due to focal collapse of the subchondral bone, was a common feature, being present in 28 (45%) of 61 medial central osteochondral lesions. Reparative cartilage hypertrophy and bone:bone separation in the absence of cartilage fracture were also common findings. Complete osteochondral separation was uncommon. A new five-part grading system incorporating features revealed by high-resolution MR imaging was developed. High-resolution MRI reveals clinically pertinent features of talar osteochondral lesions, which should help comprehension of symptomatology and enhance clinical decision-making. These features were incorporated in a new MR-based grading system. Whenever possible, symptomatic talar osteochondral lesions should be assessed by high-resolution MR imaging. (orig.)

  6. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

Full Text Available Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) imagery acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges in processing, analysing and classifying them effectively, owing to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning have been developed over the past decades, most of them are geared toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree drawing on both the CNN and the MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way towards fully automatic and effective VHR image classification.
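The partition-and-reclassify idea in this abstract can be sketched without the deep models: a primary classifier's confident predictions are kept, and the uncertain ones are re-decided by combining the probabilities of both models. The sketch below is a loose illustration on synthetic data; the small scikit-learn models, the 0.9 confidence threshold, and the equal-weight fusion are all stand-ins for the paper's CNN, MLP, rough-set regions, and decision tree, not the authors' implementation.

```python
# Confidence-partitioned fusion of two classifiers: trust the primary
# model where it is confident, arbitrate elsewhere with both models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

primary = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
secondary = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

proba = primary.predict_proba(X_te)
confident = proba.max(axis=1) >= 0.9          # "certain" region is trusted
pred = primary.predict(X_te)
# uncertain region: re-decide using both models' probabilities
fused = 0.5 * proba + 0.5 * secondary.predict_proba(X_te)
pred[~confident] = fused[~confident].argmax(axis=1)
accuracy = (pred == y_te).mean()
print(round(accuracy, 3))
```

The threshold controls how much of the map is handed to the fallback stage; in the paper this partition is derived from the rough set model rather than a fixed cut-off.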

  7. ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    ISPA offers ten times better resolution than Anger cameras, high-efficiency single-gamma counting, and noise reduction through sensitivity to gamma energy, for Single Photon Emission Computed Tomography (SPECT).

  8. Anatomy of the minor fissure: assessment with high-resolution CT and classification

    International Nuclear Information System (INIS)

    Ariyuerek, Macit O.; Yelgec, Selcuk N.; Guelsuen, Meltem; Karabulut, Nevzat

    2002-01-01

The aims of this study were to investigate the anatomy of the minor fissure and its variations on high-resolution CT (HRCT) sections and to propose a detailed classification. The prospective study included 67 patients who were referred for CT for various indications. High-resolution CT examinations (1.5-mm collimation) were obtained through the region of the minor fissure. The CT scans were assessed for the presence, completeness, and configuration of the minor fissure. The various configurations of the minor fissure were classified into four major types, based on whether the highest portion of the middle lobe upper surface was medial (type I), lateral (type II), posterior (type III), or central (type IV). The minor fissure was identified in 65 (97%) of 67 patients and was absent in 2 (3%). The fissure was incomplete in 35 (54%) of the 65 patients. A type-I minor fissure was seen in 28 (43%) patients, type II in 22 (34%), type III in 5 (8%), and type IV in 2 (3%). In 8 (12%) of the 35 patients with an incomplete fissure, the majority of the fissure was absent, and these were considered indeterminate. Comprehensive knowledge of the various configurations of the minor fissure is helpful in correctly localizing a lesion and its extension. In equivocal cases, limited thin-section CT scans through the fissure delineate the anatomy more clearly and provide a greater degree of precision in localizing pulmonary lesions. (orig.)

  9. Object-oriented classification of land use in urban areas applying very high resolution satellite data

    International Nuclear Information System (INIS)

    Bauer, T.B.

    2001-08-01

The availability of new very high resolution satellite imagery offers a wide range of new applications in the field of remote sensing. Up-to-date information about actual land use is essential for management and planning in urban areas. High-resolution satellite data will be an alternative to aerial photographs for updating and maintaining cartographic and geographic databases at reduced cost. The aim of the research is to formalize the visual interpretation procedure in order to automate the whole process. The assumption underlying this approach is that land-use functions can be distinguished on the basis of differences in the spatial distribution and pattern of land-cover forms. Therefore, a two-stage classification procedure is applied. In the first stage a land cover map is produced. In the second stage the morphological properties and spatial patterns of the land-cover objects are analyzed with a structural analyzing and mapping system, leading to a characterization and description of distinct urban land-use categories. This information is then used to build a rule system that is implemented in a commercial software tool called eCognition. An object-oriented classifier applies the rules to the land-cover objects, resulting in the required land-use map. The potential of this method is demonstrated in a case study using IKONOS data covering part of the metropolitan area of Vienna. (author)

  10. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    Science.gov (United States)

    Sharghi, Elan

The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by human image analysts. It has become necessary to develop a tool that automates the job of an image analyst by intelligently detecting and classifying objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
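The classification stage described above (extracted features fed to a KNN or SVM classifier, with parameter options explored) can be sketched with scikit-learn. Synthetic features stand in for the thesis's texture, Hough, and wavelet features, and the parameter grids are illustrative assumptions, not the thesis's settings:

```python
# Compare KNN and SVM classifiers over small parameter grids,
# as in the thesis's classifier-selection stage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic stand-in for cloud-vs-ship feature vectors
X, y = make_classification(n_samples=400, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

searches = {
    "knn": GridSearchCV(KNeighborsClassifier(),
                        {"n_neighbors": [1, 3, 5, 9]}, cv=5),
    "svm": GridSearchCV(SVC(),
                        {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5),
}
for name, gs in searches.items():
    gs.fit(X_tr, y_tr)  # cross-validated search for the optimal parameters
    print(name, gs.best_params_, round(gs.score(X_te, y_te), 3))
```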

  11. Random forests for classification in ecology

    Science.gov (United States)

    Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.

    2007-01-01

Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature. © 2007 by the Ecological Society of America.
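The workflow the authors describe, cross-validated accuracy plus a variable-importance ranking from a random forest, can be sketched in a few lines. Synthetic data stands in for the presence/absence surveys; the forest size and feature counts are illustrative choices:

```python
# Random forest classification: cross-validated accuracy and
# variable importance, the two quantities highlighted in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(rf, X, y, cv=5).mean()         # cross-validated accuracy
rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most important first
print(round(acc, 3), ranking[:3])
```

Inspecting `ranking` against domain expectations mirrors the authors' check that the top-ranked variables coincided with the literature.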

  12. Gated viewing and high-accuracy three-dimensional laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    , a high PRF of 32 kHz, and a high-speed camera with gate times down to 200 ps and delay steps down to 100 ps. The electronics and the software also allow for gated viewing with automatic gain control versus range, whereby foreground backscatter can be suppressed. We describe our technique for the rapid...

  13. High-accuracy measurement of ship velocities by DGPS; DGPS ni yoru sensoku keisoku no koseidoka ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1996-04-10

The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement by GPS positioning alone. Through two rounds of marine observations towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out both positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods are compared and the accuracy of the measured ship velocities is considered. In the DGPS measurement, both the translocation method and the interference positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in sea areas shallower than 350 m. As a result of these marine observations, it was confirmed that accuracy equivalent to that of direct measurement by the bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.

  14. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA and tabu search (TS is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.

  15. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Science.gov (United States)

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
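The GATS idea described in the two records above can be caricatured in a short, hypothetical sketch: a genetic algorithm over binary feature masks in which a crude prematurity index (few distinct fitness values in the population) triggers a tabu-restricted local search on the fittest individuals and heavier mutation on the rest. The KNN-based fitness, the simplified prematurity test, and every constant below are illustrative assumptions, not the authors' settings.

```python
# Toy GA feature selection with a tabu-search mutation step.
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = random.Random(0)
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a KNN on the selected features."""
    if not any(mask):
        return 0.0
    cols = [i for i, m in enumerate(mask) if m]
    return cross_val_score(KNeighborsClassifier(3), X[:, cols], y, cv=3).mean()

def flip(mask, i):
    m = list(mask); m[i] ^= 1; return tuple(m)

def tabu_search(mask, steps=4):
    """Greedy bit-flip search; each flipped bit becomes tabu."""
    tabu, best, best_f = set(), mask, fitness(mask)
    for _ in range(steps):
        moves = [i for i in range(len(mask)) if i not in tabu]
        i = max(moves, key=lambda j: fitness(flip(best, j)))
        tabu.add(i)
        cand = flip(best, i)
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best

pop = [tuple(rng.randint(0, 1) for _ in range(12)) for _ in range(8)]
for gen in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    # crude prematurity index: little diversity in fitness values
    premature = len({round(fitness(p), 3) for p in pop}) <= 3
    elite = scored[:2]
    if premature:
        elite = [tabu_search(e) for e in elite]   # refine the fittest
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = rng.sample(scored[:4], 2)
        cut = rng.randrange(1, 11)
        child = a[:cut] + b[cut:]                 # one-point crossover
        rate = 0.4 if premature else 0.1          # heavier mutation when premature
        child = tuple(g ^ (rng.random() < rate) for g in child)
        children.append(child)
    pop = elite + children
best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```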

  16. High construal level can help negotiators to reach integrative agreements: The role of information exchange and judgement accuracy.

    Science.gov (United States)

    Wening, Stefanie; Keith, Nina; Abele, Andrea E

    2016-06-01

    In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.

  17. Does Chicago Classification address Symptom Correlation with High-resolution Esophageal Manometry?

    Science.gov (United States)

    Jain, Mayank; Srinivas, Melpakkam; Bawane, Piyush; Venkataraman, Jayanthi

    2017-01-01

To assess the correlation of symptoms with findings on esophageal high-resolution manometry (HRM) in Indian patients, prospective data were collected on all patients undergoing esophageal manometry at two centers in India (Indore and Chennai) over a period of 18 months. The symptom profile of the study group was divided into four categories: motor dysphagia, noncardiac chest pain (NCCP), gastroesophageal reflux (GER), and esophageal belching. Symptoms were correlated with manometric findings. Of the 154 patients in the study group, 35.71% had a normal study, while major and minor peristaltic disorders were noted in 31.16% and 33.76% respectively. In patients with symptoms of dysphagia, achalasia cardia was the commonest cause (45.1%), followed by ineffective esophageal motility (IEM) (22.53%) and a normal study (19.71%). In patients with NCCP, normal peristalsis (50%) and ineffective motility (31.25%) were the main findings. Of the 56 patients with GER symptoms, 26 (46.4%) had normal manometry; an equal number had ineffective motility. Of the 11 esophageal belchers, 7 (63.6%) had a normal study and 3 had a major motility disorder. Dysphagia was the only symptom with a high likelihood ratio and positive predictive value for detecting a major motility disorder: in patients with motor dysphagia, it indicated a high chance of finding a major peristaltic abnormality. The role of manometry for the other symptoms in the Indian setting needs to be ascertained by larger studies; the present study highlights the lack of symptom correlation with manometry findings in Indian patients. How to cite this article: Jain M, Srinivas M, Bawane P, Venkataraman J. Does Chicago Classification address Symptom Correlation with High-resolution Esophageal Manometry? Euroasian J Hepato-Gastroenterol 2017;7(2):122-125.

  18. Neutrino mass from cosmology: impact of high-accuracy measurement of the Hubble constant

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, Toyokazu [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Ichikawa, Kazuhide [Department of Micro Engineering, Kyoto University, Kyoto 606-8501 (Japan); Takahashi, Tomo [Department of Physics, Saga University, Saga 840-8502 (Japan); Greenhill, Lincoln, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: kazuhide@me.kyoto-u.ac.jp, E-mail: tomot@cc.saga-u.ac.jp, E-mail: greenhill@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2010-03-01

Non-zero neutrino mass would affect the evolution of the Universe in observable ways, and a strong constraint on the mass can be achieved using combinations of cosmological data sets. We focus on the power spectrum of cosmic microwave background (CMB) anisotropies, the Hubble constant H0, and the length scale for baryon acoustic oscillations (BAO) to investigate the constraint on the neutrino mass, mν. We analyze data from multiple existing CMB studies (WMAP5, ACBAR, CBI, BOOMERANG, and QUAD), a recent measurement of H0 (SHOES) with about two times lower uncertainty (5%) than previous estimates, and recent treatments of BAO from the Sloan Digital Sky Survey (SDSS). We obtained an upper limit of mν < 0.2 eV (95% C.L.) for a flat ΛCDM model. This is a 40% reduction in the limit derived from previous H0 estimates and one-third lower than can be achieved with extant CMB and BAO data. We also analyze the impact of smaller uncertainty on measurements of H0, as may be anticipated in the near term, in combination with CMB data from the Planck mission and BAO data from the SDSS/BOSS program. We demonstrate the possibility of a 5σ detection for a fiducial neutrino mass of 0.1 eV, or a 95% upper limit of 0.04 eV for a fiducial of mν = 0 eV. These constraints are about 50% better than those achieved without the external constraint. We further investigate the impact on modeling where the dark-energy equation of state is constant but not necessarily -1, or where a non-flat universe is allowed. In these cases, the next-generation accuracies of Planck, BOSS, and a 1% measurement of H0 would all be required to obtain the limit mν < 0.05-0.06 eV (95% C.L.) for the fiducial of mν = 0 eV. The independence of systematics argues for pursuit of both BAO and H0 measurements.

  19. Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo

    2014-01-01

    Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced...... by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either...

  20. High Classification Rates for Continuous Cow Activity Recognition using Low-cost GPS Positioning Sensors and Standard Machine Learning Techniques

    DEFF Research Database (Denmark)

    Godsk, Torben; Kjærgaard, Mikkel Baun

    2011-01-01

    activities. By preprocessing the raw cow position data, we obtain high classification rates using standard machine learning techniques to recognize cow activities. Our objectives were to (i) determine to what degree it is possible to robustly recognize cow activities from GPS positioning data, using low...... and their activities manually logged to serve as ground truth. For our dataset we managed to obtain an average classification success rate of 86.2% of the four activities: eating/seeking (90.0%), walking (100%), lying (76.5%), and standing (75.8%) by optimizing both the preprocessing of the raw GPS data...
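The preprocessing idea in this record (raw GPS fixes turned into movement features before a standard classifier is applied) can be illustrated on synthetic tracks. The window size, the feature set (mean and spread of step speed, positional spread), and the random forest are all assumptions for the sketch, not the authors' exact pipeline:

```python
# Window raw GPS positions into movement features, then classify
# activity with a standard machine learning classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(track, size=10):
    """Split an (n, 2) position track into windows of `size` fixes and
    compute mean/std of step speed plus positional spread per window."""
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    feats = []
    for w in range(len(steps) // size):
        s = steps[w * size:(w + 1) * size]
        p = track[w * size:(w + 1) * size]
        feats.append([s.mean(), s.std(), p.std(axis=0).sum()])
    return np.array(feats)

# synthetic tracks: a "walking" animal drifts fast, a "lying" one barely moves
walking = np.cumsum(rng.normal(1.0, 0.2, (201, 2)), axis=0)
lying = np.cumsum(rng.normal(0.0, 0.05, (201, 2)), axis=0)
X = np.vstack([window_features(walking), window_features(lying)])
y = np.array([1] * 20 + [0] * 20)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```

Real cow data would of course need noise filtering and held-out evaluation against the manually logged ground truth described above.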

  1. Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment

    DEFF Research Database (Denmark)

    Andersen, Mikkel Skovgaard; Gergely, Áron; Al-Hamdani, Zyad K.

    2017-01-01

    of detecting features with a size of less than 1 m2. The derived high-resolution DEM was applied for detection and classification of geomorphometric and morphological features within the natural environment of the study area. Initially, the bathymetric position index (BPI) and the slope of the DEM were used...... area into six specific types of morphological features (i.e. subtidal channel, intertidal flat, intertidal creek, linear bar, swash bar and beach dune). The developed classification method is adapted and applied to a specific case, but it can also be implemented in other cases and environments....

  2. Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA (Final)

    Science.gov (United States)

    EPA announced the availability of the final report,Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA . This report and associated land use/land cover (LULC) coverage is the result o...

  3. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

With the increasingly widespread use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult with HRS images owing to their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of the visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  4. A content analysis of the quantity and accuracy of dietary supplement information found in magazines with high adolescent readership.

    Science.gov (United States)

    Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth

    2009-02-01

    The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.

  5. Interethnic differences in the accuracy of anthropometric indicators of obesity in screening for high risk of coronary heart disease

    Science.gov (United States)

    Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE

    2009-01-01

    Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
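The paper's accuracy analysis, an AUC for each anthropometric marker plus an optimal cut point, can be sketched as follows. The waist-to-hip-ratio distributions below are synthetic, and the equal-cost Youden rule is a simplification of the paper's misclassification-cost term:

```python
# ROC analysis for a screening marker: AUC plus the cut point that
# maximises Youden's J (equivalent to minimising equal-cost
# misclassification).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
high_risk = rng.normal(0.97, 0.05, 300)   # WHR, high-CHD-risk group
low_risk = rng.normal(0.88, 0.05, 700)    # WHR, low-risk group
scores = np.concatenate([high_risk, low_risk])
labels = np.concatenate([np.ones(300), np.zeros(700)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thr = roc_curve(labels, scores)
best_cut = thr[np.argmax(tpr - fpr)]      # Youden-optimal threshold
print(round(auc, 2), round(best_cut, 2))
```

Repeating this per ethnic/gender stratum, with the study's actual cost weighting, is what yields the group-specific cut points reported above.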

  6. High accuracy and precision micro injection moulding of thermoplastic elastomers micro ring production

    DEFF Research Database (Denmark)

    Calaon, Matteo; Tosello, Guido; Elsborg, René

    2016-01-01

The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility of developing a micro manufacturing platform for micro assembly injection moulding with real-time process..../product monitoring and metrology. The study represents a new concept, yet to be developed, with great potential for high-precision mass-manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance

  7. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
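The core FANS transformation can be sketched compactly: estimate each feature's class-conditional marginal densities on one half of the data, replace the feature by its estimated log density ratio, and fit a penalized logistic regression on the other half. The Gaussian KDE, the 50/50 split, and the L1 penalty below are illustrative choices standing in for the paper's estimators:

```python
# Minimal sketch of the FANS idea: marginal density-ratio feature
# augmentation followed by penalized logistic regression.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=5, random_state=0)
# FANS splits the data: one part estimates densities, the other fits the model
X_d, X_m, y_d, y_m = train_test_split(X, y, test_size=0.5, random_state=0)

def augment(Xd, yd, Xnew, eps=1e-3):
    """Replace each feature by its estimated marginal log density ratio."""
    cols = []
    for j in range(Xd.shape[1]):
        f1 = gaussian_kde(Xd[yd == 1, j])   # marginal density, class 1
        f0 = gaussian_kde(Xd[yd == 0, j])   # marginal density, class 0
        cols.append(np.log((f1(Xnew[:, j]) + eps) / (f0(Xnew[:, j]) + eps)))
    return np.column_stack(cols)

Z = augment(X_d, y_d, X_m)
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Z, y_m)
print(round(clf.score(Z, y_m), 3))
```

This mirrors the paper's motivation: the model is linear (globally simple) in features that are themselves nonparametric univariate classifiers (locally complex).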

  8. High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts

    DEFF Research Database (Denmark)

    Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard

    2011-01-01

    Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...

  9. Crop Type Classification Using Vegetation Indices of RapidEye Imagery

    Science.gov (United States)

    Ustuner, M.; Sanli, F. B.; Abdikan, S.; Esetlili, M. T.; Kurucu, Y.

    2014-09-01

Cutting-edge remote sensing technology plays a significant role in managing natural resources, as well as in many other earth-observation applications. Crop monitoring is one of these applications, since remote sensing provides accurate, up-to-date and cost-effective information about crop types at different temporal and spatial resolutions. In this study, the potential use of three different vegetation indices of RapidEye imagery for crop type classification, as well as the effect of each index on classification accuracy, was investigated. The Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Normalized Difference Red Edge Index (NDRE) are the three vegetation indices used in this study, since all of them incorporate the near-infrared (NIR) band. RapidEye imagery is in high demand and preferred for agricultural and forestry applications since it has red-edge and NIR bands. The study area is located in the Aegean region of Turkey. A Radial Basis Function (RBF) kernel was used for the Support Vector Machine (SVM) classification. The original bands of the RapidEye imagery were excluded and classification was performed with only the three vegetation indices. The contribution of each index to classification accuracy was also tested with single-band classification. The highest classification accuracy, 87.46%, was obtained using all three vegetation indices; this is higher than the accuracy of any two-index combination. Results demonstrate that NDRE contributes the most to classification accuracy compared to the other vegetation indices, and that RapidEye imagery can achieve satisfactory classification accuracy without the original bands.
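All three indices share the same normalized-difference form with the NIR band: NDVI = (NIR − Red)/(NIR + Red), GNDVI = (NIR − Green)/(NIR + Green), NDRE = (NIR − RedEdge)/(NIR + RedEdge). A minimal per-pixel implementation (the toy reflectance values are illustrative):

```python
# The three vegetation indices used in the study, computed per pixel
# from the relevant RapidEye bands (all incorporate the NIR band).
import numpy as np

def normalized_difference(nir, band):
    """Generic (NIR - band) / (NIR + band) index; works on arrays."""
    nir, band = np.asarray(nir, float), np.asarray(band, float)
    return (nir - band) / (nir + band)

def ndvi(nir, red):        return normalized_difference(nir, red)
def gndvi(nir, green):     return normalized_difference(nir, green)
def ndre(nir, red_edge):   return normalized_difference(nir, red_edge)

# toy reflectances for a single vegetated pixel
print(round(float(ndvi(0.45, 0.05)), 2),
      round(float(gndvi(0.45, 0.10)), 2),
      round(float(ndre(0.45, 0.20)), 2))   # prints: 0.8 0.64 0.38
```

Stacking these three per-pixel indices as a three-band image is exactly the input the study feeds to the RBF-kernel SVM in place of the original bands.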

  10. Museum genomics: low-cost and high-accuracy genetic data from historical specimens.

    Science.gov (United States)

    Rowe, Kevin C; Singhal, Sonal; Macmanes, Matthew D; Ayroles, Julien F; Morelli, Toni Lyn; Rubidge, Emily M; Bi, Ke; Moritz, Craig C

    2011-11-01

    Natural history collections are unparalleled repositories of geographical and temporal variation in faunal conditions. Molecular studies offer an opportunity to uncover much of this variation; however, genetic studies of historical museum specimens typically rely on extracting highly degraded and chemically modified DNA samples from skins, skulls or other dried samples. Despite this limitation, obtaining short fragments of DNA sequences using traditional PCR amplification of DNA has been the primary method for genetic study of historical specimens. Few laboratories have succeeded in obtaining genome-scale sequences from historical specimens and then only with considerable effort and cost. Here, we describe a low-cost approach using high-throughput next-generation sequencing to obtain reliable genome-scale sequence data from a traditionally preserved mammal skin and skull using a simple extraction protocol. We show that single-nucleotide polymorphisms (SNPs) from the genome sequences obtained independently from the skin and from the skull are highly repeatable compared to a reference genome. © 2011 Blackwell Publishing Ltd.

  11. Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system

    Science.gov (United States)

    Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.

    2017-11-01

    Currently, most climatic systems operate only at a stationary design point. At the same time, many modern industrial sites require constant or periodic changes in the technological process: roughly 80% of the time the site does not need the ventilation system at its design point, while high precision of the climatic parameters must still be maintained. When a climatic system serves several rooms in parallel but is not in constant use, balancing the duct system becomes a problem. To address this, an algorithm for quantity (air-flow) regulation requiring minimal changes to the system was developed. Dynamic duct system: a parallel control system for the air balance was developed that maintains high precision of the climatic parameters. The algorithm keeps the pressure in the main duct constant under varying air flows, so each terminal device has only one control parameter: the open area of its flap. The precision of regulation increases, and the climatic system maintains tight tolerances on temperature and humidity (0.5 °C for temperature, 5% for relative humidity). Result: the research was carried out in the CFD package PHOENICS. Air velocities and pressures in the duct were obtained for different operating modes, and an equation for the air-valve positions as a function of the required room climate parameters was derived. The energy-saving potential of the dynamic duct system was calculated for different types of rooms.

  12. High-accuracy alignment based on atmospherical dispersion - technological approaches and solutions for the dual-wavelength transmitter

    International Nuclear Information System (INIS)

    Burkhard, Boeckem

    1999-01-01

    In the course of the progressive development of sophisticated geodetic systems utilizing electromagnetic waves in the visible or near-IR range, a more detailed knowledge of the propagation medium and, concurrently, solutions to atmospherically induced limitations become important. An alignment system based on atmospheric dispersion, called a dispersometer, is a metrological solution to the atmospherically induced limitations in optical alignment and direction observations of high accuracy. In the dispersometer we use the dual-wavelength method for dispersive air to obtain refraction-compensated angle measurements, the detrimental impact of atmospheric turbulence notwithstanding. The dual-wavelength method utilizes atmospheric dispersion, i.e. the wavelength dependence of the refractive index. The difference angle between two light beams of different wavelengths, called the dispersion angle Δβ, is to first approximation proportional to the refraction angle: β_IR ≈ ν(β_blue − β_IR) = ν·Δβ. For the wavelengths used in the present dispersometer, this equation implies that the dispersion angle has to be measured at least 42 times more accurately than the desired accuracy of the refraction angle. This required accuracy constitutes one major difficulty for the instrumental performance in applying the dispersion effect. However, the dual-wavelength method can only be used successfully in an optimized transmitter-receiver combination. Beyond the above-mentioned resolution requirement for the detector, major difficulties in instrumental realization arise from the availability of a suitable dual-wavelength laser light source, laser light modulation with a very high extinction ratio, and coaxial emittance of mono-mode radiation at both wavelengths. Therefore, this paper focuses on the solutions embodied in the dual-wavelength transmitter, introducing a new hardware approach and a complete re-design of the conception proposed in [1].

  13. Land-Use and Land-Cover Mapping Using a Gradable Classification Method

    Directory of Open Access Journals (Sweden)

    Keigo Kitada

    2012-05-01

    Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs both the original spectral data and the lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as the "gradable classification method" is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from the original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
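
    Lacunarity is commonly estimated with a gliding-box algorithm: slide a box of size r over the grid, record the mass m (count of occupied cells) at each position, and take Λ(r) = ⟨m²⟩/⟨m⟩². A minimal sketch on toy binary grids (grids and box size illustrative):

```python
def lacunarity(grid, r):
    """Gliding-box lacunarity of a binary grid for box size r:
    Lambda(r) = <m^2> / <m>^2, where m is the mass (count of 1s) per box."""
    rows, cols = len(grid), len(grid[0])
    masses = [sum(grid[i + di][j + dj] for di in range(r) for dj in range(r))
              for i in range(rows - r + 1) for j in range(cols - r + 1)]
    mean = sum(masses) / len(masses)
    second = sum(m * m for m in masses) / len(masses)
    return second / (mean * mean)

# A clumped texture is "gappier" and yields higher lacunarity
# than a dispersed checkerboard texture:
clumped = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
uniform = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
print(lacunarity(clumped, 2), lacunarity(uniform, 2))
```

    Per-pixel lacunarity maps, as used in the study, repeat this computation in a window around each pixel.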

  14. Research on the Impacts of Different Grid Density DSM/DEM Orthorectification on Classification Accuracy

    Institute of Scientific and Technical Information of China (English)

    刘晓宏; 雷兵; 谭海; 郭建华

    2017-01-01

    DSM/DEM elevation data are used as auxiliary data to eliminate or limit terrain-induced deformation in orthorectification without ground control points. However, the grid density of the DSM/DEM affects subsequent processing, such as image classification, to different degrees. To investigate this, ChinaDSM 15 m DSM, ASTER GDEM 30 m DEM and SRTM 90 m DEM data were each used to orthorectify a ZY-3 image. The orthorectified images were then classified with support vector machines (SVM), and the classification accuracies were compared. The results show that, with the same resampling method, the classification accuracy after orthorectification with the ChinaDSM 15 m DSM is better than with the ASTER GDEM 30 m DEM or the SRTM 90 m DEM.

  15. Modelling and Control of Stepper Motors for High Accuracy Positioning Systems Used in Radioactive Environments

    CERN Document Server

    Picatoste Ruilope, Ricardo; Masi, Alessandro

    Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators in the Large Hadron Collider, the largest particle accelerator, at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and allowing maintenance to be foreseen in case of mechanical degradation. To make the above possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive, when the motor is separated from its drive by long cables. When the long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatl...
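
    The EKF recursion underlying such a sensorless drive can be sketched on a toy model. The two-state system below (an angle advancing at constant angular velocity, observed through a noisy nonlinear measurement) is purely illustrative; it is not the thesis's motor or transmission-line model:

```python
import math
import random

def ekf_step(x, P, z, dt, q, r):
    """One predict/update cycle of an Extended Kalman Filter for a toy
    two-state model x = [angle, angular velocity] with nonlinear
    measurement z = sin(angle) + noise (illustrative only)."""
    theta, omega = x
    # Predict with F = [[1, dt], [0, 1]] (constant angular velocity).
    theta_p = theta + omega * dt
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update: linearize h(x) = sin(theta) -> H = [cos(theta_p), 0].
    h = math.cos(theta_p)
    s = h * h * p00 + r                    # innovation variance
    k0, k1 = h * p00 / s, h * p10 / s      # Kalman gain
    innov = z - math.sin(theta_p)
    x_new = (theta_p + k0 * innov, omega + k1 * innov)
    P_new = [[(1 - k0 * h) * p00, (1 - k0 * h) * p01],
             [p10 - k1 * h * p00, p11 - k1 * h * p01]]
    return x_new, P_new

random.seed(1)
dt, true_omega = 0.01, 0.2
true_theta = 0.0
x, P = (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]]
for _ in range(600):
    true_theta += true_omega * dt
    z = math.sin(true_theta) + random.gauss(0.0, 0.05)
    x, P = ekf_step(x, P, z, dt, q=1e-5, r=0.05 ** 2)
print(x)  # angle estimate should approach true_theta = 1.2 rad
```

    A real sensorless drive replaces the toy state and measurement models with the motor's electrical equations (and, in the thesis, the cable dynamics), but the predict/update structure is the same.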

  16. The study of optimization on process parameters of high-accuracy computerized numerical control polishing

    Science.gov (United States)

    Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu

    2017-09-01

    Spherical lenses introduce spherical aberration and thus reduced optical performance. Consequently, in practice an optical system must apply a combination of spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses are widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature. Sub-aperture computer numerical control (CNC) polishing has therefore been adopted for aspherical surface fabrication in recent years. CNC polishing is normally accompanied by mid-spatial-frequency (MSF) error, and MSF surface texture decreases the optical performance of high-precision optical systems, especially in short-wavelength applications. Based on a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is needed, can efficiently reduce the MSF error. To verify the optimized polishing parameters, a correction polishing process was divided into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.

  17. High accuracy injection circuit for the calibration of a large pixel sensor matrix

    International Nuclear Information System (INIS)

    Quartieri, E.; Comotti, D.; Manghisoni, M.

    2013-01-01

    Semiconductor pixel detectors, for particle tracking and vertexing in high energy physics experiments as well as for X-ray imaging, in particular for synchrotron light sources and XFELs, require a large area sensor matrix. This work will discuss the design and the characterization of a high-linearity, low dispersion injection circuit to be used for pixel-level calibration of detector readout electronics in a large pixel sensor matrix. The circuit provides a useful tool for the characterization of the readout electronics of the pixel cell unit for both monolithic active pixel sensors and hybrid pixel detectors. In the latter case, the circuit allows for precise analogue test of the readout channel already at the chip level, when no sensor is connected. Moreover, it provides a simple means for calibration of readout electronics once the detector has been connected to the chip. Two injection techniques can be provided by the circuit: one for a charge sensitive amplification and the other for a transresistance readout channel. The aim of the paper is to describe the architecture and the design guidelines of the calibration circuit, which has been implemented in a 130 nm CMOS technology. Moreover, experimental results of the proposed injection circuit will be presented in terms of linearity and dispersion

  18. Accuracy of an automated system for tuberculosis detection on chest radiographs in high-risk screening.

    Science.gov (United States)

    Melendez, J; Hogeweg, L; Sánchez, C I; Philipsen, R H H M; Aldridge, R W; Hayward, A C; Abubakar, I; van Ginneken, B; Story, A

    2018-05-01

    Tuberculosis (TB) screening programmes can be optimised by reducing the number of chest radiographs (CXRs) requiring interpretation by human experts. To evaluate the performance of computerised detection software in triaging CXRs in a high-throughput digital mobile TB screening programme. A retrospective evaluation of the software was performed on a database of 38 961 postero-anterior CXRs from unique individuals seen between 2005 and 2010, 87 of whom were diagnosed with TB. The software generated a TB likelihood score for each CXR. This score was compared with a reference standard for notified active pulmonary TB using receiver operating characteristic (ROC) curve and localisation ROC (LROC) curve analyses. On ROC curve analysis, software specificity was 55.71% (95%CI 55.21-56.20) and negative predictive value was 99.98% (95%CI 99.95-99.99), at a sensitivity of 95%. The area under the ROC curve was 0.90 (95%CI 0.86-0.93). Results of the LROC curve analysis were similar. The software could identify more than half of the normal images in a TB screening setting while maintaining high sensitivity, and may therefore be used for triage.
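
    The triage figures follow from an ordinary confusion table. A minimal sketch with illustrative counts chosen to echo the reported rates (87 cases among 38 961 images at roughly 95% sensitivity), not the study's actual table:

```python
def triage_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and negative predictive value of a
    triage threshold applied to a screening population."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)   # safety of discarding software-"normal" reads
    return sensitivity, specificity, npv

# Illustrative counts: 87 TB cases among 38 961 images
tp, fn = 83, 4
tn, fp = 21_660, 17_214
sens, spec, npv = triage_metrics(tp, fn, tn, fp)
print(f"sensitivity={sens:.2%}  specificity={spec:.2%}  NPV={npv:.4%}")
```

    The very high NPV is what makes the triage safe: with such a low prevalence, almost every image the software calls normal really is normal.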

  19. A three axis turntable's online initial state measurement method based on the high-accuracy laser gyro SINS

    Science.gov (United States)

    Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu

    2016-10-01

    As indispensable equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). To ensure the calibration accuracy of the INS, the initial state of the turntable must be measured accurately. However, the traditional measuring method requires a lot of external equipment (level instrument, north seeker, autocollimator, etc.), and the test procedure is complex and inefficient. It is therefore relatively difficult for manufacturers of inertial measurement equipment to carry out self-inspection of the turntable. Owing to the high-precision attitude information provided by a laser-gyro strapdown inertial navigation system (SINS) after fine alignment, the SINS can serve as the attitude reference for the initial-state measurement of the three-axis turntable. Exploiting the principle that a fixed rotation-vector increment is not affected by the measuring point, the laser-gyro INS and the turntable encoder are used together to provide the attitude of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable is achieved.

  20. High-accuracy measurement and compensation of grating line-density error in a tiled-grating compressor

    Science.gov (United States)

    Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua

    2017-02-01

    The grating tiling technology is one of the most effective means to increase the aperture of the gratings. The line-density error (LDE) between sub-gratings will degrade the performance of the tiling gratings, high accuracy measurement and compensation of the LDE are of significance to improve the output pulses characteristics of the tiled-grating compressor. In this paper, the influence of LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, the output beams drift and output pulses broadening resulting from the LDE are presented. Based on the numerical results we propose a compensation method to reduce the degradations of the tiled grating compressor by applying angular tilt error and longitudinal piston error at the same time. Moreover, a monitoring system is setup to measure the LDE between sub-gratings accurately and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, and this would provide an efficient way to guide the adjustment of the tiling gratings.

  1. High resolution critical habitat mapping and classification of tidal freshwater wetlands in the ACE Basin

    Science.gov (United States)

    Strickland, Melissa Anne

    In collaboration with the South Carolina Department of Natural Resources ACE Basin National Estuarine Research Reserve (ACE Basin NERR), the tidal freshwater ecosystems along the South Edisto River in the ACE Basin are being accurately mapped and classified using a LIDAR-Remote Sensing Fusion technique that integrates LAS LIDAR data into texture images and then merges the elevation textures and multispectral imagery for very high resolution mapping. This project discusses the development and refinement of an ArcGIS Toolbox capable of automating protocols and procedures for marsh delineation and microhabitat identification. The result is a high resolution habitat and land use map used for the identification of threatened habitat. Tidal freshwater wetlands are also a critical habitat for colonial wading birds and an accurate assessment of community diversity and acreage of this habitat type in the ACE Basin will support SCDNR's conservation and protection efforts. The maps developed by this study will be used to better monitor the freshwater/saltwater interface and establish a baseline for an ACE NERR monitoring program to track the rates and extent of alterations due to projected environmental stressors. Preliminary ground-truthing in the field will provide information about the accuracy of the mapping tool.

  2. High accuracy velocity control method for the french moving-coil watt balance

    International Nuclear Information System (INIS)

    Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick

    2004-01-01

    We describe a novel method of velocity control dedicated to the French moving-coil watt balance. In this project, a coil has to move in a magnetic field at a velocity of 2 mm s⁻¹ with a relative uncertainty of 10⁻⁹ over 60 mm. Our method is based on the use of a heterodyne Michelson interferometer, a two-level translation stage, and a homemade high-frequency phase-shifting electronic circuit. To quantify the stability of the velocity, the output of the interferometer is sent to a frequency counter and the Doppler frequency shift is recorded. The Allan standard deviation has been used to calculate the stability, and a σ_y(τ) of about 2.2×10⁻⁹ over 400 s has been obtained.
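
    The Allan standard deviation used here is the standard two-sample estimator: average the data into bins of duration τ and take half the mean squared difference of adjacent bin averages. A minimal sketch on simulated white noise (not the experiment's data):

```python
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y at
    averaging factor m: sigma_y = sqrt(0.5 * <(ybar_{i+1} - ybar_i)^2>)."""
    bins = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(bins, bins[1:])]
    return (0.5 * sum(diffs) / len(diffs)) ** 0.5

random.seed(0)
white = [random.gauss(0.0, 1e-9) for _ in range(4096)]
# For white frequency noise, sigma_y falls as tau^(-1/2):
print(allan_deviation(white, 1), allan_deviation(white, 16))
```

    Plotting σ_y against τ on log-log axes, as is conventional, reveals which noise process dominates at each averaging time.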

  3. Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals

    Science.gov (United States)

    Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei

    2018-01-01

    Real-time and accurate measurements of intermediate-frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using a recursive scheme. By combining the discrete Fourier transform (DFT) with the Rife algorithm and Fourier-coefficient interpolation, this scheme requires no multiplications and only half the number of additions compared with conventional methods such as the DFT and the fast Fourier transform. Experimentally, at a sampling frequency of 10 MHz, real-time frequency measurements on intermediate-frequency narrowband signals have a mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and short calculation time.
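
    The Rife step refines a coarse DFT peak by interpolating between the peak bin and its larger neighbour. A minimal sketch of the textbook estimator (signal parameters illustrative; this is not the paper's optimized recursive implementation):

```python
import cmath
import math

def rife_frequency(x, fs):
    """Frequency estimate for a near-sinusoidal signal: DFT peak search,
    then Rife interpolation between the peak bin and its larger neighbour."""
    n = len(x)
    spec = [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]
    k = max(range(1, n // 2 - 1), key=lambda i: spec[i])
    if spec[k + 1] >= spec[k - 1]:
        delta = spec[k + 1] / (spec[k] + spec[k + 1])
    else:
        delta = -spec[k - 1] / (spec[k] + spec[k - 1])
    return (k + delta) * fs / n

fs, f0, n = 10_000.0, 1234.6, 256
sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
print(rife_frequency(sig, fs))  # close to 1234.6 Hz (bin width ~39 Hz)
```

    The interpolation recovers a fractional bin offset, so the estimate is far finer than the raw DFT bin spacing of fs/n.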

  4. A high-accuracy image registration algorithm using phase-only correlation for dental radiographs

    International Nuclear Information System (INIS)

    Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu

    2008-01-01

    Dental radiographs have been used for the accurate assessment and treatment of dental diseases. Nonlinear deformation between two dental radiographs may be observed even if they are taken from the same oral region of the same subject. For an accurate diagnosis, complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of the phase components of the two-dimensional (2D) discrete Fourier transforms of radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
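
    The POC function itself is compact: normalize the cross-power spectrum to unit magnitude so only phase differences survive, inverse-transform, and read the displacement off the correlation peak. A 1D sketch recovering a circular shift (the paper's registration is 2D, but the principle is identical):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a small demo)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def poc_shift(f, g):
    """Phase-Only Correlation: whiten the cross-power spectrum G * conj(F)
    to unit magnitude, inverse-transform, and take the peak as the shift."""
    F, G = dft(f), dft(g)
    r = []
    for a, b in zip(F, G):
        c = b * a.conjugate()
        r.append(c / abs(c) if abs(c) > 1e-12 else 0j)
    corr = dft(r, inverse=True)
    return max(range(len(corr)), key=lambda i: corr[i].real)

sig = [0.0] * 32
sig[5], sig[6], sig[7] = 1.0, 2.0, 1.0
shifted = sig[-3:] + sig[:-3]       # circular shift by +3 samples
print(poc_shift(sig, shifted))      # 3
```

    Because the magnitudes are discarded, the peak stays sharp even when the two images differ in contrast, which is what makes POC robust for registration.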

  5. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    Science.gov (United States)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  6. Accuracy of W' Recovery Kinetics in High Performance Cyclists - Modelling Intermittent Work Capacity.

    Science.gov (United States)

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-10-16

    With knowledge of an individual's critical power (CP) and W′, the SKIBA 2 model provides a framework with which to track the W′ balance during intermittent high-intensity work bouts. There are concerns that the time constant controlling the recovery rate of W′ (τ_W′) may require refinement for effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set relative to each athlete's CP (D_CP). Using the framework of the SKIBA 2 model, the τ_W′ values were modified for each protocol to achieve an accurate W′ at volitional exhaustion. The modified τ_W′ values were compared to the equivalent SKIBA 2 τ_W′ values to assess the difference in recovery rates for this population. Plotting the modified τ_W′ values against D_CP showed the adjusted relationship between work rate and recovery rate. Comparing the modified τ_W′ values against the SKIBA 2 τ_W′ values showed a negative bias of 112±46 s (mean±95% CL), suggesting that athletes recovered W′ faster than predicted by SKIBA 2 (p=0.0001). The modified τ_W′ to D_CP relationship was best described by a power function: τ_W′ = 2287.2·D_CP^(−0.688) (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists, as it underpredicts the recovery rate of W′. The modified τ_W′ equation presented will require validation, but appears more appropriate for high-performance athletes. Individual τ_W′ relationships may be necessary to maximise the model's validity.
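
    A minimal sketch of tracking W′ balance with the fitted power-law time constant. The per-step exponential recovery toward W₀ is a simplified integrating form consistent with the Skiba framework, not the exact SKIBA 2 formulation, and the CP/W′ values are illustrative:

```python
import math

def tau_w_prime(d_cp):
    """Fitted recovery time constant from the study:
    tau_W' = 2287.2 * D_CP^-0.688, with D_CP = CP - recovery power (W)."""
    return 2287.2 * d_cp ** -0.688

def w_prime_balance(powers, cp, w0, dt=1.0):
    """Track W' balance over a power series: deplete by (P - CP)*dt above CP,
    recover exponentially toward W0 below CP (simplified integrating form)."""
    bal = w0
    for p in powers:
        if p >= cp:
            bal -= (p - cp) * dt
        else:
            bal = w0 - (w0 - bal) * math.exp(-dt / tau_w_prime(cp - p))
    return bal

cp, w0 = 400.0, 20_000.0                 # illustrative CP (W) and W' (J)
effort = [500.0] * 60 + [250.0] * 120    # 1 min over CP, 2 min recovery
print(w_prime_balance(effort, cp, w0))   # 6 kJ depleted, then mostly recovered
```

    The deeper the recovery intensity sits below CP (larger D_CP), the smaller τ_W′ and the faster the modelled recovery, which is the relationship the study refits for elite athletes.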

  7. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.

  8. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    Science.gov (United States)

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database
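
    The Spearman rank correlation used for the peptide intensity-versus-detectability analysis is simply the Pearson correlation of the ranks. A minimal self-contained sketch (data illustrative):

```python
def rank(values):
    """Average 1-based ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

intensity = [3.1, 8.2, 5.5, 1.0, 9.9]    # illustrative peptide intensities
detect_p  = [0.30, 0.85, 0.60, 0.10, 0.99]
print(spearman(intensity, detect_p))     # 1.0: perfectly concordant ranks
```

    A correlation above 0.8, as reported for most of the proteome, means more intense peptides are almost always detected more reliably; negatively correlated proteins are the suspicious cases.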

  9. Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies

    International Nuclear Information System (INIS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer; Humboldt-Universitaet, Berlin

    2016-04-01

    We discuss the determination of the strong coupling α_MS(m_Z) or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme, s, and at some energy scale μ. The higher the scale μ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s∼0.2 is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  10. Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies

    Energy Technology Data Exchange (ETDEWEB)

    Dalla Brida, Mattia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Fritzsch, Patrick [Univ. Autonoma de Madrid (Spain). Inst. de Fisica Teorica UAM/CSIC; Korzec, Tomasz [Wuppertal Univ. (Germany). Dept. of Physics; Ramos, Alberto [CERN - European Organization for Nuclear Research, Geneva (Switzerland). Theory Div.; Sint, Stefan [Trinity College Dublin (Ireland). School of Mathematics; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Collaboration: ALPHA Collaboration

    2016-04-15

    We discuss the determination of the strong coupling α_MS(m_Z) or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme, s, and at some energy scale μ. The higher the scale μ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s∼0.2 is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  11. High-accuracy X-ray detector calibration based on cryogenic radiometry

    Science.gov (United States)

    Krumrey, M.; Cibik, L.; Müller, P.

    2010-06-01

    Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.

  12. High-accuracy X-ray detector calibration based on cryogenic radiometry

    International Nuclear Information System (INIS)

    Krumrey, M.; Cibik, L.; Mueller, P.

    2010-01-01

    Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.

  13. High-accuracy local positioning network for the alignment of the Mu2e experiment.

    Energy Technology Data Exchange (ETDEWEB)

    Hejdukova, Jana B. [Czech Technical Univ., Prague (Czech Republic)

    2017-06-01

    This Diploma thesis describes the establishment of a high-precision local positioning network and accelerator alignment for the Mu2e physics experiment. Establishing the new network consists of a few steps: design of the network, pre-analysis, installation works, measurement of the network and adjustment. Adjustments were performed using two approaches: a geodetic approach that takes the Earth's curvature into account, and a metrological approach that uses a pure 3D Cartesian system. The two approaches are compared and evaluated in the results, and the differences are checked against expectations. The effect of the Earth's curvature was found to be significant for this kind of network and should not be neglected. The measurements were obtained with an Absolute Tracker AT401, a Leica DNA03 leveling instrument and a DMT Gyromat 2000 gyrotheodolite. The coordinates of the points of the reference network were determined by the least-squares method, and the overall view is attached as Annexes.

  14. Diagnostic accuracy for X-ray chest in interstitial lung disease as confirmed by high resolution computed tomography (HRCT) chest

    International Nuclear Information System (INIS)

    Afzal, F.; Raza, S.; Shafique, M.

    2017-01-01

    Objective: To determine the diagnostic accuracy of x-ray chest in interstitial lung disease as confirmed by high resolution computed tomography (HRCT) chest. Study Design: A cross-sectional validational study. Place and Duration of Study: Department of Diagnostic Radiology, Combined Military Hospital Rawalpindi, from Oct 2013 to Apr 2014. Material and Method: A total of 137 patients with clinical suspicion of interstitial lung disease (ILD) aged 20-50 years of both genders were included in the study. Patients with h/o previous histopathological diagnosis, already taking treatment and pregnant females were excluded. All the patients had chest x-ray and then HRCT. The x-ray and HRCT findings were recorded as presence or absence of the ILD. Results: Mean age was 40.21 ± 4.29 years. Out of 137 patients, 79 (57.66 percent) were males and 58 (42.34 percent) were females with male to female ratio of 1.36:1. Chest x-ray detected ILD in 80 (58.39 percent) patients, out of which, 72 (true positive) had ILD and 8 (false positive) had no ILD on HRCT. Overall sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of chest x-ray in diagnosing ILD was 80.0 percent, 82.98 percent, 90.0 percent, 68.42 percent and 81.02 percent respectively. Conclusion: This study concluded that chest x-ray is simple, non-invasive, economical and readily available alternative to HRCT with an acceptable diagnostic accuracy of 81 percent in the diagnosis of ILD. (author)

  15. Social power and recognition of emotional prosody: High power is associated with lower recognition accuracy than low power.

    Science.gov (United States)

    Uskul, Ayse K; Paulmann, Silke; Weick, Mario

    2016-02-01

    Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.

  16. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a "locality representation" method because it uses only the training samples of each class to represent the testing sample, so it cannot embody the effectiveness of "globality representation." Conversely, the CRC method does not enjoy the locality benefit of the general RBCM. We therefore propose to integrate CRC and LRC to perform more robust representation based classification. Experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
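    As a concrete illustration (not code from the paper itself), the CRC step can be sketched in a few lines: code the test sample over all training samples with an ℓ2-regularized least-squares fit, then assign the class whose own coefficients leave the smallest reconstruction residual. The regularization value λ below is an arbitrary illustrative choice.

    ```python
    import numpy as np

    def crc_classify(A, labels, y, lam=0.01):
        # A: (d, n) matrix whose columns are training samples; y: (d,) test sample.
        # Collaborative representation: ridge-regularized coding over ALL classes.
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        classes = np.unique(labels)
        residuals = []
        for c in classes:
            mask = labels == c
            # Residual using only the coefficients belonging to class c.
            residuals.append(np.linalg.norm(y - A[:, mask] @ x[mask]))
        return classes[int(np.argmin(residuals))]
    ```

    Replacing the single global fit by one per-class least-squares fit turns the same residual rule into LRC, which is why the two methods combine naturally.
    
    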

  17. Solubility behavior and biopharmaceutical classification of novel high-solubility ciprofloxacin and norfloxacin pharmaceutical derivatives.

    Science.gov (United States)

    Breda, Susana A; Jimenez-Kairuz, Alvaro F; Manzo, Ruben H; Olivera, María E

    2009-04-17

    The hydrochlorides of the 1:3 aluminum:norfloxacin and aluminum:ciprofloxacin complexes were characterized according to the Biopharmaceutics Classification System (BCS) premises in comparison with their parent compounds. The pH-solubility profiles of the complexes were experimentally determined at 25 and 37 degrees C in the range of pH 1-8 and compared to those of uncomplexed norfloxacin and ciprofloxacin. Both complexes are clearly more soluble than the antibiotics themselves, even at the pHs of lowest solubility. The increase in solubility was ascribed to the species controlling solubility, which were analyzed in the solid phases at equilibrium at selected pHs. Additionally, permeability was set as low, based on data reported in the scientific literature regarding oral bioavailability, intestinal and cell culture permeabilities, and also considering the influence of stoichiometric amounts of aluminum. The complexes fulfill the BCS criterion to be classified as class 3 compounds (high solubility/low permeability). In contrast, the active pharmaceutical ingredients (APIs) currently used in solid dosage forms, norfloxacin and ciprofloxacin hydrochloride, proved to be BCS class 4 (low solubility/low permeability). The solubility improvement makes the complexes potential biowaiver candidates from the scientific point of view and may be a good route toward more dose-efficient formulations. An immediate release tablet showing very rapid dissolution was obtained. Its dissolution profile was compared to that of commercial ciprofloxacin hydrochloride tablets, allowing dissolution of the complete dose at a critical pH such as 6.8.

  18. [The physiological classification of human thermal states under high environmental temperatures].

    Science.gov (United States)

    Bobrov, A F; Kuznets, E I

    1995-01-01

    The paper deals with the physiological classification of human thermal states in a hot environment. The basic systems of classification of thermal states are reviewed and their main drawbacks discussed. On the basis of research on human functional state over a broad range of environmental temperatures, a system for evaluating and classifying human thermal states is proposed. New integral one-dimensional multi-parametric evaluation criteria are used, developed with methods of factor, cluster and canonical correlation analysis. Stochastic nomograms are given that can identify the human thermal state for different intensities of exposure; in this case, intensity is estimated according to one-dimensional criteria taking into account environmental temperature, physical load and the time spent under overheating conditions.

  19. Customer and performance rating in QFD using SVM classification

    Science.gov (United States)

    Dzulkifli, Syarizul Amri; Salleh, Mohd Najib Mohd; Leman, A. M.

    2017-09-01

    In a classification problem, each input is associated with one output, and training data are used to create a model which approximates the true function. SVM is a popular method for binary classification due to its theoretical foundation and good generalization performance. However, when trained with noisy data, the decision hyperplane may deviate from the optimal position because of the sum of misclassification errors in the objective function. In this paper, we introduce a fuzzy weighted learning approach for improving the accuracy of Support Vector Machine (SVM) classification. The main aim of this work is to determine appropriate weights for the SVM, adjusting the parameters of the learning method from a given set of noisy input-output data. Performance and customer rating in Quality Function Deployment (QFD) is used as our case study, showing that the fuzzy SVM is highly scalable to very large data sets and generates high classification accuracy.
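    The fuzzy-weighting idea can be illustrated (in a simplified, linear form that is not the authors' exact formulation) by giving each training sample a membership weight that scales its hinge-loss contribution, so suspected noisy points pull less on the decision hyperplane. The membership heuristic (distance to the class centroid) and all hyperparameters below are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_membership(X, y):
        # Heuristic membership in (0, 1]: samples far from their class centroid
        # are treated as likely noise and receive lower weight.
        w = np.empty(len(y), dtype=float)
        for c in np.unique(y):
            d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
            w[y == c] = 1.0 - d / (d.max() + 1e-12)
        return np.clip(w, 0.1, 1.0)

    def weighted_svm_train(X, y, w, lam=0.01, lr=0.1, epochs=300):
        # Linear soft-margin SVM trained by subgradient descent on
        #   lam/2 * ||theta||^2 + (1/n) * sum_i w_i * max(0, 1 - y_i (x_i . theta + b))
        # with labels y in {-1, +1} and per-sample fuzzy weights w.
        n, d = X.shape
        theta, b = np.zeros(d), 0.0
        for _ in range(epochs):
            active = y * (X @ theta + b) < 1  # margin violators
            g_theta = lam * theta - (w[active, None] * y[active, None] * X[active]).sum(axis=0) / n
            g_b = -(w[active] * y[active]).sum() / n
            theta -= lr * g_theta
            b -= lr * g_b
        return theta, b
    ```

    A test point is then classified by sign(x·theta + b); with all weights equal to 1 this reduces to an ordinary linear SVM.
    
    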

  20. Comparison of Aerosol Classification From Airborne High Spectral Resolution Lidar and the CALIPSO Vertical Feature Mask

    Science.gov (United States)

    Burton, Sharon P.; Ferrare, Rich A.; Omar, Ali H.; Vaughan, Mark A.; Rogers, Raymond R.; Hostetler, Chris A.; Hair, Johnathan W.; Obland, Michael D.; Butler, Carolyn F.; Cook, Anthony L.

    2012-01-01

    Knowledge of aerosol composition and vertical distribution is crucial for assessing the impact of aerosols on climate. In addition, aerosol classification is a key input to CALIOP aerosol retrievals, since CALIOP requires an inference of the lidar ratio in order to estimate the effects of aerosol extinction and backscattering. In contrast, the NASA airborne HSRL-1 directly measures both aerosol extinction and backscatter, and therefore the lidar ratio (extinction-to-backscatter ratio). Four aerosol intensive properties from HSRL-1 are combined to infer aerosol type. Aerosol classification results from HSRL-1 are used here to validate the CALIOP aerosol type inferences.

  1. A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care.

    Science.gov (United States)

    Koopman, Richelle J; Kochendorfer, Karl M; Moore, Joi L; Mehr, David R; Wakefield, Douglas S; Yadamsuren, Borchuluun; Coberly, Jared S; Kruse, Robin L; Wakefield, Bonnie J; Belden, Jeffery L

    2011-01-01

    We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. We performed a usability study, including a quantitative time study and a qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and "think-aloud" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard (P < .001); physicians also needed fewer mouse clicks and made fewer errors with the dashboard (P < .001). A diabetes dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies.

  2. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    Science.gov (United States)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    Inertial navigation systems have been the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on the three axes and thus improve system accuracy. However, errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot fulfill the requirements of high-accuracy calibration for a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying out the self-calibration. The results of a self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  3. Enhancing the Classification Accuracy of IP Geolocation

    Science.gov (United States)

    2013-10-01

    accurately identify the geographic location of Internet devices has significant implications for online advertisers, application developers, network... Real Media, Comedy Central, Netflix and Spotify) and targeted advertising (e.g., Google). More recently, IP geolocation techniques have been deployed... distance-to-delay function and how they triangulate the position of the target. Statistical Geolocation [14] develops a joint probability density

  4. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network can observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an agreement of 9 meters between the accurate GPS position of a reference object and the position obtained by visual triangulation. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
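    The core of such a visual triangulation can be sketched as the intersection of two 2D bearing rays in a local map frame (a simplified version of the method; real processing must also handle camera calibration, elevation and timing). The geometry below is a hypothetical illustration, not RAMMER calibration data.

    ```python
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        # Intersect two bearing rays p_i + t_i * d_i in a local 2D map frame.
        # p1, p2: camera positions; d1, d2: unit direction vectors toward the flash.
        # Solves t1*d1 - t2*d2 = p2 - p1 for the ray parameters.
        A = np.column_stack([np.asarray(d1, float), -np.asarray(d2, float)])
        t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t1 * np.asarray(d1, float)

    # Two cameras 10 km apart, each sighting the same flash at 45 degrees:
    flash = triangulate([0.0, 0.0], [np.cos(np.pi / 4), np.sin(np.pi / 4)],
                        [10.0, 0.0], [-np.cos(np.pi / 4), np.sin(np.pi / 4)])
    ```

    Positions obtained this way for each return stroke can then be compared point by point against the LLS solutions.
    
    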

  5. ROOF TYPE SELECTION BASED ON PATCH-BASED CLASSIFICATION USING DEEP LEARNING FOR HIGH RESOLUTION SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2017-05-01

    3D building reconstruction from satellite remote sensing imagery is still an active research topic and very valuable for 3D city modelling. The roof model is the most important component for reconstructing a building at Level of Detail 2 (LoD2) in 3D modelling. While the general solution for roof modelling relies on detailed cues (such as lines, corners and planes) extracted from a Digital Surface Model (DSM), correct detection of the roof type and its modelling can fail due to the low quality of DSMs generated by dense stereo matching. To reduce the dependence of roof modelling on DSMs, pansharpened satellite images are used in addition as a rich source of information. In this paper, two strategies are employed for roof type classification. In the first, building roof types are classified in a state-of-the-art supervised pre-trained convolutional neural network (CNN) framework. In the second, deep features are extracted from deep layers of different pre-trained CNN models and an SVM with an RBF kernel is employed to classify the building roof type. Based on the roof complexity of the scene, a roof library including seven types of roofs is defined. A new semi-automatic method is proposed to generate training and test patches for each roof type in the library. Using the pre-trained CNN model not only decreases the computation time for training significantly but also increases the classification accuracy.

  6. Assessment of high precision, high accuracy Inductively Coupled Plasma-Optical Emission Spectroscopy to obtain concentration uncertainties less than 0.2% with variable matrix concentrations

    International Nuclear Information System (INIS)

    Rabb, Savelas A.; Olesik, John W.

    2008-01-01

    The ability to obtain high precision, high accuracy measurements in samples with complex matrices using High Performance I