WorldWideScience

Sample records for combining robust feature

  1. Robust object tracking combining color and scale invariant features

    Science.gov (United States)

    Zhang, Shengping; Yao, Hongxun; Gao, Peipei

    2010-07-01

    Object tracking plays a very important role in many computer vision applications, but its performance deteriorates significantly under the challenges of complex scenes, such as pose and illumination changes and cluttered backgrounds. In this paper, we propose a robust object tracking algorithm that exploits both global color and local scale-invariant (SIFT) features in a particle filter framework. Because SIFT features are computationally expensive, the proposed tracker adopts a faster variant of SIFT, SURF, to extract local features. Specifically, the proposed method first finds matching points between the target model and the target candidate; the scale-invariant weight of each particle is then computed as the proportion of that particle's matching points among the matching points of all particles; finally, the weight of the particle is obtained by combining the color and SURF weights in a probabilistic way. Experimental results on a variety of challenging videos verify that the proposed method is robust to pose and illumination changes and is significantly superior to the standard particle filter tracker and the mean shift tracker.
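The weight-combination step described above can be sketched as follows (a minimal illustration; the function name, the multiplicative combination rule, and the renormalization are assumptions, since the abstract does not give the exact formula):

```python
import numpy as np

def combine_weights(color_w, match_counts):
    """Hypothetical sketch of the paper's weighting scheme:
    each particle's SURF weight is its share of all matched keypoints;
    the final weight multiplies color and SURF weights and renormalizes."""
    surf_w = match_counts / match_counts.sum()  # proportion of matches per particle
    w = color_w * surf_w                        # probabilistic (product) combination
    return w / w.sum()                          # renormalize to a distribution
```

For example, two particles with equal color weights but a 3:1 split of matched keypoints end up weighted 3:1 overall.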

  2. Research on improving image recognition robustness by combining multiple features with associative memory

    Science.gov (United States)

    Guo, Dongwei; Wang, Zhe

    2018-05-01

    Convolutional neural networks (CNNs) have achieved great success in computer vision: they learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easy to fool, in that it is possible to produce images totally unrecognizable to human eyes that a CNN believes with near certainty to be familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE, and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the features extracted by t-SNE are associated by the EBAM, so that recognition robustness is ensured by a comprehensive assessment of the two feature types. With this model we obtain only an 8% error rate on fraudulent data. In systems that require a high safety factor, or in other critical areas, strong robustness is extremely important: if image recognition robustness can be ensured, network security will be greatly improved and production efficiency greatly enhanced.

  3. Biometric feature embedding using robust steganography technique

    Science.gov (United States)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques for hiding and communicating biometric data in mobile media objects, such as images, over open networks. More specifically, the aim is to embed, as a secret message in an image, binarised features extracted from face images using discrete wavelet transforms and local binary patterns. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking, and border control. What differentiates this problem from normal information hiding is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image's LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect stego quality, it eliminates the weakness of traditional LSB schemes that steganalysis techniques such as PoV and RS exploit to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine the capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.

  4. Radiometric Normalization of Temporal Images Combining Automatic Detection of Pseudo-Invariant Features from the Distance and Similarity Spectral Measures, Density Scatterplot Analysis, and Robust Regression

    Directory of Open Access Journals (Sweden)

    Ana Paula Ferreira de Carvalho

    2013-05-01

    Full Text Available Radiometric precision is difficult to maintain in orbital images due to several factors (atmospheric conditions, Earth-sun distance, detector calibration, illumination, and viewing angles). These unwanted effects must be removed for radiometric consistency among temporal images, leaving only land-leaving radiances, for optimum change detection. A variety of relative radiometric correction techniques have been developed for the correction or rectification of images of the same area through the use of reference targets whose reflectance does not change significantly with time, i.e., pseudo-invariant features (PIFs). This paper proposes a new technique for radiometric normalization, which uses three sequential methods for accurate PIF selection: spectral measures of temporal data (spectral distance and similarity), density scatter plot analysis (ridge method), and robust regression. The spectral measures used are the spectral angle (Spectral Angle Mapper, SAM), spectral correlation (Spectral Correlation Mapper, SCM), and Euclidean distance. The spectral measures between the spectra at times t1 and t2 are calculated for each pixel. After classification using threshold values, it is possible to define points with the same spectral behavior, including PIFs. The distance and similarity measures are complementary and can be calculated together. The ridge method uses a density plot generated from images acquired on different dates for the selection of PIFs. In a density plot, the invariant pixels together form a high-density ridge, while variant pixels (clouds and land cover changes) are spread out with low density, facilitating their exclusion. Finally, the selected PIFs are subjected to a robust regression (M-estimate) between pairs of temporal bands for the detection and elimination of outliers, and to obtain the optimal linear equation for a given set of target points. Robust regression is insensitive to outliers, i.e., observations that appear to deviate
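The two per-pixel spectral measures named above have standard definitions, sketched here (variable names are illustrative; `s1` and `s2` are a pixel's spectra at times t1 and t2):

```python
import numpy as np

def sam(s1, s2):
    # Spectral Angle Mapper: angle (radians) between the two spectra
    c = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))  # clip guards rounding

def scm(s1, s2):
    # Spectral Correlation Mapper: Pearson correlation of the two spectra
    a, b = s1 - s1.mean(), s2 - s2.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Pixels whose SAM is near zero (and SCM near one) across dates behave as candidates for pseudo-invariant features; the Euclidean distance complements these angle/correlation measures with a magnitude check.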

  5. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart of normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image-processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.
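The invariant-centroid step can be illustrated with an intensity-weighted centroid (a minimal sketch under the assumption that the centroid is intensity-weighted; the paper's exact algorithm and the subsequent Radon step are not reproduced here):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a grayscale image: a geometric
    anchor that moves with the content, making the circular hash
    region around it robust to translation."""
    ys, xs = np.indices(img.shape)
    m = img.sum()
    return (float((ys * img).sum() / m), float((xs * img).sum() / m))
```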

  6. The analysis of image feature robustness using CometCloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, local binary patterns, and textons. Because each transformation and texture descriptor is independent, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary patterns and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.

  7. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphological operations to reduce a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 overall samples, indicating that the proposed recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments.
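The adaptive-threshold step can be illustrated with Otsu's method (an assumption for illustration only; the abstract does not name the specific adaptive algorithm used):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: pick the threshold that maximizes the
    between-class variance of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0       # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2               # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Applied to a fused feature image, the chosen threshold separates fruit pixels from background before the morphological cleanup step.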

  8. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck for robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphological operations to reduce a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 overall samples, indicating that the proposed recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments.

  9. Robust Features Of Surface Electromyography Signal

    International Nuclear Information System (INIS)

    Sabri, M I; Miskon, M F; Yaacob, M R

    2013-01-01

    Nowadays, the application of robotics in human life has been explored widely. Robotic exoskeleton systems are one of the most active areas of recent robotics research and have a significant impact on human life. These systems have been developed for human power augmentation, robotic rehabilitation, human power assist, and haptic interaction in virtual reality. This paper focuses on the challenges of using neural signals to extract human intent. Commonly, the surface electromyography (sEMG) signal is used to infer human intent for exoskeleton robot applications. The problem lies in the difficulty of pattern recognition of sEMG features due to high noise, including electrode and cable motion artifacts, electrode noise, dermal noise, alternating-current power line interference, and other noise from the electronic instruments. The main objective of this paper is to study the best features of electromyography in the time domain (statistical analysis) and the frequency domain (fast Fourier transform). The secondary objective is to map the relationship between torque and the best features of muscle unit activation potential (MaxPS and RMS) of the biceps brachii. The project uses primary data from 2 male subjects with the same dominant hand (right-handed), aged between 20 and 27 years old, with muscle diameters of 32 cm to 35 cm, using a single muscle channel (biceps brachii). The experiment consisted of 2 repetitions of a contraction-relaxation task of the biceps brachii when lifting loads from no load to 3 kg in 1 kg increments. The results show that the fast Fourier transform maximum power spectrum (MaxPS) has less error relative to the mean reading than the root mean square (RMS) value, and that MaxPS shows a linear relationship with the torque experienced by the elbow joint when lifting different loads. In conclusion, the best feature is MaxPS because it has the lowest error among the features considered and
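The two features compared above, MaxPS and RMS, have straightforward definitions, sketched here for a single sEMG window (the windowing scheme is an assumption; only the feature formulas are standard):

```python
import numpy as np

def max_power_spectrum(x):
    # MaxPS: peak of the FFT power spectrum of the sEMG window
    ps = np.abs(np.fft.rfft(x)) ** 2
    return float(ps.max())

def rms(x):
    # RMS amplitude of the same window (time-domain feature)
    return float(np.sqrt(np.mean(np.square(x))))
```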

  10. Robust Features Of Surface Electromyography Signal

    Science.gov (United States)

    Sabri, M. I.; Miskon, M. F.; Yaacob, M. R.

    2013-12-01

    Nowadays, the application of robotics in human life has been explored widely. Robotic exoskeleton systems are one of the most active areas of recent robotics research and have a significant impact on human life. These systems have been developed for human power augmentation, robotic rehabilitation, human power assist, and haptic interaction in virtual reality. This paper focuses on the challenges of using neural signals to extract human intent. Commonly, the surface electromyography (sEMG) signal is used to infer human intent for exoskeleton robot applications. The problem lies in the difficulty of pattern recognition of sEMG features due to high noise, including electrode and cable motion artifacts, electrode noise, dermal noise, alternating-current power line interference, and other noise from the electronic instruments. The main objective of this paper is to study the best features of electromyography in the time domain (statistical analysis) and the frequency domain (fast Fourier transform). The secondary objective is to map the relationship between torque and the best features of muscle unit activation potential (MaxPS and RMS) of the biceps brachii. The project uses primary data from 2 male subjects with the same dominant hand (right-handed), aged between 20 and 27 years old, with muscle diameters of 32 cm to 35 cm, using a single muscle channel (biceps brachii). The experiment consisted of 2 repetitions of a contraction-relaxation task of the biceps brachii when lifting loads from no load to 3 kg in 1 kg increments. The results show that the fast Fourier transform maximum power spectrum (MaxPS) has less error relative to the mean reading than the root mean square (RMS) value, and that MaxPS shows a linear relationship with the torque experienced by the elbow joint when lifting different loads. In conclusion, the best feature is MaxPS because it has the lowest error among the features considered and shows

  11. Robust emotion recognition using spectral and prosodic features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    In this brief, the authors discuss recently explored spectral (sub-segmental and pitch synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidences obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. Proposed spectral and prosodic features are evaluated on real life emotional speech corpus.

  12. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gesture and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  13. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results. However, face feature extraction based on sparse representation is too simple, and the resulting coefficients are not truly sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved Gabor feature overcomes the problem of high vector dimensionality, reduces computation and storage cost, and enhances the robustness of the algorithm to environmental changes. Since the classification efficiency of sparse representation is determined by the collaborative representation, we simplify the L1-norm sparsity constraint to a least-squares constraint, which keeps the coefficients well behaved and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression, and pose variations in face recognition, and the recognition rate of the algorithm is improved.
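The least-squares relaxation described above is the core of collaborative representation classification: solve a ridge-regularized coding problem in closed form, then assign the class with the smallest reconstruction residual. A generic sketch (not the paper's exact implementation; `lam` and the dictionary layout are assumptions):

```python
import numpy as np

def crc_classify(A, labels, y, lam=0.01):
    """Collaborative representation classifier.
    A: dictionary whose columns are training features, labels: per-column
    class labels, y: test feature vector."""
    n = A.shape[1]
    # closed-form ridge solution replaces iterative L1 sparse coding
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    classes = sorted(set(labels))
    res = []
    for c in classes:
        mask = np.array([l == c for l in labels])
        # residual using only this class's columns and coefficients
        res.append(np.linalg.norm(y - A[:, mask] @ alpha[mask]))
    return classes[int(np.argmin(res))]
```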

  14. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a candidate-region detection and segmentation method for the license plate is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increases to 95.7% and the processing time decreases to 51.4 ms.

  15. Improving scale invariant feature transform-based descriptors with shape-color alliance robust feature

    Science.gov (United States)

    Wang, Rui; Zhu, Zhengdan; Zhang, Liang

    2015-05-01

    Constructing appropriate descriptors for interest points in image matching is a critical task in computer vision and pattern recognition. A method called the shape-color alliance robust feature (SCARF) descriptor, an extension of the scale invariant feature transform (SIFT) descriptor, is presented. To address the problems that SIFT is designed mainly for gray images and that its feature points lack global information, the proposed approach improves the SIFT descriptor by means of a concentric-rings model, and integrates the color invariant space and shape context with SIFT to construct the SCARF descriptor. The SCARF method is more robust than conventional SIFT with respect not only to color and photometric variations but also to measuring similarity as a global variation between two shapes. A comparative evaluation of different descriptors shows that the SCARF approach provides better results than four other state-of-the-art methods.

  16. Robust modal curvature features for identifying multiple damage in beams

    Science.gov (United States)

    Ostachowicz, Wiesław; Xu, Wei; Bai, Runbo; Radzieński, Maciej; Cao, Maosen

    2014-03-01

    Curvature mode shapes are an effective feature for damage detection in beams. However, they are susceptible to measurement noise, which easily impairs their advantage of sensitivity to damage. To address this deficiency, this study formulates an improved curvature mode shape for multiple damage detection in beams by integrating a wavelet transform (WT) and a Teager energy operator (TEO). The improved curvature mode shape, termed the WT-TEO curvature mode shape, has inherent immunity to noise and sensitivity to damage. The proposed method is experimentally validated by identifying multiple cracks in cantilever steel beams with mode shapes acquired using a scanning laser vibrometer. The results demonstrate that the improved curvature mode shape can identify multiple damage sites accurately and reliably, and is fairly robust to measurement noise.
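The discrete Teager energy operator used above has a standard three-point form, sketched here (the wavelet-transform stage the paper combines it with is omitted):

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1].
    Amplifies local irregularities (e.g., damage-induced kinks in a
    curvature mode shape) while staying small on smooth trends."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```

For a pure sampled sinusoid the operator returns an almost constant value, so isolated spikes in its output flag localized damage.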

  17. Estimating nonrigid motion from inconsistent intensity with robust shape features

    International Nuclear Information System (INIS)

    Liu, Wenyang; Ruan, Dan

    2013-01-01

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies among image pairs or image sequences. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subjected to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided

  18. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints onto a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detection and feature description on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match count in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  19. Robust Automatic Speech Recognition Features using Complex Wavelet Packet Transform Coefficients

    Directory of Open Access Journals (Sweden)

    TjongWan Sen

    2009-11-01

    Full Text Available To improve the performance of phoneme-based Automatic Speech Recognition (ASR) in noisy environments, we developed a new technique that adds robustness to clean phoneme features. These robust features are obtained from Complex Wavelet Packet Transform (CWPT) coefficients. Since the CWPT coefficients represent all the different frequency bands of the input signal, decomposing the input signal into a complete CWPT tree also covers all frequencies involved in the recognition process. For time-overlapping signals with different frequency contents, e.g., a phoneme signal with noise, the CWPT coefficients are the combination of the CWPT coefficients of the phoneme signal and those of the noise, and the phoneme coefficients change according to the frequency components contained in the noise. Since the number of phonemes in any language is relatively small (limited) and already well known, one can easily derive principal component vectors from a clean training dataset using Principal Component Analysis (PCA). These principal component vectors can then be used to add robustness and minimize noise effects in the testing phase. Simulation results, using Alpha Numeric 4 (AN4) from Carnegie Mellon University and NOISEX-92 examples from Rice University, showed that this new technique can serve as a feature extractor that improves the robustness of phoneme-based ASR systems in various adverse noisy conditions while preserving performance in clean environments.
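The PCA step described above can be sketched as projecting a noisy feature vector onto the top-k principal directions learned from clean training features (a generic illustration; the function name, data layout, and choice of k are assumptions, and the CWPT front end is not reproduced):

```python
import numpy as np

def pca_denoise(X, test_vec, k):
    """Project a noisy feature vector onto the top-k principal
    components learned from clean training features.
    X: (n_samples, n_features) clean training matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:k]                          # top-k principal directions
    # reconstruct: components outside the clean subspace are discarded
    return mu + P.T @ (P @ (test_vec - mu))
```

Noise energy lying outside the subspace spanned by the clean phoneme features is removed by the projection.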

  20. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and in general the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistic combiners. Experimental results on both real-world data and standard public domain data sets corroborate these findings.
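The simple order-statistic combiners discussed above (median, maximum, and a trimmed mean of the ordered outputs) can be sketched as follows (a generic illustration, not the authors' code; the `trim` fraction is an assumed parameter):

```python
import numpy as np

def os_combine(outputs, stat="med", trim=0.2):
    """Combine per-class scores from several classifiers via order statistics.
    outputs: (n_classifiers, n_classes) array of class scores."""
    s = np.sort(outputs, axis=0)        # order the classifier outputs per class
    if stat == "med":
        return np.median(s, axis=0)     # robust to a single outlier classifier
    if stat == "max":
        return s[-1]                    # most confident classifier per class
    if stat == "trim":
        k = int(trim * s.shape[0])      # drop the k lowest and k highest
        return s[k:s.shape[0] - k].mean(axis=0)
    raise ValueError(f"unknown stat: {stat}")
```

The final decision is the argmax over the combined class scores; the median and trim variants are exactly the combiners whose error reductions the article analyzes.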

  1. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    Science.gov (United States)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  2. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
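The template-based similarity described above (length of the longest common subsequence, normalized) can be sketched with the classic dynamic program (symbol sequences stand in for spike labels; the normalization by the longer sequence is an assumption):

```python
def lcs_len(a, b):
    # classic O(len(a)*len(b)) dynamic program for the
    # longest-common-subsequence length of two sequences
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def similarity(a, b):
    # normalized LCS: compare a test spike sequence against each
    # word template and pick the template with the highest score
    return lcs_len(a, b) / max(len(a), len(b))
```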

  3. Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    Full Text Available To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, while real-world data contains significant noise and irrelevant patterns. Consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism, and can therefore adaptively focus on task-relevant patterns (i.e., a target object) while ignoring task-irrelevant patterns (i.e., the surrounding background of a target object). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking. In online tracking, the pre-trained CPGDN model is fine-tuned to adapt to the specific tracked objects. Finally, to alleviate tracker drift, motivated by the observation that a visual target should be an object rather than background, we combine an edge-box-based object proposal method to further improve tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method.
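
    The point-wise gating idea can be illustrated as an element-wise sigmoid gate that scales each raw feature by a relevance value in (0, 1). This toy version uses hypothetical per-feature weights and biases; in the actual CPGDN the gates are learned jointly with convolutional feature maps:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate_features(features, weights, biases):
    # point-wise gate: each raw feature is scaled by a "relevance" in (0, 1),
    # so irrelevant (e.g. background) features are suppressed toward zero
    gates = [sigmoid(w * f + b) for f, w, b in zip(features, weights, biases)]
    return [g * f for g, f in zip(gates, features)]
```

    Because each gate lies strictly between 0 and 1, a gated feature never grows in magnitude; selection amounts to driving the gates of task-irrelevant features toward 0.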

  4. Comparison of Point and Line Features and Their Combination for Rigid Body Motion Estimation

    DEFF Research Database (Denmark)

    Pilz, Florian; Pugeault, Nicolas; Krüger, Norbert

    2009-01-01

    evaluate and compare the results using line and point features as 3D-2D constraints and we discuss the qualitative advantages and disadvantages of both feature types for RBM estimation. We also demonstrate an improvement in robustness through the combination of these features on large data sets...

  5. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from them are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds via a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  6. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
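
    The embedding step quantizes a low-frequency DCT coefficient onto one of two interleaved lattices, one per watermark bit. Below is a simplified quantization-index-modulation sketch without the distortion-compensation term of full DC-DM; the step size and function names are illustrative:

```python
def embed_bit(x, bit, step=8.0):
    # dithered quantizer: bit 0 uses the lattice {k*step},
    # bit 1 uses the shifted lattice {k*step + step/2}
    d = 0.0 if bit == 0 else step / 2.0
    return step * round((x - d) / step) + d

def extract_bit(y, step=8.0):
    # decode by choosing the lattice whose nearest point is closest to y
    r0 = abs(y - step * round(y / step))
    r1 = abs(y - (step * round((y - step / 2.0) / step) + step / 2.0))
    return 0 if r0 <= r1 else 1
```

    Decoding survives any perturbation smaller than a quarter of the quantization step, which is where the robustness margin against mild attacks comes from.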

  7. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

    , the regulation of time constant update for filters in signal/control path as well as level-independent frequency glides with fixed frequency modulation. First, we scrutinized performance development in keyword recognition using the proposed methods in quiet and noise-corrupted environments. The results argue that multi-scale integration should be used along with CE in order to avoid ambiguous continuity in unvoiced segments. Moreover, the inclusion of all the modifications was observed to guarantee noise-type-independent robustness, particularly under severe interference. The CASA with the auditory model was also implemented in a single-/dual-channel ASR using the reference TIMIT corpus to obtain a more general result. The Hidden Markov Model Toolkit (HTK) was used for phone recognition in various environmental conditions. In a single-channel ASR, the results argue that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. From the observation of a dual-channel ASR, the combined GFCC guarantees the highest performance regardless of interference within speech. Moreover, the consistent improvement of noise robustness by GFCC (unmasked or combined) shows the validity of our proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework proves the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using the framework in this study shows the potential of an effective implementation of the auditory model-based CASA in ASR.

  8. Efficient Generation and Selection of Combined Features for Improved Classification

    KAUST Repository

    Shono, Ahmad N.

    2014-05-01

    This study contributes a methodology and associated toolkit developed to allow users to experiment with the use of combined features in classification problems. Methods are provided for efficiently generating combined features from an original feature set, for efficiently selecting the most discriminating of these generated combined features, and for efficiently performing a preliminary comparison of the classification results when using the original features exclusively against the results when using the selected combined features. The potential benefit of considering combined features in classification problems is demonstrated by applying the developed methodology and toolkit to three sample data sets where the discovery of combined features containing new discriminating information led to improved classification results.
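
    The idea of generating combined features and keeping only the most discriminating ones can be sketched as follows; the pairwise-product generator and the mean-difference score are illustrative stand-ins for the toolkit's actual methods:

```python
from itertools import combinations

def generate_combined(features):
    # features: {name: list of values per sample};
    # generate one combined feature per pair as an element-wise product
    combined = {}
    for (na, va), (nb, vb) in combinations(features.items(), 2):
        combined[f"{na}*{nb}"] = [a * b for a, b in zip(va, vb)]
    return combined

def select_most_discriminating(features, labels, k=1):
    # rank by absolute difference of class means (a crude Fisher-style score);
    # assumes binary labels 0/1 with both classes present
    def score(values):
        pos = [v for v, y in zip(values, labels) if y == 1]
        neg = [v for v, y in zip(values, labels) if y == 0]
        return abs(sum(pos) / len(pos) - sum(neg) / len(neg))
    return sorted(features, key=lambda n: score(features[n]), reverse=True)[:k]
```

    A classic case is XOR-style data, where neither original feature is discriminative on its own but their product is, which is exactly the situation in which combined features carry new information.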

  9. Robustness of radiomic breast features of benign lesions and luminal A cancers across MR magnet strengths

    Science.gov (United States)

    Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.

    2018-02-01

    Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
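
    The Holm-Bonferroni step used to correct the Kolmogorov-Smirnov p-values for multiple comparisons can be written in a few lines; this is a generic implementation of the standard step-down procedure, not the authors' code:

```python
def holm_bonferroni(p_values, alpha=0.05):
    # returns a list of booleans: reject the null hypothesis for each p-value
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # compare the rank-th smallest p-value against alpha / (m - rank)
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject
```

    In the study above, features whose between-field-strength comparison fails to reject the null (same distribution at 1.5 T and 3.0 T) are the candidates deemed robust.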

  10. Robust Sensor-Orientation-Independent Feature Selection for Animal Activity Recognition on Collar Tags

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, Duc Viet; Meijers, Jan Pieter; Bisby, Helena C.; Meratnia, Nirvana; Havinga, Paul J.M.

    2018-01-01

    Fundamental challenges faced by real-time animal activity recognition include variation in motion data due to changing sensor orientations, numerous features, and energy and processing constraints of animal tags. This paper aims at finding small optimal feature sets that are lightweight and robust

  12. Multiscale registration of remote sensing image using robust SIFT features in Steerable-Domain

    Directory of Open Access Journals (Sweden)

    Xiangzeng Liu

    2011-12-01

    Full Text Available This paper proposes a multiscale registration technique using robust Scale Invariant Feature Transform (SIFT features in Steerable-Domain, which can deal with the large variations of scale, rotation and illumination between images. First, a new robust SIFT descriptor is presented, which is invariant under affine transformation. Then, an adaptive similarity measure is developed according to the robust SIFT descriptor and the adaptive normalized cross correlation of feature point’s neighborhood. Finally, the corresponding feature points can be determined by the adaptive similarity measure in Steerable-Domain of the two input images, and the final refined transformation parameters determined by using gradual optimization are adopted to achieve the registration results. Quantitative comparisons of our algorithm with the related methods show a significant improvement in the presence of large scale, rotation changes, and illumination contrast. The effectiveness of the proposed method is demonstrated by the experimental results.

  13. Omnidirectional sparse visual path following with occlusion-robust feature tracking

    OpenAIRE

    Goedemé, Toon; Tuytelaars, Tinne; Van Gool, Luc; Vanacker, Gerolf; Nuttin, Marnix

    2005-01-01

    Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Omnidirectional sparse visual path following with occlusion-robust feature tracking'', Proceedings 6th workshop on omnidirectional vision, camera networks and non-classical cameras, 8 pp., October 21, 2005, Beijing, China.

  14. Robust combined position and formation control for marine surface craft

    DEFF Research Database (Denmark)

    Ihle, Ivar-Andre F.; Jouffroy, Jerome; Fossen, Thor I.

    We consider the robustness properties of a formation control system for marine surface vessels. Intervessel constraint functions are stabilized to achieve the desired formation configuration. We show that the formation dynamics is Input-to-State Stable (ISS) to both environmental perturbations th...

  15. Robust Optimization in Simulation : Taguchi and Krige Combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2009-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  16. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Full Text Available Robust and reversible watermarking is a potential technique in many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain by shifting the histogram of these statistical values for data hiding. First, the original audio is divided into non-overlapping equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
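
    The per-frame statistical feature can be sketched as follows, assuming the prediction error for each group of three samples is the middle sample minus the mean of its two neighbors; the exact predictor is an assumption here, the paper defines its own:

```python
def frame_feature(frame):
    # sum of prediction errors over non-overlapping groups of three samples
    errors = []
    for i in range(0, len(frame) - 2, 3):
        a, b, c = frame[i], frame[i + 1], frame[i + 2]
        errors.append(b - (a + c) / 2.0)  # error predicting the middle sample
    return sum(errors)
```

    The histogram of these per-frame values is then shifted to carry the watermark bits, and shifted back on extraction to restore the original audio exactly.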

  17. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Position-Invariant Robust Features for Long-Term Recognition of Dynamic Outdoor Scenes

    Science.gov (United States)

    Kawewong, Aram; Tangruamsub, Sirinart; Hasegawa, Osamu

    A novel Position-Invariant Robust Feature, designated as PIRF, is presented to address the problem of highly dynamic scene recognition. A PIRF is obtained by identifying existing local features (i.e. SIFT) that have wide-baseline visibility within a place (one place comprises multiple sequential images). These wide-baseline visible features are then represented as a single PIRF, computed as the average of all descriptors associated with it. In particular, PIRFs are robust against highly dynamic changes in a scene: a single PIRF can be matched correctly against many features from many dynamic images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching an individual PIRF to a set of features from test images, with subsequent majority voting to identify the place with the highest number of matched PIRFs. The PIRF system is trained and tested on 2000+ outdoor omnidirectional images and on the COLD datasets. Despite its simplicity, PIRF offers a markedly better recognition rate for dynamic outdoor scenes (ca. 90%) than other features. Additionally, a robot navigation system based on PIRF (PIRF-Nav) can outperform other incremental topological mapping methods in terms of time (70% less) and memory. The number of PIRFs can be reduced further to decrease computation time while retaining high accuracy, which makes the method suitable for long-term recognition and localization.

  19. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

    Full Text Available Abstract High frequency (HF) band has both military and civilian uses. It can be used either as a primary or backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring, e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise can be modeled by Gaussian or bi-kappa distributions, depending on the time of day. Therefore, it is anticipated that a change in the noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time domain (TTD) features, higher order cumulants (HOC), and wavelet-based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.

  20. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  1. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing algorithms of banknote recognition are limited to work for restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To authenticate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves 100% recognition rate on our challenging dataset.

  2. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Abstract Technology advances, the emergence of large-scale multimedia applications, and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time, anywhere, and upload that image to ever-growing image databases. The development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector, and colour correlogram) and three texture features (grey-level co-occurrence matrix, Tamura features, and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the best-performing methods were combined to form a hybrid feature. The developed combined feature was evaluated by building a web-based CBIR system. A web crawler was used to crawl through web sites; images found on those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy is notably high for natural images such as outdoor scenes and images of flowers. Images with a similar colour and texture distribution were also retrieved as similar even though they belonged to different semantic categories.
This can be ideal for an artist who wants
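
    Precision and recall, used above to rank the candidate colour and texture features, are computed per query as follows (a generic definition with hypothetical image ids):

```python
def precision_recall(retrieved, relevant):
    # retrieved: ranked list of image ids returned by the system
    # relevant: set of image ids judged relevant to the query
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

    Precision rewards returning few false matches; recall rewards finding all relevant images, so comparing both surfaces the features worth combining.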

  3. Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate

    Science.gov (United States)

    Padmanaban, Subash; Baker, Justin; Greger, Bradley

    2018-01-01

    Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded due to problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization, on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance of the four feature selection methods. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding
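
    Mutual Information Maximization ranks features by the empirical mutual information between each (discretized) feature and the class label. A compact sketch with hypothetical names, assuming discrete feature values:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    # I(X;Y) in bits from empirical joint counts of discrete values
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) written with raw counts: c*n / (cx*cy)
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

def select_by_mim(features, labels, k):
    # features: {name: discrete values per sample}; keep top-k by I(feature; label)
    ranked = sorted(features,
                    key=lambda f: mutual_information(features[f], labels),
                    reverse=True)
    return ranked[:k]
```

    A feature that perfectly predicts a balanced binary label scores 1 bit; a statistically independent feature scores 0, so ranking by this quantity keeps the most informative channels.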

  4. Robust and fast license plate detection based on the fusion of color and edge feature

    Science.gov (United States)

    Cai, De; Shi, Zhonghan; Liu, Jin; Hu, Chuanping; Mei, Lin; Qi, Li

    2014-11-01

    Extracting a license plate is an important stage in automatic vehicle identification. Image degradation and computational intensity make this task difficult. In this paper, a robust and fast license plate detection method based on the fusion of color and edge features is proposed. Based on the dichromatic reflection model, two new color ratios computed from the RGB color model are introduced and proved to be color invariants. The global color feature extracted with the new color invariants improves the method's robustness, while the local Sobel edge feature guarantees its accuracy. In the experiments, detection performance is good: the results show that the method is robust to illumination, object geometry, and disturbance around the license plates. The method can also detect license plates when the color of the car body is the same as the color of the plates. The processing time for an image of 1000x1000 pixels is nearly 0.2 s. Based on the comparison, the performance of the new ratios is comparable to the commonly used HSI color model.
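
    The paper's two specific color ratios are not reproduced here. As an illustration of the general idea, intensity-normalized chromaticity ratios are invariant to a uniform scaling of (R, G, B), which is one way a color feature becomes robust to illumination intensity:

```python
def chromaticity(r, g, b):
    # intensity-normalized ratios; invariant to (R, G, B) -> (sR, sG, sB)
    s = r + g + b
    if s == 0:
        return (0.0, 0.0)
    return (r / s, g / s)
```

    Doubling the brightness of a pixel leaves these ratios unchanged, so thresholds set on them transfer across lighting conditions.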

  5. Mutual Information Based Dynamic Integration of Multiple Feature Streams for Robust Real-Time LVCSR

    Science.gov (United States)

    Sato, Shoei; Kobayashi, Akio; Onoe, Kazuo; Homma, Shinichi; Imai, Toru; Takagi, Tohru; Kobayashi, Tetsunori

    We present a novel method of integrating the likelihoods of multiple feature streams, representing different acoustic aspects, for robust speech recognition. The integration algorithm dynamically calculates a frame-wise stream weight so that a higher weight is given to a stream that is robust to a variety of noisy environments or speaking styles. Such a robust stream is expected to show discriminative ability. A conventional method proposed for the recognition of spoken digits calculates the weights from the entropy of the whole set of HMM states. This paper extends the dynamic weighting to a real-time large-vocabulary continuous speech recognition (LVCSR) system. The proposed weight is calculated in real time from the mutual information between an input stream and the active HMM states in a search space, without an additional likelihood calculation. Furthermore, the mutual information takes the width of the search space into account by calculating the marginal entropy from the number of active states. In this paper, we integrate three features that are extracted through auditory filters by taking into account the human auditory system's ability to extract amplitude and frequency modulations. Accordingly, features representing energy, amplitude drift, and resonant frequency drift are integrated. These features are expected to provide complementary clues for speech recognition. Speech recognition experiments on field reports and spontaneous commentary from Japanese broadcast news showed that the proposed method reduced word errors by 9.2% in field reports and 4.7% in spontaneous commentaries relative to the best result obtained from a single stream.
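
    The core idea, giving more weight to a stream whose state posteriors are peaked (low entropy) than to one that is uncertain, can be sketched as follows. The inverse-entropy weighting below is an illustrative stand-in for the paper's mutual-information formulation:

```python
import math

def stream_weights(posteriors_per_stream):
    # each entry is a normalized posterior over active HMM states for one stream;
    # a sharper (lower-entropy) posterior suggests a more discriminative stream
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    inv = [1.0 / (entropy(p) + 1e-6) for p in posteriors_per_stream]
    total = sum(inv)
    return [w / total for w in inv]  # frame-wise weights summing to 1
```

    The weights are recomputed every frame, so a stream degraded by the current noise type loses influence only while it is actually unreliable.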

  6. Feature-Learning-Based Printed Circuit Board Inspection via Speeded-Up Robust Features and Random Forest

    Directory of Open Access Journals (Sweden)

    Eun Hye Yuk

    2018-06-01

    Full Text Available With the coming of the 4th industrial revolution era, manufacturers produce high-tech products. As the production process is refined, inspection technologies become more important. Specifically, the inspection of a printed circuit board (PCB), which is an indispensable part of electronic products, is an essential step to improve process quality and yield. Image processing techniques are utilized for inspection, but they have limitations because image backgrounds differ and the kinds of defects increase. To overcome these limitations, methods based on machine learning have recently been used. These methods can inspect without a normal reference image by learning fault patterns. Therefore, this paper proposes a method that can detect various types of defects using machine learning. The proposed method first extracts features through speeded-up robust features (SURF), then learns the fault pattern and calculates probabilities. After that, we generate a weighted kernel density estimation (WKDE) map weighted by the probabilities to consider the density of the features. Because the WKDE map highlights areas where defects are concentrated, it improves inspection performance. To verify the proposed method, we apply it to PCB images and confirm its performance.
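
    The WKDE map can be sketched as a probability-weighted Gaussian kernel density over feature locations; the 2D point layout, bandwidth, and names here are illustrative:

```python
import math

def wkde(points, weights, query, bandwidth=1.0):
    # weighted Gaussian kernel density at a 2D query location: each feature
    # point contributes mass proportional to its estimated fault probability
    total = 0.0
    for (px, py), w in zip(points, weights):
        d2 = (query[0] - px) ** 2 + (query[1] - py) ** 2
        total += w * math.exp(-d2 / (2.0 * bandwidth ** 2))
    return total
```

    Regions where many high-probability fault features cluster produce a high density, so thresholding the map localizes the defect.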

  7. Hierarchical Fuzzy Feature Similarity Combination for Presentation Slide Retrieval

    Directory of Open Access Journals (Sweden)

    A. Kushki

    2009-02-01

    Full Text Available This paper proposes a novel XML-based system for retrieval of presentation slides to address the growing data mining needs in presentation archives for educational and scholarly settings. In particular, contextual information, such as structural and formatting features, is extracted from the open format XML representation of presentation slides. In response to a textual user query, each extracted feature is used to compute a fuzzy relevance score for each slide in the database. The fuzzy scores from the various features are then combined through a hierarchical scheme to generate a single relevance score per slide. Various fusion operators and their properties are examined with respect to their effect on retrieval performance. Experimental results indicate a significant increase in retrieval performance measured in terms of precision-recall. The improvements are attributed to both the incorporation of the contextual features and the hierarchical feature combination scheme.
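
    The hierarchical combination of per-feature fuzzy relevance scores can be sketched with standard fusion operators; the operators and grouping below are illustrative, not the paper's exact scheme:

```python
def fuse_scores(scores, op="product"):
    # combine per-feature fuzzy relevance scores in [0, 1] into one slide score
    if op == "min":       # strict: the slide must match on every feature
        return min(scores)
    if op == "max":       # lenient: one strong feature suffices
        return max(scores)
    result = 1.0          # product t-norm: all features contribute
    for s in scores:
        result *= s
    return result

def hierarchical_fuse(grouped_scores, inner="max", outer="product"):
    # first fuse scores within each feature group (e.g. structural vs.
    # formatting features), then fuse the group scores across groups
    return fuse_scores([fuse_scores(g, inner) for g in grouped_scores], outer)
```

    Choosing a lenient operator within groups and a strict one across groups lets one strong structural cue count while still requiring agreement between structure and formatting.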

  8. Robust classification of motor imagery EEG signals using statistical time–domain features

    International Nuclear Information System (INIS)

    Khorshidtalab, A; Salami, M J E; Hamedi, M

    2013-01-01

    The tradeoff between computational complexity and speed, in addition to growing demands for real-time BMI (brain–machine interface) systems, exposes the necessity of applying methods with the least possible complexity. Willison amplitude (WAMP) and slope sign change (SSC) are two promising time–domain features, but only if the right threshold value is defined for them. To overcome the drawback of determining a suitable threshold value by trial and error, a modified WAMP and a modified SSC are proposed in this paper. In addition, a comprehensive assessment of statistical time–domain features is presented, in which their effectiveness is evaluated with a support vector machine (SVM). To ensure the accuracy of the results obtained by the SVM, the performance of each feature is reassessed with supervised fuzzy C-means. The general assessment shows that every subject had at least one performance near or greater than 80%. The obtained results prove that for BMI applications in which a few errors can be tolerated, these feature–classifier combinations are suitable. Moreover, features that performed satisfactorily were selected for feature combination. Combinations of the selected features were evaluated with the SVM and significantly improved the results, in some cases up to full accuracy. (paper)
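
    The standard threshold-gated definitions of WAMP and SSC can be written directly; the modified variants proposed in the paper are not reproduced here:

```python
def wamp(signal, threshold):
    # Willison amplitude: number of consecutive-sample differences
    # whose magnitude reaches the threshold
    return sum(1 for a, b in zip(signal, signal[1:]) if abs(a - b) >= threshold)

def ssc(signal, threshold):
    # slope sign changes: count slope reversals whose product of adjacent
    # differences reaches the threshold
    count = 0
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if (cur - prev) * (cur - nxt) >= threshold:
            count += 1
    return count
```

    Both features collapse to noise counters when the threshold is too low and to zero when it is too high, which is why threshold choice dominates their usefulness.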

  9. Haar-like Features for Robust Real-Time Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Face recognition is still a very challenging task when the input face image is noisy, occluded by some obstacles, of very low-resolution, not facing the camera, and not properly illuminated. These problems make the feature extraction and consequently the face recognition system unstable. The proposed system in this paper introduces the novel idea of using Haar-like features, which have commonly been used for object detection, along with a probabilistic classifier for face recognition. The proposed system is simple, real-time, effective and robust against most of the mentioned problems. Experimental results on public databases show that the proposed system indeed outperforms the state-of-the-art face recognition systems.

  10. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use or land cover investigation. A large number of approaches exploiting different kinds of low level features have been proposed in the literature, often with conflicting conclusions, so a systematic assessment of low level features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features, including HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images, for classification of three publicly available datasets. Second, we propose to transfer ground level scene attributes to remote sensing images. Third, we combine both low-level features and mid-level visual attributes to further improve the classification performance. Experimental results demonstrate that i) dense SIFT and HOG features are more robust than other features for VHR scene image description; ii) visual attributes compete with a combination of low level features; iii) multiple feature combination achieves the best performance under different settings.

  11. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm is proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited angle problems.

  12. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses logistic regression to map the regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping the four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
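    The abstract does not spell out the mapping from a four-dimensional regional feature to a fifteen-dimensional vector; one expansion that yields exactly fifteen terms is a degree-2 polynomial basis (1 bias + 4 linear + 4 squared + 6 pairwise products). The sketch below is an assumption-laden illustration of such a mapping followed by a logistic-regression saliency score; the weight values are placeholders for coefficients that would be learned from labelled regions:

```python
import numpy as np
from itertools import combinations

def expand(f):
    """Map a 4-D regional feature to a 15-D vector: bias, linear,
    squared and pairwise-product terms (1 + 4 + 4 + 6 = 15).
    The paper's exact expansion is not given; this is one guess."""
    f = np.asarray(f, dtype=float)
    pairs = [f[i] * f[j] for i, j in combinations(range(4), 2)]
    return np.concatenate(([1.0], f, f ** 2, pairs))

def saliency_score(f, w):
    """Logistic-regression mapping from a regional feature vector
    to a saliency score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-w @ expand(f)))

w = np.zeros(15)                   # weights would be learned from data
w[1:5] = [2.0, 1.0, 1.0, 0.5]      # placeholder linear coefficients
print(saliency_score([0.9, 0.8, 0.7, 0.6], w))
```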

  13. Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.

    Directory of Open Access Journals (Sweden)

    Clément Bailly

    This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of PET/CT 68Ga-DOTANOC in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and noise in the input data was predominant, as only 4 TFs presented a high/intermediate robustness against SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough against SUVmax except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.

  14. Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.

    Science.gov (United States)

    Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas

    2016-01-01

    This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of PET/CT 68Ga-DOTANOC in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and noise in the input data was predominant, as only 4 TFs presented a high/intermediate robustness against SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough against SUVmax except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.

  15. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial for understanding environmental change and protecting marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single imaging device and a relatively narrow taxonomic scope; a truly practical system for automatic plankton classification does not yet exist, and this study partly fills that gap. Guided by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combine general features with robust features, in particular adding features like the Inner-Distance Shape Context for morphological representation. Second, we divide all the features into different types from multiple views and feed them to multiple classifiers instead of only one, combining the kernel matrices computed from the different feature types optimally via multiple kernel learning. Moreover, we also apply a feature selection method to choose optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system…
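    The core idea of combining multiple view features via kernels can be illustrated without the full MKL machinery. The sketch below forms one RBF Gram matrix per feature view and takes a convex combination with fixed weights; genuine MKL would learn those weights jointly with the classifier, and the feature dimensions, weights and gamma values here are arbitrary stand-ins:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF Gram matrix for one feature view."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def combine_kernels(views, weights, gammas):
    """Convex combination of per-view kernels. MKL would optimize
    the weights; here they are fixed for illustration."""
    return sum(w * rbf_kernel(X, g) for X, w, g in zip(views, weights, gammas))

rng = np.random.default_rng(1)
shape_feats = rng.standard_normal((6, 10))    # e.g. Inner-Distance Shape Context
texture_feats = rng.standard_normal((6, 32))  # e.g. a texture descriptor view
K = combine_kernels([shape_feats, texture_feats], [0.6, 0.4], [0.1, 0.05])
print(K.shape)
```

    Because a weighted sum of positive semi-definite kernels (with non-negative weights) is itself a valid kernel, the combined matrix can be fed directly to any kernel classifier that accepts precomputed Gram matrices.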

  16. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    Science.gov (United States)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which the effective singular values are selected weakens the robustness of this technique: improper selection of effective singular values leads to poor SVD de-noising performance. Moreover, the computational complexity of SVD is too high for real-time applications. In this paper, to eliminate this uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI) and based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to localize the feature information of a transient flaw echo, and then the MSI is obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of this STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time flaw detection from noisy data.
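    The STSVD-based indicator can be prototyped directly from its description: slide an overlapping window along the signal, build a small trajectory matrix per segment, and record its largest singular value. The window, hop and row-count parameters below are illustrative guesses, not the paper's values:

```python
import numpy as np

def msi(signal, win=32, hop=8, rows=8):
    """Maximum singular value indicator: for each overlapping window,
    form a Hankel-like trajectory matrix and keep its top singular value."""
    scores = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        cols = win - rows + 1
        # rows x cols trajectory matrix of shifted sub-windows
        H = np.lib.stride_tricks.sliding_window_view(seg, cols)
        scores.append(np.linalg.svd(H, compute_uv=False)[0])
    return np.array(scores)

rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(512)
flaw = np.zeros(512)
flaw[250:266] = np.sin(np.linspace(0, 4 * np.pi, 16))  # transient flaw echo
scores = msi(noise + flaw)
print(int(np.argmax(scores)) * 8)  # sample index where the MSI peaks
```

    The coherent structure of the flaw echo inflates the top singular value of the windows that contain it, so the peak of the indicator marks the flaw location without choosing how many singular values to retain.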

  17. Fast and robust generation of feature maps for region-based visual attention.

    Science.gov (United States)

    Aziz, Muhammad Zaheer; Mertsching, Bärbel

    2008-05-01

    Visual attention is one of the important phenomena in biological vision which can be emulated to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention, in contrast to the late clustering done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon extended findings from color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of the obtained regions, and saliency is then evaluated using the rarity criterion. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over existing techniques is the reusability of the salient regions in high-level machine vision procedures due to the preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the main stream of machine vision, and systems with restricted computing resources such as mobile robots can benefit from its advantages.

  18. Robust feature estimation by non-rigid hierarchical image registration and its application in disparity measurement

    Science.gov (United States)

    Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan

    2017-03-01

    Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist human operators in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Depth information is lost in a 2D image, so acquiring accurate 3D data from 2D images remains an open issue. FAST, SURF and SIFT are well-known spatial domain techniques for feature extraction and hence image registration to find correspondence between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from global to local level, resulting in robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection; the same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 × 370 stereo pair is approximately 500 ms on a low-cost DSP.
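    The normalized phase correlation at the heart of the registration step is straightforward to sketch for the rigid, global case; the paper's method applies it hierarchically and non-rigidly over quad-tree blocks:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation of b relative to a from the
    peak of the inverse FFT of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12          # normalization: keep phase only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlate(shifted, img))
```

    Running the same estimator on each quad-tree block at successive levels yields the global-to-local match field described above.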

  19. Local appearance features for robust MRI brain structure segmentation across scanning protocols

    DEFF Research Database (Denmark)

    Achterberg, H.C.; Poot, Dirk H. J.; van der Lijn, Fedde

    2013-01-01

    Segmentation of brain structures in magnetic resonance images is an important task in neuro image analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution … with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multiatlas segmentation performed poorly with an average Dice overlap of 0.698, the local appearance model based on the new acquisition…

  20. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    Science.gov (United States)

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex program with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  1. Combining shallow and deep processing for a robust, fast, deep-linguistic dependency parser

    OpenAIRE

    Schneider, G

    2004-01-01

    This paper describes Pro3Gres, a fast, robust, broad-coverage parser that delivers deep-linguistic grammatical relation structures as output, which are closer to predicate-argument structures and more informative than pure constituency structures. The parser stays as shallow as is possible for each task, combining shallow and deep-linguistic methods by integrating chunking and by expressing the majority of long-distance dependencies in a context-free way. It combines statistical and rule-based…

  2. Combining heterogenous features for 3D hand-held object recognition

    Science.gov (United States)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in the areas of human-machine interaction and multimedia retrieval. However, due to the problems of visual polysemy and concept polymorphism, it is still a great challenge to obtain reliable recognition results for 2D images. Recently, with the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge can be relieved because the depth channel brings more information. A very special and important case of object recognition is hand-held object recognition, as the hand is a straightforward and natural way for both human-human and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Hand-crafted features preserve low-level information such as shape and color, but have shown weakness in representing high-level semantic information compared with automatically learned features, especially deep features. Deep features have shown great advantages in large scale dataset recognition but are not always robust to rotation or scale variance compared with hand-crafted features. In this paper, we propose a method to combine hand-crafted point cloud features and deep learned features in the RGB and depth channels. First, hand-held object segmentation is implemented by using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features in different stages using linear concatenation and multiple kernel learning (MKL). Then a trained model is used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.

  3. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
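    The classification of coplanar and collinear points can be illustrated with ordinary (non-robust) PCA on a local neighbourhood; the paper's robust PCA procedure and robust complete linkage clustering are substantially more involved. The sketch below uses the common eigenvalue-based linearity/planarity/scatter measures as stand-ins:

```python
import numpy as np

def dimensionality(points):
    """Classify an N x 3 neighbourhood as 'linear', 'planar' or
    'volumetric' from the eigenvalues of its covariance matrix.
    (Non-robust PCA shown; the paper uses a robust variant.)"""
    evals = np.linalg.eigvalsh(np.cov(points.T))[::-1]  # descending
    l1, l2, l3 = np.sqrt(np.maximum(evals, 0))
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scatter = l3 / l1
    return max([('linear', linearity), ('planar', planarity),
                ('volumetric', scatter)], key=lambda t: t[1])[0]

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, (500, 1))
beam = np.hstack([t, 0.01 * rng.standard_normal((500, 2))])    # line-like
slab = np.hstack([rng.uniform(-1, 1, (500, 2)),
                  0.01 * rng.standard_normal((500, 1))])       # plane-like
print(dimensionality(beam), dimensionality(slab))
```

    A single gross outlier can already flip these labels, which is the motivation for the robust PCA classification stage described in the abstract.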

  4. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and the physiologic cup, but the lack of agreement among experts is still the main diagnostic problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, and the major and minor radii of the optic disc and cup, in addition to all the ratios among the previous parameters, may enable better automatic grading of glaucoma. This paper presents a strategy to merge morphometric features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  5. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search for isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
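    Centroid detection by regular moments is compact once the zeroth- and first-order moments are written out. A minimal sketch on a synthetic target blob (the SURF matching and registration stages are omitted):

```python
import numpy as np

def centroid(image):
    """Target centroid from regular moments: (m10/m00, m01/m00)
    computed over pixel intensities."""
    y, x = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    return (x * image).sum() / m00, (y * image).sum() / m00

img = np.zeros((64, 64))
img[20:30, 40:50] = 1.0   # bright target blob in an otherwise empty image
cx, cy = centroid(img)
print(cx, cy)
```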

  6. A similarity measure method combining location feature for mammogram retrieval.

    Science.gov (United States)

    Wang, Zhiqiong; Xin, Junchang; Huang, Yukun; Li, Chen; Xu, Ling; Li, Yang; Zhang, Hao; Gu, Huizi; Qian, Wei

    2018-05-28

    Breast cancer, the most common malignancy among women, has a high mortality rate in clinical practice. Early detection, diagnosis and treatment can greatly reduce the mortality of breast cancer. Mammogram retrieval can help doctors find early breast lesions effectively, and determining a reasonable feature set for image similarity measurement can effectively improve retrieval accuracy. This paper proposes a similarity measure method combining a location feature for mammogram retrieval. Firstly, the images are pre-processed, the regions of interest are detected and the lesions are segmented in order to get the center point and radius of the lesions. Then, the Coherent Point Drift method is used for image registration with a pre-defined standard image. The center point and radius of the lesions after registration are obtained and the standard location feature of the image is constructed. This standard location feature is used to compute the location similarity for each image pair, from the query image to each dataset image in the database. Next, the content features of the image are extracted, including the Histogram of Oriented Gradients, the Edge Direction Histogram, the Local Binary Pattern and the Gray Level Histogram, and the image pair content similarity is calculated using the Earth Mover's Distance. Finally, the location similarity and content similarity are fused to form the image fusion similarity, and the specified number of most similar images is returned accordingly. In the experiment, 440 mammograms from Chinese women in Northeast China are used as the database. When fusing 40% lesion location feature similarity and 60% content feature similarity, the results show clear advantages: precision is 0.83, recall is 0.76, the comprehensive indicator is 0.79, satisfaction is 96.0%, the mean is 4.2 and the variance is 17.7. The results show that the precision and recall of this…
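    The fusion step can be sketched compactly. The 1-D Earth Mover's Distance between two histograms over shared bins reduces to the L1 distance between their CDFs; the conversion of the content distance into a similarity below is an assumed form, while the 40/60 weighting is the one reported in the abstract:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized histograms on the
    same bins (1-D case: L1 distance of the cumulative distributions)."""
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

def fused_similarity(loc_sim, content_dist, w_loc=0.4, w_content=0.6):
    """Weighted fusion of location similarity and content similarity.
    Turning the content distance into a similarity via 1/(1+d) is an
    assumption; the paper does not state its exact form."""
    return w_loc * loc_sim + w_content * (1.0 / (1.0 + content_dist))

h_query = np.array([0.1, 0.4, 0.3, 0.2])   # e.g. gray-level histograms
h_db = np.array([0.2, 0.3, 0.3, 0.2])
print(fused_similarity(0.9, emd_1d(h_query, h_db)))
```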

  7. Cyclic Solvent Vapor Annealing for Rapid, Robust Vertical Orientation of Features in BCP Thin Films

    Science.gov (United States)

    Paradiso, Sean; Delaney, Kris; Fredrickson, Glenn

    2015-03-01

    Methods for reliably controlling block copolymer self-assembly have seen much attention over the past decade as new applications for nanostructured thin films emerge in the fields of nanopatterning and lithography. While solvent-assisted annealing techniques are established as flexible and simple methods for achieving long range order, solvent annealing alone exhibits a very weak thermodynamic driving force for vertically orienting domains with respect to the free surface. To address the desire for oriented features, we have investigated a cyclic solvent vapor annealing (CSVA) approach that combines the mobility benefits of solvent annealing with the selective stress experienced by structures oriented parallel to the free surface as the film is repeatedly swollen with solvent and dried. Using dynamical self-consistent field theory (DSCFT) calculations, we establish the conditions under which the method significantly outperforms both static and cyclic thermal annealing and implicate the orientation selection as a consequence of the swelling/deswelling process. Our results suggest that CSVA may prove to be a potent method for the rapid formation of highly ordered, vertically oriented features in block copolymer thin films.

  8. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    Science.gov (United States)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.

  9. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    Science.gov (United States)

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM), which trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures, and 2) wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating a large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. By combining STM with MIMO-OFDM techniques, the resulting system offers flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  10. Cue combination in a combined feature contrast detection and figure identification task.

    Science.gov (United States)

    Meinhardt, Günter; Persike, Malte; Mesenholl, Björn; Hagemann, Cordula

    2006-11-01

Target figures defined by feature contrast in spatial frequency, orientation, or both cues had to be detected in Gabor random fields, and their shape had to be identified in a dual-task paradigm. Performance improved with increasing feature contrast and was strongly correlated across the two tasks. Subjects performed significantly better with combined cues than with single cues. The improvement due to cue summation was stronger than predicted by the assumption of independent feature-specific mechanisms, and increased with the performance level achieved with single cues until it was limited by ceiling effects. Further, cue summation was also strongly correlated across tasks: when there was benefit due to the additional cue in feature contrast detection, there was also benefit in figure identification. For the same performance level achieved with single cues, cue summation was generally larger in figure identification than in feature contrast detection, indicating more benefit when processes of shape and surface formation are involved. Our results suggest that cue combination improves spatial form completion and figure-ground segregation in noisy environments, and therefore leads to more stable object vision.
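The benchmark implied by "independent feature-specific mechanisms" is commonly the quadratic-summation rule for detectability, under which two cues combine as the Euclidean sum of their single-cue sensitivities (a standard model sketch, not taken from this study):

```python
import math

def d_prime_combined(d1, d2):
    # independent-mechanisms (quadratic summation) prediction:
    # combined sensitivity is the Euclidean sum of single-cue sensitivities
    return math.hypot(d1, d2)

# two equally detectable single cues predict only a sqrt(2) improvement;
# observed combined performance above this indicates super-additive summation
prediction = d_prime_combined(1.0, 1.0)  # -> about 1.414
```

Summation exceeding this prediction, as reported above, is the signature of interacting rather than independent cue mechanisms.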

  11. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

Full Text Available We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations for peak-merger checking, and root extraction. The spectral peak-picking method is employed to locate the formant candidates, and root extraction is used to resolve the peak-merger problem. The location of and distance between the extracted formants are also utilized to efficiently identify suspected peak mergers. The proposed algorithm does not require much computation, and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.
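The spectral peak-picking stage can be illustrated with a toy example (an illustrative sketch, not the proposed algorithm; the synthetic two-resonance signal and naive DFT search are assumptions): local maxima of a magnitude spectrum serve as formant candidates.

```python
import cmath
import math

def dft_mag(x):
    # naive DFT magnitude spectrum (first half, up to Nyquist)
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

def peak_candidates(mag):
    # local maxima of the magnitude spectrum, strongest first
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k - 1] < mag[k] > mag[k + 1]]
    return sorted(peaks, key=lambda k: mag[k], reverse=True)

# synthetic "speech-like" frame with resonances near 700 Hz and 2200 Hz
fs, N = 8000, 256
signal = [math.sin(2 * math.pi * 700 * n / fs)
          + 0.5 * math.sin(2 * math.pi * 2200 * n / fs) for n in range(N)]
mag = dft_mag(signal)
formant_hz = sorted(k * fs / N for k in peak_candidates(mag)[:2])
```

Peak merging is exactly the failure mode this stage cannot handle alone: two closely spaced resonances can produce a single spectral maximum, which is why the abstract's root-extraction step is needed as a complement.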

  12. Designing basin-customized combined drought indices via feature extraction

    Science.gov (United States)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

The socio-economic costs of drought are progressively increasing worldwide due to the ongoing alteration of hydro-meteorological regimes induced by climate change. Although drought management is widely studied in the literature, most traditional drought indices, which generally rely on ad hoc formulations and cannot be generalized to different contexts, fail to detect critical events in highly regulated systems. In this study, we contribute a novel framework for the design of a basin-customized drought index. This index represents a surrogate of the state of the basin and is computed by combining the available information about water availability in the system to reproduce a representative target variable for the drought condition of the basin (e.g., water deficit). To select the relevant variables and how to combine them, we use an advanced feature extraction algorithm called Wrapper for Quasi Equally Informative Subset Selection (W-QEISS). The W-QEISS algorithm relies on a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables (cardinality), and optimizing relevance and redundancy of the subset. The accuracy objective is evaluated through the calibration of a pre-defined model (i.e., an extreme learning machine) of the water deficit for each candidate subset of variables, with the index selected from the resulting solutions identifying a suitable compromise between accuracy, cardinality, relevance, and redundancy. The proposed methodology is tested in the case study of Lake Como in northern Italy, a regulated lake mainly operated for irrigation supply to four downstream agricultural districts. In the absence of an institutional drought monitoring system, we constructed the combined index using all the hydrological variables from the existing monitoring system as well as the most common drought indicators at multiple time aggregations. 
The soil

  13. Histomorphological features of combined forms of tuberculosis and lung cancer

    Directory of Open Access Journals (Sweden)

    Savenkov Y.F.

    2017-04-01

Full Text Available Pathological features of combined forms of tuberculosis and non-small cell lung cancer were studied in 72 patients who underwent radical surgical resection via transsternal access with mediastinal lymph node dissection, pneumonectomy predominating (63.9%). Three main categories of pathological changes were identified: cancer on the background of post-tuberculosis changes, cancer in tuberculoma, and cancer in the wall of an active cavity. Post-tuberculosis changes were represented by dense foci, fibrosis, areas of cirrhosis, and sanitized cavities, with a histological predominance of coarse-fibered connective tissue with giant-cell granulomas and with areas of lung tissue showing atypical proliferation and metaplasia of the bronchopulmonary epithelium, which is a precancerous condition. The malignant tumor process was represented mainly by adenocarcinomas and squamous cell cancer and showed a polymorphic macro- and microscopic picture. Cancer in tuberculoma and in the fibrous cavity wall was distinguished by pronounced activity of the tuberculosis process in the form of lymphohistiocytic infiltration, foci of caseous necrosis, and the presence of a pronounced granulation layer with Pirogov-Langhans cells. The basic morphological causes of carcinogenesis due to secondary changes of lung tissue in patients with tuberculosis were determined. The features of metastasis of malignant tumors on the background of specific tuberculous and post-tuberculosis changes in regional lymph nodes, and the interrelation between the frequency of metastatic lesions and the severity of tuberculosis and post-tuberculosis changes in them, were studied; this has clinical significance in the surgical treatment of patients with concomitant forms of tuberculosis and lung cancer.

  14. SU-D-BRA-05: Toward Understanding the Robustness of Radiomics Features in CT

    Energy Technology Data Exchange (ETDEWEB)

    Mackin, D; Zhang, L; Yang, J; Jones, A; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); Fave, X; Fried, D [UTH-GSBS, Houston, TX (United States); Taylor, B [Baylor College of Medicine, Houston, TX (United States); Rodriguez-Rivera, E [Houston Methodist Hospital, Houston, TX (United States); Dodge, C [Texas Children’s Hospital, Houston, TX (United States)

    2015-06-15

Purpose: To gauge the impact of inter-scanner variability on radiomics features in computed tomography (CT). Methods: We compared the radiomics features calculated for 17 scans of the specially designed Credence Cartridge Radiomics (CCR) phantom with those calculated for 20 scans of non–small cell lung cancer (NSCLC) tumors. The scans were acquired at four medical centers using General Electric, Philips, Siemens, and Toshiba CT scanners. Each center used its own routine thoracic imaging protocol. To produce a large dynamic range of radiomics feature values, the CCR phantom has 10 cartridges comprising different materials. The features studied were derived from the neighborhood gray-tone difference matrix or the image intensity histogram. To quantify the significance of the inter-scanner variability, we introduced the metric “feature noise”, which expresses the ratio of inter-scanner variability to inter-patient variability in decibels, with positive values indicating substantial noise. We performed hierarchical clustering to look for dependence of the features on the scan acquisition parameters. Results: For 5 of the 10 features studied, the inter-scanner variability was larger than the inter-patient variability. Of the 10 materials in the phantom, shredded rubber seemed to produce feature values most similar to those of the NSCLC tumors. The feature busyness had the greatest feature noise (14.3 dB), whereas texture strength had the least (−14.6 dB). Hierarchical clustering indicated that the features depended in part on the scanner manufacturer, image slice thickness, and pixel size. Conclusion: The variability in the values of radiomics features calculated for CT images of a radiomics phantom can be substantial relative to the variability in the values of these features calculated for CT images of NSCLC tumors. These inter-scanner differences and their effects should be carefully considered in future radiomics studies.
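The "feature noise" metric can be sketched as follows (a hedged reconstruction; whether the ratio is taken between standard deviations or variances, and whether a 10 or 20 dB scaling applies, is not specified in the abstract and is assumed here):

```python
import math

def feature_noise_db(inter_scanner_sd, inter_patient_sd):
    """Ratio of inter-scanner to inter-patient variability, in decibels.
    Assumes amplitude-like quantities (standard deviations), hence the
    20*log10 convention.  Positive values mean scanner effects dominate
    patient-to-patient differences for that feature."""
    return 20.0 * math.log10(inter_scanner_sd / inter_patient_sd)
```

Under this convention, a feature whose scanner-induced spread equals its patient-induced spread scores exactly 0 dB, matching the abstract's reading that positive values flag substantial noise.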

  15. SU-D-BRA-05: Toward Understanding the Robustness of Radiomics Features in CT

    International Nuclear Information System (INIS)

    Mackin, D; Zhang, L; Yang, J; Jones, A; Court, L; Fave, X; Fried, D; Taylor, B; Rodriguez-Rivera, E; Dodge, C

    2015-01-01

Purpose: To gauge the impact of inter-scanner variability on radiomics features in computed tomography (CT). Methods: We compared the radiomics features calculated for 17 scans of the specially designed Credence Cartridge Radiomics (CCR) phantom with those calculated for 20 scans of non–small cell lung cancer (NSCLC) tumors. The scans were acquired at four medical centers using General Electric, Philips, Siemens, and Toshiba CT scanners. Each center used its own routine thoracic imaging protocol. To produce a large dynamic range of radiomics feature values, the CCR phantom has 10 cartridges comprising different materials. The features studied were derived from the neighborhood gray-tone difference matrix or the image intensity histogram. To quantify the significance of the inter-scanner variability, we introduced the metric “feature noise”, which expresses the ratio of inter-scanner variability to inter-patient variability in decibels, with positive values indicating substantial noise. We performed hierarchical clustering to look for dependence of the features on the scan acquisition parameters. Results: For 5 of the 10 features studied, the inter-scanner variability was larger than the inter-patient variability. Of the 10 materials in the phantom, shredded rubber seemed to produce feature values most similar to those of the NSCLC tumors. The feature busyness had the greatest feature noise (14.3 dB), whereas texture strength had the least (−14.6 dB). Hierarchical clustering indicated that the features depended in part on the scanner manufacturer, image slice thickness, and pixel size. Conclusion: The variability in the values of radiomics features calculated for CT images of a radiomics phantom can be substantial relative to the variability in the values of these features calculated for CT images of NSCLC tumors. These inter-scanner differences and their effects should be carefully considered in future radiomics studies.

  16. CANDU combined cycles featuring gas-turbine engines

    International Nuclear Information System (INIS)

    Vecchiarelli, J.; Choy, E.; Peryoga, Y.; Aryono, N.A.

    1998-01-01

    In the present study, a power-plant analysis is conducted to evaluate the thermodynamic merit of various CANDU combined cycles in which continuously operating gas-turbine engines are employed as a source of class IV power restoration. It is proposed to utilize gas turbines in future CANDU power plants, for sites (such as Indonesia) where natural gas or other combustible fuels are abundant. The primary objective is to eliminate the standby diesel-generators (which serve as a backup supply of class III power) since they are nonproductive and expensive. In the proposed concept, the gas turbines would: (1) normally operate on a continuous basis and (2) serve as a reliable backup supply of class IV power (the Gentilly-2 nuclear power plant uses standby gas turbines for this purpose). The backup class IV power enables the plant to operate in poison-prevent mode until normal class IV power is restored. This feature is particularly beneficial to countries with relatively small and less stable grids. Thermodynamically, the advantage of the proposed concept is twofold. Firstly, the operation of the gas-turbine engines would directly increase the net (electrical) power output and the overall thermal efficiency of a CANDU power plant. Secondly, the hot exhaust gases from the gas turbines could be employed to heat water in the CANDU Balance Of Plant (BOP) and therefore improve the thermodynamic performance of the BOP. This may be accomplished via several different combined-cycle configurations, with no impact on the current CANDU Nuclear Steam Supply System (NSSS) full-power operating conditions when each gas turbine is at maximum power. For instance, the hot exhaust gases may be employed for feedwater preheating and steam reheating and/or superheating; heat exchange could be accomplished in a heat recovery steam generator, as in conventional gas-turbine combined-cycle plants. The commercially available GateCycle power plant analysis program was applied to conduct a
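The twofold thermodynamic advantage described above can be quantified with the standard topping/bottoming combined-cycle relation (a textbook idealization, not the GateCycle model): the bottoming steam cycle works only on the fraction of heat rejected by the topping gas turbine.

```python
def combined_cycle_eff(eta_topping, eta_bottoming):
    # the bottoming cycle recovers the (1 - eta_topping) rejected-heat
    # fraction, so efficiencies combine as eta_B + (1 - eta_B) * eta_R
    return eta_topping + (1.0 - eta_topping) * eta_bottoming

# e.g. a 35%-efficient gas turbine topping a 30%-efficient steam cycle
eta_cc = combined_cycle_eff(0.35, 0.30)  # -> 0.545
```

Illustrative numbers only; the point is that even a modest bottoming cycle lifts overall efficiency well above either cycle alone, which is the motivation for routing the gas-turbine exhaust into the CANDU balance of plant.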

  17. A Frequency-Tracking and Impedance-Matching Combined System for Robust Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Yanting Luo

    2017-01-01

Full Text Available One of the greatest challenges in powering embedded devices with a magnetically coupled resonant wireless power transfer (WPT) system is that the amount of power delivered to the load is very sensitive to load impedance variations. Previous adaptive impedance-matching (IM) technologies have drawbacks because adding IM networks, relay coils, or other compensating components on the receiver side significantly increases the receiver size. In this paper, a novel frequency-tracking and impedance-matching combined system is proposed to improve the robustness of wireless power transfer for embedded devices. The characteristics of the improved WPT system are investigated theoretically based on a two-port network model. Simulation and experimental studies are carried out to validate the proposed system. The results suggest that the frequency-tracking and impedance-matching combined WPT system can quickly find the best matching points and maintain high power transmission efficiency and output power when the load impedance changes.
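For intuition on why matching matters, the maximum achievable link efficiency of a magnetically coupled resonant pair is a standard function of the figure of merit k²Q₁Q₂ (a textbook result, not this paper's two-port derivation); loads away from the optimum fall below this bound, which is what the matching network compensates.

```python
import math

def max_link_efficiency(k, q1, q2):
    """Peak efficiency of a resonant inductive link at the optimum load,
    given coupling coefficient k and coil quality factors q1, q2."""
    x = (k ** 2) * q1 * q2  # link figure of merit
    return x / (1.0 + math.sqrt(1.0 + x)) ** 2

# even weakly coupled coils can be efficient if both Q factors are high
eta = max_link_efficiency(0.05, 200, 200)  # -> about 0.819
```

Because the bound depends only on k and the Q factors, any efficiency lost to load mismatch is recoverable in principle, motivating the combined tracking-and-matching scheme.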

  18. Features of Type 2 Diabetes Mellitus in Combination with Hypothyroidism

    Directory of Open Access Journals (Sweden)

    T.Yu. Yuzvenko

    2015-11-01

Full Text Available Background. Recent decades have been characterized by a considerable increase in the prevalence of endocrine disorders and a change in their structure, above all cases of polyendocrinopathy, among which the combination of diabetes mellitus (DM) and thyroid diseases occupies a special place. The increasing incidence of type 2 DM associated with hypothyroidism affects the clinical course of this pathology and remains a topical problem of modern medical science. The objective: to study the prevalence of hypothyroidism in patients with type 2 DM and to establish the clinical features of DM type 2 in combination with hypothyroidism. Materials and methods. We examined 179 patients with DM associated with primary hypothyroidism, including 64 patients with DM type 1 and 115 patients with type 2 DM. The comparison group consisted of 62 patients with DM without hypothyroidism (27 of them with DM type 1, 35 with DM type 2). Thyroid function was assessed by determining the basal concentrations of thyroid-stimulating hormone and the free thyroxine fraction. Results. Patients with DM type 2 and hypothyroidism belonged to an older age group than patients with DM type 1 and hypothyroidism: the age of patients with DM type 1 and hypothyroidism was 35.3 ± 9.5 years, versus 47.6 ± 11.0 years in patients with type 2 DM and hypothyroidism. In all groups of patients, the percentage of women was much higher than that of men. Significant differences were detected in the amplitude of the glycemic index, namely its increase in patients with DM type 1 and hypothyroidism. When DM type 2 was combined with hypothyroidism, lipid metabolism indices were higher than in DM type 2 without thyroid disease. This confirms the effect of hypothyroidism on lipid metabolism and implies an increased risk of progression of cardiovascular events in the presence of both diseases. Conclusions. Among the examined patients, hypothyroidism occurred 2.4 times more often

  19. Observer-Based Robust Control of Uncertain Switched Fuzzy Systems with Combined Switching Controller

    Directory of Open Access Journals (Sweden)

    Hong Yang

    2013-01-01

Full Text Available The observer-based robust control for a class of switched fuzzy (SF) time-delay systems involving uncertainties and external disturbances is investigated in this paper. A switched fuzzy system, which differs from existing ones, is first employed to describe a nonlinear system. Next, a combined switching controller is proposed. The designed controller, based on the observer instead of the state information, integrates the advantages of both the switching controllers and the supplementary controllers while eliminating their disadvantages. The proposed controller provides good performance during the transient period, and the chattering effect is removed when the system state approaches the origin. A sufficient condition for the solvability of the robust control problem is given for the case in which the system state is not available. Since convex combination techniques are used to derive the delay-independent criteria, some subsystems are allowed to be unstable. Finally, various comparisons of the elaborated examples are conducted to demonstrate the effectiveness of the proposed control design approach.

  20. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to the high dimensionality of microarray data sets, the features are reduced using one of two filter-based feature selection methods, namely, mutual information and Fisher ratio. In the second phase, the Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. Qualitative Mutual Information makes the selected features more stable, and this stability helps to deal with data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.
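The relevance/redundancy trade-off behind the forward-selection phase can be sketched with a greedy correlation-based analogue (illustrative only; the paper uses Shapley indices and Qualitative Mutual Information, not Pearson correlation, and the scoring rule below is an assumption):

```python
def pearson(a, b):
    # plain Pearson correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def greedy_select(features, target, k):
    """features: dict name -> list of values; target: list of values.
    Greedily adds the feature maximizing |corr(feature, target)| minus
    the mean |corr| with already-selected features (an mRMR-style score)."""
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for name, vals in features.items():
            if name in selected:
                continue
            relevance = abs(pearson(vals, target))
            redundancy = (sum(abs(pearson(vals, features[s])) for s in selected)
                          / len(selected)) if selected else 0.0
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        selected.append(best)
    return selected

# f1 and f2 carry the same signal; f3 adds complementary information
target = [3, 0, 5, 2, 7, 4, 9, 6]                # roughly f1 + 2*f3
features = {
    "f1": [1, 2, 3, 4, 5, 6, 7, 8],
    "f2": [2, 4, 6, 8, 10, 12, 14, 16],           # duplicate of f1
    "f3": [1, -1, 1, -1, 1, -1, 1, -1],
}
selected = greedy_select(features, target, 2)
```

The redundancy penalty is what keeps the duplicated feature out of the final subset, mirroring the stability argument made for QMI above.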

  1. Robustness of Input features from Noisy Silhouettes in Human Pose Estimation

    DEFF Research Database (Denmark)

    Gong, Wenjuan; Fihl, Preben; Gonzàlez, Jordi

    2014-01-01

. In this paper, we explore this problem. First, we compare the performances of several image features widely used for human pose estimation against each other and select the one with the best performance. Second, the iterative closest point algorithm is introduced for a new quantitative...... of silhouette samples of different noise levels and compare with the selected feature on a public dataset: Human Eva dataset.

  2. IVS Combination Center at BKG - Robust Outlier Detection and Weighting Strategies

    Science.gov (United States)

    Bachmann, S.; Lösler, M.

    2012-12-01

Outlier detection plays an important role within the IVS combination. Even if the original data are the same for all contributing Analysis Centers (ACs), the analyzed data show differences due to analysis software characteristics. The treatment of outliers is thus a fine line between preserving data heterogeneity and eliminating real outliers. Robust outlier detection based on the Least Median of Squares (LMS) is used within the IVS combination. This method allows reliable outlier detection with a small number of input parameters. A similar problem arises for the weighting of the individual solutions within the combination process. Variance component estimation (VCE) is used to control the weighting factor for each AC. The Operator-Software-Impact (OSI) method takes into account that the analyzed data are strongly influenced by the software and the responsible operator. It allows the VCE to be made more sensitive to the diverse input data. This method has already been set up within GNSS data analysis as well as the analysis of troposphere data. The benefit of an OSI realization within the VLBI combination and its potential in weighting factor determination has not been investigated before.
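The least-median-of-squares idea behind the robust outlier detection can be sketched in one dimension (an illustrative analogue, not the IVS combination software; taking candidates from the sample itself, the 1.4826 scale factor, and the 2.5-sigma cut are common conventions assumed here):

```python
def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def lms_outliers(xs, cut=2.5):
    """Flags points far from the LMS location estimate.  Unlike the mean,
    the median of squared residuals is insensitive to up to ~50% gross
    outliers, so the estimate is not dragged toward them."""
    # LMS location: the sample candidate minimizing the median squared residual
    c = min(xs, key=lambda cand: median([(x - cand) ** 2 for x in xs]))
    # robust scale estimate (1.4826 rescales MAD-like quantities to sigma)
    scale = 1.4826 * median([(x - c) ** 2 for x in xs]) ** 0.5
    return [x for x in xs if abs(x - c) > cut * scale]

flagged = lms_outliers([10.1, 9.9, 10.0, 10.2, 9.8, 25.0])  # -> [25.0]
```

A least-squares mean over the same sample would sit near 12.5 and might not flag the gross outlier at all, which is why LMS-type estimators suit heterogeneous multi-AC input.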

  3. Towards binary robust fast features using the comparison of pixel blocks

    International Nuclear Information System (INIS)

    Oszust, Mariusz

    2016-01-01

Binary descriptors have become popular in many vision-based applications as a fast and efficient replacement for their heavyweight floating-point counterparts. They achieve short computation times and a low memory footprint thanks to many simplifications. Consequently, their robustness against a variety of image transformations is lowered, since they rely on pairwise pixel-intensity comparisons. This observation has led to the emergence of techniques performing tests on the intensities of predefined pixel regions. These approaches, despite a visible improvement in the quality of the obtained results, suffer from long computation times, and their patch-partitioning strategies produce long binary strings requiring the use of salient-bit detection techniques. In this paper, a novel binary descriptor is proposed to address these shortcomings. The approach selects image patches around a keypoint, divides them into a small number of pixel blocks, and performs binary tests on gradients determined for the blocks. The size of each patch depends on the keypoint's scale. The robustness and distinctiveness of the descriptor are evaluated on five demanding image benchmarks. The experimental results show that the proposed approach is faster to compute, produces a short binary string, and offers better performance than state-of-the-art binary and floating-point descriptors. (paper)
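The block-gradient binary tests can be made concrete with a toy descriptor (a hedged sketch, not the proposed descriptor; the block layout and exhaustive pairwise comparison scheme are assumptions): the patch is divided into pixel blocks, a summed intensity gradient is computed per block, and each bit records which of two block-gradient magnitudes is larger.

```python
import random

def block_descriptor(patch, blocks=4):
    """patch: square 2-D list of intensities, side divisible by `blocks`.
    Returns a bit list from pairwise comparisons of per-block gradient
    magnitudes (toy version of block-based binary tests)."""
    n = len(patch)
    step = n // blocks
    grads = []
    for bi in range(blocks):
        for bj in range(blocks):
            gx = gy = 0.0  # summed horizontal / vertical differences
            for i in range(bi * step, (bi + 1) * step):
                for j in range(bj * step, (bj + 1) * step):
                    if j + 1 < n:
                        gx += patch[i][j + 1] - patch[i][j]
                    if i + 1 < n:
                        gy += patch[i + 1][j] - patch[i][j]
            grads.append((gx * gx + gy * gy) ** 0.5)
    # one bit per unordered block pair: is block a's gradient the larger one?
    return [1 if grads[a] > grads[b] else 0
            for a in range(len(grads)) for b in range(a + 1, len(grads))]

random.seed(7)
patch = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
brighter = [[v + 10 for v in row] for row in patch]  # global brightness shift
desc = block_descriptor(patch)
```

Because bits come from gradients rather than raw intensities, a uniform brightness change leaves the descriptor unchanged, which hints at the robustness gain over plain intensity comparisons.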

  4. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical...... instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities.A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using...... variables with narrow definition intervals for the exchange of information between the cycle model and the component models.The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems....

  5. Robust features of future climate change impacts on sorghum yields in West Africa

    International Nuclear Information System (INIS)

    Sultan, B; Guan, K; Lobell, D B; Kouressy, M; Biasutti, M; Piani, C; Hammer, G L; McLean, G

    2014-01-01

West Africa is highly vulnerable to climate hazards and better quantification and understanding of the impact of climate change on crop yields are urgently needed. Here we provide an assessment of near-term climate change impacts on sorghum yields in West Africa and account for uncertainties both in future climate scenarios and in crop models. Towards this goal, we use simulations of nine bias-corrected CMIP5 climate models and two crop models (SARRA-H and APSIM) to evaluate the robustness of projected crop yield impacts in this area. In broad agreement with the full CMIP5 ensemble, our subset of bias-corrected climate models projects a mean warming of +2.8 °C in the decades of 2031–2060 compared to a baseline of 1961–1990 and a robust change in rainfall in West Africa with less rain in the Western part of the Sahel (Senegal, South-West Mali) and more rain in Central Sahel (Burkina Faso, South-West Niger). Projected rainfall deficits are concentrated in early monsoon season in the Western part of the Sahel while positive rainfall changes are found in late monsoon season all over the Sahel, suggesting a shift in the seasonality of the monsoon. In response to such climate change, but without accounting for direct crop responses to CO2, mean crop yield decreases by about 16–20% and year-to-year variability increases in the Western part of the Sahel, while the eastern domain sees much milder impacts. Such differences in climate and impacts projections between the Western and Eastern parts of the Sahel are highly consistent across the climate and crop models used in this study. We investigate the robustness of impacts for different choices of cultivars, nutrient treatments, and crop responses to CO2. Adverse impacts on mean yield and yield variability are lowest for modern cultivars, as their short and nearly fixed growth cycle appears to be more resilient to the seasonality shift of the monsoon, thus suggesting shorter season varieties could be considered a

  6. Robust features of future climate change impacts on sorghum yields in West Africa

    Science.gov (United States)

    Sultan, B.; Guan, K.; Kouressy, M.; Biasutti, M.; Piani, C.; Hammer, G. L.; McLean, G.; Lobell, D. B.

    2014-10-01

    West Africa is highly vulnerable to climate hazards and better quantification and understanding of the impact of climate change on crop yields are urgently needed. Here we provide an assessment of near-term climate change impacts on sorghum yields in West Africa and account for uncertainties both in future climate scenarios and in crop models. Towards this goal, we use simulations of nine bias-corrected CMIP5 climate models and two crop models (SARRA-H and APSIM) to evaluate the robustness of projected crop yield impacts in this area. In broad agreement with the full CMIP5 ensemble, our subset of bias-corrected climate models projects a mean warming of +2.8 °C in the decades of 2031-2060 compared to a baseline of 1961-1990 and a robust change in rainfall in West Africa with less rain in the Western part of the Sahel (Senegal, South-West Mali) and more rain in Central Sahel (Burkina Faso, South-West Niger). Projected rainfall deficits are concentrated in early monsoon season in the Western part of the Sahel while positive rainfall changes are found in late monsoon season all over the Sahel, suggesting a shift in the seasonality of the monsoon. In response to such climate change, but without accounting for direct crop responses to CO2, mean crop yield decreases by about 16-20% and year-to-year variability increases in the Western part of the Sahel, while the eastern domain sees much milder impacts. Such differences in climate and impacts projections between the Western and Eastern parts of the Sahel are highly consistent across the climate and crop models used in this study. We investigate the robustness of impacts for different choices of cultivars, nutrient treatments, and crop responses to CO2. 
Adverse impacts on mean yield and yield variability are lowest for modern cultivars, as their short and nearly fixed growth cycle appears to be more resilient to the seasonality shift of the monsoon, thus suggesting shorter season varieties could be considered a potential

  7. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  8. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

Full Text Available Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic
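The Adaboost-based decision fusion can be sketched with a minimal stump-boosting implementation over two detector scores (an illustrative analogue, not the paper's trained system; the two-feature setup, exhaustive stump search, and synthetic labels are assumptions):

```python
import math

def stump_predict(stump, x):
    # decision stump: +/-sign depending on one feature vs. a threshold
    feat, thresh, sign = stump
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=30):
    """X: list of score vectors (e.g. [sar_score, ir_score]);
    y: labels in {-1, +1}.  Exhaustive decision stumps as weak learners."""
    n = len(X)
    w = [1.0 / n] * n  # example weights, re-focused on errors each round
    ensemble = []
    for _ in range(rounds):
        best_err, best_stump = None, None
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    stump = (feat, thresh, sign)
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict(stump, x) != yi)
                    if best_err is None or err < best_err:
                        best_err, best_stump = err, stump
        best_err = min(max(best_err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best_stump))
        # upweight misclassified examples, then renormalize
        w = [wi * math.exp(-alpha * yi * stump_predict(best_stump, x))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * stump_predict(s, x) for a, s in ensemble) > 0 else -1
```

Trained on synthetic (SAR score, IR score) pairs where either sensor alone misses part of the target class, the weighted vote of stumps learns a fused decision rule neither single-sensor threshold can express.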

  9. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but at the cost of a high false alarm rate caused by background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection performs poorly on SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
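    The AMCF post-filter above builds on standard grayscale morphological closing (dilation followed by erosion), which merges fragmented detector responses from extended targets. A minimal 1-D sketch with an illustrative window size and signal, not the paper's asymmetric structuring element:

```python
# Grayscale morphological closing in 1-D: max-filter (dilation) then
# min-filter (erosion) over a sliding window. Window size and signal are
# illustrative, not taken from the paper.
def dilate(signal, size):
    half = size // 2
    return [max(signal[max(0, i - half):i + half + 1]) for i in range(len(signal))]

def erode(signal, size):
    half = size // 2
    return [min(signal[max(0, i - half):i + half + 1]) for i in range(len(signal))]

def closing(signal, size=3):
    return erode(dilate(signal, size), size)

# A target response split by a one-sample gap is bridged by closing:
response = [0, 0, 5, 5, 0, 5, 5, 0, 0]
print(closing(response, 3))  # → [0, 0, 5, 5, 5, 5, 5, 0, 0]
```

    Closing bridges small gaps inside a target response, which is how extended targets survive a detector originally tuned for small ones.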

  10. BFROST: binary features from robust orientation segment tests accelerated on the GPU

    CSIR Research Space (South Africa)

    Cronje, J

    2011-11-01

    Full Text Available purpose parallel algorithms. The CUDA (Compute Unified Device Architecture) [1] framework from NVidia provides a programmable interface for GPUs. FAST (Features from Accelerated Segment Tests) [2], [3] is one of the fastest and most reliable corner... runs. Our detector detects slightly more keypoints because the decision tree of FAST does not perform a complete segment test. Timing comparisons were performed on an NVidia GeForce GTX 460 for our GPU implementation and on an Intel Core i7 2.67 GHz...
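    The segment test underlying FAST (and BFROST) classifies a pixel as a corner when n contiguous pixels on a 16-pixel Bresenham circle are all brighter or all darker than the centre by a threshold. A minimal CPU sketch of the FAST-9 variant (n = 9); the offsets are the standard radius-3 circle, the toy image is mine, and real FAST adds a trained decision tree and non-maximal suppression:

```python
# Standard radius-3 Bresenham circle offsets used by FAST, in ring order.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def segment_test(img, x, y, t=20, n=9):
    # Caller must keep (x, y) at least 3 pixels away from the image border.
    centre = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (1, -1):                 # brighter arc, then darker arc
        flags = [sign * (p - centre) > t for p in ring]
        run = 0
        for f in flags + flags:          # list doubled to catch wrap-around runs
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# 11x11 toy image: bright square occupying the lower-right quadrant.
img = [[255 if (r >= 5 and c >= 5) else 0 for c in range(11)] for r in range(11)]
print(segment_test(img, 5, 5), segment_test(img, 7, 7))  # corner vs. flat interior
```

    At the square's corner 11 contiguous circle pixels are darker than the centre, so the test fires; in the flat interior no run reaches 9.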

  11. Learning Combinations of Multiple Feature Representations for Music Emotion Prediction

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2015-01-01

    Music consists of several structures and patterns evolving through time which greatly influence the human decoding of higher-level cognitive aspects of music, such as the emotions expressed in music. For tasks such as genre, tag, and emotion recognition, these structures have often been identified...... and used as individual and non-temporal features and representations. In this work, we address the hypothesis of whether using multiple temporal and non-temporal representations of different features is beneficial for modeling music structure with the aim to predict the emotions expressed in music. We test...

  12. EPIDEMIOLOGICAL AND CLINICAL FEATURES OF COMBINED RESPIRATORY INFECTIONS IN CHILDREN

    Directory of Open Access Journals (Sweden)

    V. V. Shkarin

    2017-01-01

    Full Text Available This paper presents a review of publications on the problem of combined respiratory infections among children. Viral-bacterial associations are registered in 51.7% of frequently ill children. More than half of the patients have herpesvirus infection in various combinations. Combined acute respiratory viral infection among children aged 2 to 6 years was noted in 44.2% of cases; in addition to influenza viruses, RS-, adeno- and other viruses, metapneumovirus and bocavirus play an important role. An increase in the severity of acute respiratory viral infection with combined infection, including chlamydia and mycoplasma infection, is shown. A longer and more severe course of whooping cough was observed when it was combined with respiratory viruses. The observed frequency of combined respiratory infections in children, and the severity and duration of their course, with the development of various complications and the formation of chronic pathology, dictate the need to improve the diagnosis and treatment tactics for these forms of infection.

  13. Combination of surface and borehole seismic data for robust target-oriented imaging

    Science.gov (United States)

    Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees

    2016-05-01

    A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.

  14. Features of Type 2 Diabetes Mellitus in Combination with Hypothyroidism

    OpenAIRE

    T.Yu. Yuzvenko

    2015-01-01

    Background. The last decades are characterized by a considerable increase in the prevalence of endocrine disorders, with a change in their structure and, above all, in cases of polyendocrinopathy, a special place among which is occupied by the combination of diabetes mellitus (DM) and thyroid diseases. The increasing incidence of type 2 DM associated with hypothyroidism affects the clinical course of this pathology and remains a topical problem of modern medical science. The objective: to study the...

  15. SU-F-R-31: Identification of Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induced Lung Disease

    Energy Technology Data Exchange (ETDEWEB)

    Choi, W; Riyahi, S; Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume. Methods: The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres placed at the upper lobe and lower lobe, respectively, in the normal lung (contralateral to tumor). 27 texture features (9 from the intensity histogram, 8 from the grey-level co-occurrence matrix [GLCM] and 10 from the grey-level run-length matrix [GLRM]) were extracted from [normal lung-GTV]. To measure the variability of a feature F, the relative difference D = |Fref - Fsim|/Fref*100% was calculated, where Fref was for the entire normal lung and Fsim was for [normal lung-GTV]. A feature was considered robust if the largest non-outlier (Q3+1.5*IQR) D was less than 5%, and considered not correlated with normal lung volume when their Pearson correlation was lower than 0.50. Results: Only 11 features were robust. All first-order intensity-histogram features (mean, max, etc.) were robust, while most higher-order features (skewness, kurtosis, etc.) were unrobust. Only two of the GLCM and four of the GLRM features were robust. Larger GTVs resulted in greater feature variation; this was particularly true for unrobust features. All robust features were uncorrelated with normal lung volume, while three unrobust features showed high correlation. Excessive variations were observed in two low grey-level run features and were later identified to come from one patient with local lung disease (atelectasis) in the normal lung. There was no dependence on GTV location. Conclusion: We identified 11 robust normal lung CT texture features that can be further examined for the prediction of radiation-induced lung disease. Interestingly, low grey-level run features identified normal
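    The robustness criterion above can be sketched directly: a feature is robust if the largest non-outlier relative difference D = |Fref - Fsim|/Fref*100% stays below 5%, with non-outliers defined by the Tukey fence Q3 + 1.5*IQR. The quartile interpolation scheme and the numbers are illustrative assumptions:

```python
# Robustness test for a texture feature: compare the reference value (whole
# normal lung) against values after simulated GTV spheres are carved out.
def quartiles(values):
    s = sorted(values)
    def q(p):                                    # linear-interpolated quantile
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def is_robust(f_ref, f_sims, limit=5.0):
    d = [abs(f_ref - f) / abs(f_ref) * 100.0 for f in f_sims]   # D in percent
    q1, q3 = quartiles(d)
    fence = q3 + 1.5 * (q3 - q1)                 # Tukey outlier fence
    non_outliers = [x for x in d if x <= fence]
    return max(non_outliers) < limit

# Mean intensity barely moves when spheres are removed -> robust.
print(is_robust(100.0, [99.2, 100.5, 98.9, 101.1, 99.8]))  # → True
```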

  16. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. Research shows that brain activity can be monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted features using a variety of strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers by considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights, and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. For training/testing, tenfold cross-validation was employed, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The Support Vector Machine linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weighting gave higher performance at different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
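    The KNN configuration the abstract reports as best, city-block (L1) distance with inverse squared distance weighting, can be sketched in a few lines. The two-feature training samples and labels are illustrative, not EEG features from the study:

```python
# Toy KNN with city-block distance and inverse squared distance weights.
def cityblock(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, query, k=3, eps=1e-9):
    # train: list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda s: cityblock(s[0], query))[:k]
    votes = {}
    for vec, label in nearest:
        w = 1.0 / (cityblock(vec, query) ** 2 + eps)   # inverse squared weight
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

train = [([0.1, 0.2], "interictal"), ([0.2, 0.1], "interictal"),
         ([0.9, 1.0], "ictal"), ([1.0, 0.8], "ictal"), ([0.95, 0.9], "ictal")]
print(knn_predict(train, [0.15, 0.15]))  # → interictal
```

    The inverse squared weighting lets two very close neighbours outvote a third, more distant one even when the raw class counts are tied.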

  17. New Analysis Method Application in Metallographic Images through the Construction of Mosaics Via Speeded Up Robust Features and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Pedro Pedrosa Rebouças Filho

    2015-06-01

    results and expediting the decision making process. Two different methods are proposed: one using the Scale Invariant Feature Transform (SIFT), and the second using the feature extractor Speeded Up Robust Features (SURF). Although slower, the SIFT method is more stable, has better performance than the SURF method, and can be applied to real applications. The best results were obtained using SIFT, with Peak Signal-to-Noise Ratio = 61.38, Mean squared error = 0.048, mean structural similarity = 0.999, and a processing time of 4.91 seconds for mosaic building. The proposed methodology shows promise in aiding specialists during the analysis of metallographic images.

  18. Gas Classification Using Combined Features Based on a Discriminant Analysis for an Electronic Nose

    Directory of Open Access Journals (Sweden)

    Sang-Il Choi

    2016-01-01

    Full Text Available This paper proposes a gas classification method for an electronic nose (e-nose) system, which uses combined features configured through discriminant analysis. First, each global feature is extracted from the entire measurement section of the data samples, while the same process is applied to the local features of the sections that correspond to the stabilization, exposure, and purge stages. The amount of discriminative information in each individual feature is then measured based on the discriminant analysis, and the combined features are subsequently composed by selecting the features that carry a large amount of discriminative information. On a variety of volatile organic compound data, the experimental results show that, in a noisy environment, the proposed method exhibits classification performance that compares favorably with the other feature types.
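    Scoring individual features by a discriminant criterion and keeping the top scorers can be sketched as follows. The paper uses discriminant analysis; scoring each feature with the closely related Fisher ratio (between-class over within-class variance) is an assumption on my part, and the feature names and values are invented:

```python
# Score each candidate feature by its Fisher ratio, then keep the top-k as
# the "combined" feature set.
def fisher_score(values_by_class):
    all_vals = [v for vals in values_by_class for v in vals]
    grand = sum(all_vals) / len(all_vals)
    between = sum(len(v) * (sum(v) / len(v) - grand) ** 2 for v in values_by_class)
    within = sum(sum((x - sum(v) / len(v)) ** 2 for x in v) for v in values_by_class)
    return between / within if within else float("inf")

def select_features(feature_table, top_k=2):
    # feature_table: {feature_name: [class0_values, class1_values, ...]}
    scored = sorted(feature_table, key=lambda f: fisher_score(feature_table[f]),
                    reverse=True)
    return scored[:top_k]

features = {
    "global_peak":   [[1.0, 1.1, 0.9], [3.0, 3.2, 2.9]],   # well separated
    "purge_slope":   [[0.5, 0.6, 0.4], [0.9, 1.0, 0.8]],   # moderately separated
    "noise_channel": [[2.0, 5.0, 3.0], [2.5, 4.5, 3.5]],   # overlapping
}
print(select_features(features, top_k=2))  # → ['global_peak', 'purge_slope']
```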

  19. Attention in the processing of complex visual displays: detecting features and their combinations.

    Science.gov (United States)

    Farell, B

    1984-02-01

    The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlay the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.

  20. Biomedical imaging modality classification using combined visual features and textual terms.

    Science.gov (United States)

    Han, Xian-Hua; Chen, Yen-Wei

    2011-01-01

    We describe an approach for the automatic modality classification in medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper is focused on the process of feature extraction from medical images and fuses the different extracted visual features and textual feature for modality classification. To extract visual features from the images, we used histogram descriptor of edge, gray, or color intensity and block-based variation as global features and SIFT histogram as local feature. For textual feature of image representation, the binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easy misclassified modality pairs such as CT and MR or PET and NM modalities, a local classifier is used for distinguishing samples in the pair modality to improve performance. The proposed strategy is evaluated with the provided modality dataset by ImageCLEF 2010.
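    The kernel-level fusion described above, one kernel per feature type, each normalised so that K(x, x) = 1, then summed before classification, can be sketched directly. Linear base kernels and equal weights are assumptions here; the feature vectors are invented:

```python
# Normalised-kernel fusion: compute one kernel per feature type (edge
# histogram, SIFT histogram, text terms, ...), cosine-normalise each, sum.
def linear_kernel(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalised(k, a, b):
    return k(a, b) / (k(a, a) * k(b, b)) ** 0.5   # K(x, x) becomes 1

def combined_kernel(feature_sets_a, feature_sets_b):
    # feature_sets_*: one vector per feature type, in the same order
    return sum(normalised(linear_kernel, a, b)
               for a, b in zip(feature_sets_a, feature_sets_b))

img1 = [[1.0, 0.0, 2.0], [0.5, 0.5]]       # e.g. one visual + one textual vector
img2 = [[2.0, 0.0, 4.0], [0.5, 0.5]]       # same direction in both spaces
print(combined_kernel(img1, img2))  # → 2.0 (both normalised similarities are 1)
```

    Normalising before summing keeps one feature type with large raw magnitudes from dominating the combined similarity handed to the SVM.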

  1. Exploring the potential of combining participative backcasting and exploratory scenarios for robust strategies

    NARCIS (Netherlands)

    Bruin, de Jilske Olda; Kok, Kasper; Hoogstra-Klein, Marjanke Alberttine

    2017-01-01

    Literature critiques current predictive scenario approaches applied in the forest sector. Backcasting, a means to create normative scenarios, seems promising but is sparsely used. Combining backcasting with exploratory scenarios (a combined scenario approach) seems appropriate to address these

  2. Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming

    Science.gov (United States)

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas A.; Summers, Ronald M.

    2009-02-01

    Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer-aided detection system provides a feasible approach for improving colonic polyp detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then, in order to make full use of the traditional features (defined as group A) and the new features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel from the combined set of features. We performed a leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 6-9mm polyp detections). Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a false positive rate of 7 per patient, the sensitivity on 6-9mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p<=0.01).

  3. ROMANCE: A new software tool to improve data robustness and feature identification in CE-MS metabolomics.

    Science.gov (United States)

    González-Ruiz, Víctor; Gagnebin, Yoric; Drouin, Nicolas; Codesido, Santiago; Rudaz, Serge; Schappler, Julie

    2018-05-01

    The use of capillary electrophoresis coupled to mass spectrometry (CE-MS) in metabolomics remains an oddity compared to the widely adopted use of liquid chromatography. This technique is traditionally regarded as lacking the reproducibility to adequately identify metabolites by their migration times. The major reason is the variability of the velocity of the background electrolyte, mainly coming from shifts in the magnitude of the electroosmotic flow and from the suction caused by electrospray interfaces. The use of the effective electrophoretic mobility is one solution to overcome this issue as it is a characteristic feature of each compound. To date, such an approach has not been applied to metabolomics due to the complexity and size of CE-MS data obtained in such studies. In this paper, ROMANCE (RObust Metabolomic Analysis with Normalized CE) is introduced as a new software for CE-MS-based metabolomics. It allows the automated conversion of batches of CE-MS files with minimal user intervention. ROMANCE converts the x-axis of each MS file from the time into the effective mobility scale and the resulting files are already pseudo-aligned, present normalized peak areas and improved reproducibility, and can eventually follow existing metabolomic workflows. The software was developed in Scala, so it is multi-platform and computationally-efficient. It is available for download under a CC license. In this work, the versatility of ROMANCE was demonstrated by using data obtained in the same and in different laboratories, as well as its application to the analysis of human plasma samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
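    The x-axis rescaling ROMANCE performs rests on the standard CE relation between migration time and effective electrophoretic mobility. A sketch with illustrative instrument numbers; the software's internal implementation may differ:

```python
# Effective mobility from migration times:
#   mu_eff = (Ld * Lt / V) * (1/t_m - 1/t_eof)
# where Ld is the capillary length to the detector, Lt the total length,
# V the separation voltage, t_m the analyte migration time and t_eof the
# EOF marker time. All values below are illustrative.
def effective_mobility(t_m, t_eof, l_det, l_tot, voltage):
    return (l_det * l_tot / voltage) * (1.0 / t_m - 1.0 / t_eof)

# 64.5 cm capillary, 56 cm to detector, 30 kV, times in seconds.
mu = effective_mobility(t_m=300.0, t_eof=240.0, l_det=0.56, l_tot=0.645,
                        voltage=30000.0)
# Negative sign: the analyte migrates against the EOF (anionic character),
# since it arrives after the EOF marker.
print(f"{mu:.3e} m^2/(V.s)")
```

    Because the mobility is a property of the compound rather than of the run, two runs with different EOF magnitudes land on the same rescaled axis, which is what makes the converted files pseudo-aligned.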

  4. Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

    Directory of Open Access Journals (Sweden)

    Xian-Hua Han

    2011-01-01

    extraction from medical images and fuses the different extracted visual features and textual feature for modality classification. To extract visual features from the images, we used histogram descriptor of edge, gray, or color intensity and block-based variation as global features and SIFT histogram as local feature. For textual feature of image representation, the binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easy misclassified modality pairs such as CT and MR or PET and NM modalities, a local classifier is used for distinguishing samples in the pair modality to improve performance. The proposed strategy is evaluated with the provided modality dataset by ImageCLEF 2010.

  5. Chinese wine classification system based on micrograph using combination of shape and structure features

    Science.gov (United States)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded by their micrographs. Micrographs of Chinese wines show floccules, sticks and granules of varying shape and size. Different wines have different microstructures and micrographs; we study the classification of Chinese wines based on the micrographs. The shape and structure of a wine's particles in its microstructure are the most important features for the recognition and classification of wines. We therefore introduce a feature extraction method which can describe the structure and region shape of a micrograph efficiently. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Features are then extracted using the method proposed in this paper based on area, perimeter and traditional shape features; eight kinds totaling 26 features are selected. Finally, a Chinese wine classification system based on micrographs, using a combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features or the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.
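    The segmentation step starts from Otsu's threshold, which maximises the between-class variance of a grey-level histogram; the paper's Rayleigh-distribution modification is omitted in this standard sketch, and the toy bimodal image is invented:

```python
# Classic Otsu threshold over a grey-level histogram: pick the threshold t
# that maximises w_b * w_f * (mu_b - mu_f)^2.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                       # background weight up to t
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        w_f = total - w_b                    # foreground weight
        mu_b, mu_f = sum_b / w_b, (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

pixels = [20] * 50 + [25] * 30 + [200] * 40 + [210] * 20   # bimodal toy image
print(otsu_threshold(pixels))  # → 25 (separates the dark and bright modes)
```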

  6. Feature study of hysterical blindness EEG based on FastICA with combined-channel information.

    Science.gov (United States)

    Qin, Xuying; Wang, Wei; Hu, Lintao; Wang, Xu; Yuan, Xiaojie

    2015-01-01

    An appropriate feature study of hysteria electroencephalograms (EEG) would provide new insights into the neural mechanisms of the disease and also improve patient diagnosis and management. The objective of this paper is to provide an explanation for what causes a particular visual loss by associating the features of hysterical blindness EEG with brain function. A novel feature extraction approach for hysterical blindness EEG, utilizing combined-channel information, was applied in this paper. After the channels had been combined, sliding-window FastICA was applied to process the combined normal EEG and hysteria EEG, respectively. Kurtosis features were calculated from the processed signals. As the comparison feature, the power spectral density of normal and hysteria EEG was computed. According to the feature analysis results, a region of brain dysfunction was located at the occipital lobe, O1 and O2. Furthermore, a new abnormality was found at the parietal lobe, C3, C4, P3, and P4, which provided us with a new perspective for understanding hysterical blindness. As indicated by the kurtosis results, which were consistent with brain function and the clinical diagnosis, our method was found to be a useful tool for capturing features in hysterical blindness EEG.
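    The kurtosis feature above can be sketched on sliding windows of a signal: heavy-tailed, spiky windows give large kurtosis while flat oscillatory ones give low values. Window length and signal are illustrative, and the FastICA unmixing step is omitted:

```python
# Excess kurtosis per sliding window: m4 / m2^2 - 3 (0 for a Gaussian).
def kurtosis(window):
    n = len(window)
    mean = sum(window) / n
    m2 = sum((x - mean) ** 2 for x in window) / n
    m4 = sum((x - mean) ** 4 for x in window) / n
    return m4 / (m2 ** 2) - 3.0

def sliding_kurtosis(signal, width, step):
    return [kurtosis(signal[i:i + width])
            for i in range(0, len(signal) - width + 1, step)]

flat  = [1.0, -1.0] * 8                   # square-wave window: kurtosis -2
spiky = [0.0] * 15 + [10.0]               # one large outlier: heavy tail
print(sliding_kurtosis(flat + spiky, width=16, step=16))
```

    The first window scores -2.0 (sub-Gaussian square wave) and the second scores above 10, so thresholding the windowed kurtosis flags spiky, outlier-dominated segments.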

  7. Classification of Carotid Plaque Echogenicity by Combining Texture Features and Morphologic Characteristics.

    Science.gov (United States)

    Huang, Xiaowei; Zhang, Yanling; Qian, Ming; Meng, Long; Xiao, Yang; Niu, Lili; Zheng, Rongqin; Zheng, Hairong

    2016-10-01

    Anechoic carotid plaques on sonography have been used to predict future cardiovascular or cerebrovascular events. The purpose of this study was to investigate whether carotid plaque echogenicity could be assessed objectively by combining texture features extracted by MaZda software (Institute of Electronics, Technical University of Lodz, Lodz, Poland) and morphologic characteristics, which may provide a promising method for early prediction of acute cardiovascular disease. A total of 268 plaque images were collected from 136 volunteers and classified into 85 hyperechoic, 83 intermediate, and 100 anechoic plaques. About 300 texture features were extracted from histogram, absolute gradient, run-length matrix, gray-level co-occurrence matrix, autoregressive model, and wavelet transform algorithms by MaZda. The morphologic characteristics, including degree of stenosis, maximum plaque intima-media thickness, and maximum plaque length, were measured by B-mode sonography. Statistically significant features were selected by analysis of covariance. The most discriminative features were obtained from statistically significant features by linear discriminant analysis. The K-nearest neighbor classifier was used to classify plaque echogenicity based on statistically significant and most discriminative features. A total of 30 statistically significant features were selected among the plaques, and 2 most discriminative features were obtained from the statistically significant features. The classification accuracy rates for 3 types of plaques based on statistically significant and most discriminative features were 72.03% (κ= 0.571; P MaZda and morphologic characteristics.

  8. Improved medical image modality classification using a combination of visual and textual features.

    Science.gov (United States)

    Dimitrovski, Ivica; Kocev, Dragi; Kitanovski, Ivan; Loskovska, Suzana; Džeroski, Sašo

    2015-01-01

    In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual and one textual types of features, and combinations thereof. We used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histogram and scale-invariant feature transform (and its variant opponentSIFT) as visual features and the standard bag-of-words textual representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify the SIFT and opponentSIFT features as the best performing features for modality classification. Next, the low-level fusion of the visual features improves the predictive performance of the classifiers. This is because the different features are able to capture different aspects of an image, their combination offering a more complete representation of the visual content in an image. Moreover, adding textual features further increases the predictive performance. Finally, the results obtained with our approach are the best results reported on these databases so far. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Comparison of HMM experts with MLP experts in the Full Combination Multi-Band Approach to Robust ASR

    OpenAIRE

    Hagen, Astrid; Morris, Andrew

    2000-01-01

    In this paper we apply the Full Combination (FC) multi-band approach, which was originally introduced in the framework of posterior-based HMM/ANN (Hidden Markov Model/Artificial Neural Network) hybrid systems, to systems in which the ANN (or Multilayer Perceptron (MLP)) is itself replaced by a Multi Gaussian HMM (MGM). Both systems represent the most widely used statistical models for robust ASR (automatic speech recognition). It is shown how the FC formula for the likelihood-based MGMs...

  10. A robust control strategy for mitigating renewable energy fluctuations in a real hybrid power system combined with SMES

    Science.gov (United States)

    Magdy, G.; Shabib, G.; Elbaset, Adel A.; Qudaih, Yaser; Mitani, Yasunori

    2018-05-01

    Utilizing Renewable Energy Sources (RESs) is attracting great attention as a solution to future energy shortages. However, the irregular nature of RESs and random load deviations cause large frequency and voltage fluctuations. Therefore, in order to benefit from the maximum capacity of the RESs, a robust strategy for mitigating power fluctuations from RESs must be applied. Hence, this paper proposes a design of Load Frequency Control (LFC) coordinated with Superconducting Magnetic Energy Storage (SMES) technology (i.e., an auxiliary LFC), using an optimal PID controller tuned by Particle Swarm Optimization (PSO), in the Egyptian Power System (EPS) considering high penetration of photovoltaic (PV) power generation. Thus, from the perspective of LFC, the robust control strategy is proposed to maintain the nominal system frequency and mitigate the power fluctuations from RESs against all disturbance sources for the EPS in a multi-source environment. The EPS is decomposed into three dynamic subsystems, non-reheat, reheat and hydro power plants, taking system nonlinearity into consideration. Nonlinear Matlab/Simulink simulation results for the EPS combined with the SMES system, considering PV solar power, confirm that the proposed control strategy achieves robust stability by reducing transient time, minimizing frequency deviations, maintaining the system frequency, preventing conventional generators from exceeding their power ratings during load disturbances, and mitigating the power fluctuations from the RESs.
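    The discrete PID law whose three gains (Kp, Ki, Kd) the paper tunes with PSO can be sketched as follows. The gains, time step, and toy first-order plant are illustrative assumptions; the EPS/SMES dynamics are not modelled:

```python
# Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (integrator with unit droop) back to the
# 50 Hz setpoint after a 1 Hz frequency dip.
pid, freq, dt = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.1), 49.0, 0.1
for _ in range(500):
    u = pid.step(50.0 - freq)
    freq += dt * (u - (freq - 50.0))
print(round(freq, 3))
```

    PSO then searches the (Kp, Ki, Kd) space by simulating the closed loop and scoring each particle with an error index such as the integral of absolute frequency deviation.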

  11. Atorvastatin effect evaluation based on feature combination of three-dimension ultrasound images

    Science.gov (United States)

    Luo, Yongkang; Ding, Mingyue

    2016-03-01

    In the past decades, stroke has become a common worldwide cause of death and disability. It is well known that ischemic stroke is mainly caused by carotid atherosclerosis. As an inexpensive, convenient and fast means of detection, ultrasound technology is applied widely in the prevention and treatment of carotid atherosclerosis. Recently, many studies have focused on how to quantitatively evaluate the local arterial effects of medical treatment for carotid diseases. An evaluation method based on feature combination is therefore proposed to detect potential changes in the carotid arteries after atorvastatin treatment. A support vector machine (SVM) and a 10-fold cross-validation protocol were utilized on a database of 5533 carotid ultrasound images from 38 patients (17 in the atorvastatin group and 21 in the placebo group) at baseline and after 3 months of treatment. With combinatorial optimization over many features (including morphological and texture features), the evaluation results of single features and different combined features were compared. The experimental results showed that single features perform poorly while the best feature combination has good recognition ability, with accuracy 92.81%, sensitivity 80.95%, specificity 95.52%, positive predictive value 80.47%, negative predictive value 95.65%, Matthews correlation coefficient 76.27%, and Youden's index 76.48%. The receiver operating characteristic (ROC) analysis also performed well, with an area under the ROC curve (AUC) of 0.9663, better than the 0.9423 obtained using all features. Thus, it is shown that this novel method can reliably and accurately evaluate the effect of atorvastatin treatment.
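    The evaluation metrics quoted above all derive from a binary confusion matrix, which can be sketched directly. The counts below are illustrative, not those of the atorvastatin study:

```python
# Binary-classification metrics from confusion-matrix counts.
def metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    ppv  = tp / (tp + fp)                      # positive predictive value
    npv  = tn / (tn + fn)                      # negative predictive value
    mcc  = ((tp * tn - fp * fn)
            / ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    youden = sens + spec - 1.0                 # Youden's index
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "mcc": mcc, "youden": youden}

print(metrics(tp=17, fp=4, tn=85, fn=4))
```

    Reporting the Matthews correlation coefficient and Youden's index alongside accuracy guards against the class imbalance here (many more placebo-like images than responders).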

  12. Combining Multiple Features for Text-Independent Writer Identification and Verification

    OpenAIRE

    Bulacu , Marius; Schomaker , Lambert

    2006-01-01

    http://www.suvisoft.com; In recent years, we proposed a number of new and very effective features for automatic writer identification and verification. They are probability distribution functions (PDFs) extracted from the handwriting images and characterize writer individuality independently of the textual content of the written samples. In this paper, we perform an extensive analysis of feature combinations. In our fusion scheme, the final unique distance between two handwritten samples is c...

  13. Improving causal inference with a doubly robust estimator that combines propensity score stratification and weighting.

    Science.gov (United States)

    Linden, Ariel

    2017-08-01

    When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effects estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model one or the other components (eg, propensity score or outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs equally as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
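    The doubly robust idea above can be illustrated with the classic augmented inverse-probability-weighted (AIPW) estimator of the average treatment effect. This is a generic DR sketch with made-up numbers, not the paper's DR-MMWS, which replaces the plain propensity weights with marginal mean weights through stratification:

```python
# AIPW: combine an outcome model (mu1, mu0) with propensity weighting; the
# estimate is consistent if either model is correctly specified.
def aipw_ate(data):
    # data: list of (treated, outcome, propensity, mu1, mu0), where mu1/mu0
    # are outcome-model predictions under treatment/control
    n = len(data)
    total = 0.0
    for a, y, e, mu1, mu0 in data:
        term1 = mu1 + (a / e) * (y - mu1)              # augmented treated mean
        term0 = mu0 + ((1 - a) / (1 - e)) * (y - mu0)  # augmented control mean
        total += term1 - term0
    return total / n

data = [(1, 10.0, 0.5, 9.0, 6.0), (0, 5.0, 0.5, 9.0, 6.0),
        (1, 12.0, 0.8, 11.0, 7.0), (0, 8.0, 0.2, 11.0, 7.0)]
print(aipw_ate(data))  # → 4.5
```

    The correction terms vanish when the outcome model is exact, and the weighting rescues the estimate when it is not, which is the "double robustness" the paper builds on.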

  14. Classical gas: Hearty prices, robust demand combine to pump breezy optimism through 2005 forecasts

    International Nuclear Information System (INIS)

    Lunan, D.

    2005-01-01

2005 is expected to be a watershed year for natural gas, with a lengthy list of developments that could have a significant effect on the industry for many years to come. In light of continuing high demand and static supply prospects, prices will have to remain high in order to ensure the infrastructure investments needed to keep gas flowing from multiple sources to the consumer. It is predicted that against the backdrop of robust prices several supply initiatives will continue to advance rapidly in 2005, such as the $7 billion Mackenzie Gas Project, on which public hearings are expected to start this summer, along with regulatory clarity about the $20 billion Alaska Highway Natural Gas Pipeline Project to move North Slope gas to southern markets. Drilling of new gas wells will continue to approach or even surpass 18,000 wells, with an increasing number of these being coal-bed methane wells. Despite this high level of drilling activity, supply is expected to grow only about 400 MMcf per day. Greater supply increments are expected through continued LNG terminal development, although plans for new LNG terminals have met stiff resistance from local residents in both Canada and the United States. Imports of liquefied natural gas into the United States slowed dramatically in 2004 under severe short-term downward pressure on natural gas prices; nevertheless, these imports are expected to rebound to new record highs in 2005. Capacity is expected to climb from about 2.55 Bcf per day in 2004 to as much as 6.4 Bcf per day by late 2007. At least one Canadian import facility, Anadarko's one Bcf per day Bear Head terminal on Nova Scotia's Strait of Canso, is expected to become operational by late 2007 or early 2008. 6 photos

  15. Robust Management of Combined Heat and Power Systems via Linear Decision Rules

    DEFF Research Database (Denmark)

    Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik

    2014-01-01

    The heat and power outputs of Combined Heat and Power (CHP) units are jointly constrained. Hence, the optimal management of systems including CHP units is a multicommodity optimization problem. Problems of this type are stochastic, owing to the uncertainty inherent both in the demand for heat and...... linear decision rules to guarantee both tractability and a correct representation of the dynamic aspects of the problem. Numerical results from an illustrative example confirm the value of the proposed approach....

  16. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

Full Text Available In production processes the use of image processing systems is widespread, and hardware solutions and cameras are available for nearly every application. One important challenge of image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article characterises the development of innovative software that combines features for an automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.
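The pattern this record describes, concatenating several feature families per surface patch and feeding the combined vector to an SVM, can be sketched as follows. The feature names and the toy labels are purely illustrative assumptions; the record does not specify its actual features or kernel.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical stand-ins for two feature families computed per surface
# patch, e.g. geometric defect descriptors and texture statistics.
n = 200
geometry = rng.normal(size=(n, 4))
texture = rng.normal(size=(n, 6))
labels = (geometry[:, 0] + texture[:, 0] > 0).astype(int)  # toy defect classes

# "Combining features" here means concatenating the per-patch vectors
# before training a single classifier.
X = np.hstack([geometry, texture])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
```

Scaling before the SVM matters when the combined families live on different numeric ranges, which is typically the case when geometric and texture features are mixed.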

  17. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies: weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
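The weighted-sum fusion strategy mentioned in this record can be sketched generically: each base classifier produces a similarity score per (sample, class), the scores are normalised to a common range, and class-level weights are applied before summing. The normalisation choice (min-max) and weight layout are assumptions of the sketch, not details from the paper.

```python
import numpy as np

def weighted_sum_fusion(scores, weights):
    """Fuse per-classifier similarity scores into one prediction.

    scores  : array (n_classifiers, n_samples, n_classes), higher = better
    weights : array (n_classifiers, n_classes), per-class weight for each
              classifier (class-level weighting, as in the ensemble above)
    """
    fused = np.zeros(scores.shape[1:])
    for s, w in zip(scores, weights):
        lo, hi = s.min(), s.max()
        s_norm = (s - lo) / (hi - lo + 1e-12)  # put metrics on one scale
        fused += s_norm * w                     # broadcast class weights
    return fused.argmax(axis=1)
```

Min-max normalisation per classifier is one simple way to make distance metrics on different scales comparable before they are summed.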

  18. Online 3D Ear Recognition by Combining Global and Local Features.

    Science.gov (United States)

    Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David

    2016-01-01

    The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.

  19. Online 3D Ear Recognition by Combining Global and Local Features.

    Directory of Open Access Journals (Sweden)

    Yahui Liu

Full Text Available The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.

  20. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral datasets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
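The patch-reorganisation step described above can be sketched as follows: take a local window from the first d principal-component bands, flatten it to one vector per pixel, and sort the pixels so that the final vector does not depend on the patch's orientation. Sorting by the first-PC value is one plausible reading of the "sorting scheme"; the paper's exact scheme may differ.

```python
import numpy as np

def spatial_spectral_feature(pc_cube, row, col, patch=3):
    """Reorganise a local patch of the first d principal-component bands
    around (row, col) into one vector, sorted for rotation invariance.

    pc_cube : array (H, W, d) holding the first d PCs of the image.
    """
    d = pc_cube.shape[2]
    r = patch // 2
    window = pc_cube[row - r:row + r + 1, col - r:col + r + 1, :]
    vec = window.reshape(-1, d)
    # Any rotation of the patch yields the same multiset of pixel vectors,
    # so sorting the pixels (here by first-PC value) yields the same result.
    order = np.argsort(vec[:, 0], kind="stable")
    return vec[order].ravel()
```

The invariance follows because rotating the window only permutes its pixels, and sorting removes the permutation.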

  1. Discriminative region extraction and feature selection based on the combination of SURF and saliency

    Science.gov (United States)

    Deng, Li; Wang, Chunhong; Rao, Changhui

    2011-08-01

The objective of this paper is to propose a possible optimization of the salient region algorithm, which is extensively used in recognizing and learning object categories. The salient region algorithm offers intra-class tolerance, a global score of features, and automatic selection of prominent scales within a certain range. However, its major limitation is performance, and that is what we attempt to improve. The algorithm can be accelerated by reducing the number of pixels involved in the saliency calculation. We use interest points detected by fast-Hessian, the detector of SURF, as the candidate features for the saliency operation, rather than the whole set of pixels in the image. This implementation is therefore called Saliency based Optimization over SURF (SOSU for short). Experiments show that bringing in such a fast detector significantly speeds up the algorithm, while robustness to intra-class diversity preserves object recognition accuracy.

  2. A robust response to combination immune checkpoint inhibitor therapy in HPV-related small cell cancer: a case report.

    Science.gov (United States)

    Ho, Won Jin; Rooper, Lisa; Sagorsky, Sarah; Kang, Hyunseok

    2018-05-09

Human papillomavirus-related small cell carcinoma of the head and neck is an extremely rare, aggressive subtype with poor outcomes. Therapeutic options are limited and are largely adopted from small cell lung cancer treatment paradigms. This report describes a 69-year-old male who was diagnosed with HPV-related oropharyngeal cancer with mixed small cell and squamous cell pathology, which was clinically aggressive and progressed through multimodal platinum-based therapies. Upon manifestation of worsening metastatic disease, the patient was initiated on a combination of ipilimumab and nivolumab. Within 2 months of starting immunotherapy, a robust partial response was observed. During the treatment course, the patient developed immune-related adverse effects including new-onset diabetes mellitus, colitis, and hypothyroidism. The disease-specific survival was 26 months. Combination immunotherapy may be an attractive option for HPV-related small cell head and neck cancers resistant to other treatment modalities and thus warrants further evaluation.

  3. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

Full Text Available We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method compared to its competitors, especially in the presence of class label noise and imbalanced classes.
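The genetic-algorithm weight search described above can be sketched generically: evolve a population of weight vectors, scoring each by the ensemble accuracy it produces. The operator choices below (truncation selection, uniform crossover, Gaussian mutation) are assumptions of the sketch, not the paper's exact configuration.

```python
import numpy as np

def ga_search_weights(fitness, dim, pop_size=30, gens=40, seed=0):
    """Minimal genetic algorithm over weight vectors in [0, 1]^dim.

    fitness : callable mapping a weight vector to a score to maximise
              (e.g. ensemble accuracy on a validation set).
    """
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(fit)[-pop_size // 2:]]     # keep best half
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, dim)) < 0.5          # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(scale=0.05, size=pop.shape)     # Gaussian mutation
        pop = np.clip(pop, 0.0, 1.0)
    fit = np.array([fitness(w) for w in pop])
    return pop[np.argmax(fit)]
```

In the framework above, `fitness` would wrap the full ensemble: apply the candidate class-level weights to each classifier's outputs and return the resulting accuracy.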

  4. Combining fine texture and coarse color features for color texture classification

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-11-01

    Color texture classification plays an important role in computer vision applications because texture and color are two fundamental visual features. To classify the color texture via extracting discriminative color texture features in real time, we present an approach of combining the fine texture and coarse color features for color texture classification. First, the input image is transformed from RGB to HSV color space to separate texture and color information. Second, the scale-selective completed local binary count (CLBC) algorithm is introduced to extract the fine texture feature from the V component in HSV color space. Third, both H and S components are quantized at an optimal coarse level. Furthermore, the joint histogram of H and S components is calculated, which is considered as the coarse color feature. Finally, the fine texture and coarse color features are combined as the final descriptor and the nearest subspace classifier is used for classification. Experimental results on CUReT, KTH-TIPS, and New-BarkTex databases demonstrate that the proposed method achieves state-of-the-art classification performance. Moreover, the proposed method is fast enough for real-time applications.

  5. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily lack of awareness and proper health care. As prevention is better than cure, a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help in saving their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that cancer cells can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set technique, to see which will better suit the automated system in terms of a higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images and the data extracted are subjected to data mining techniques to determine which feature is best for which stage of cancer. The results thus obtained clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  6. Combining Semantic and Acoustic Features for Valence and Arousal Recognition in Speech

    DEFF Research Database (Denmark)

    Karadogan, Seliz; Larsen, Jan

    2012-01-01

    The recognition of affect in speech has attracted a lot of interest recently; especially in the area of cognitive and computer sciences. Most of the previous studies focused on the recognition of basic emotions (such as happiness, sadness and anger) using categorical approach. Recently, the focus...... has been shifting towards dimensional affect recognition based on the idea that emotional states are not independent from one another but related in a systematic manner. In this paper, we design a continuous dimensional speech affect recognition model that combines acoustic and semantic features. We...... show that combining semantic and acoustic information for dimensional speech recognition improves the results. Moreover, we show that valence is better estimated using semantic features while arousal is better estimated using acoustic features....

  7. Robust Neutrino Constraints by Combining Low Redshift Observations with the CMB

    CERN Document Server

    Reid, Beth A; Jimenez, Raul; Mena, Olga

    2010-01-01

    We illustrate how recently improved low-redshift cosmological measurements can tighten constraints on neutrino properties. In particular we examine the impact of the assumed cosmological model on the constraints. We first consider the new HST H0 = 74.2 +/- 3.6 measurement by Riess et al. (2009) and the sigma8*(Omegam/0.25)^0.41 = 0.832 +/- 0.033 constraint from Rozo et al. (2009) derived from the SDSS maxBCG Cluster Catalog. In a Lambda CDM model and when combined with WMAP5 constraints, these low-redshift measurements constrain sum mnu<0.4 eV at the 95% confidence level. This bound does not relax when allowing for the running of the spectral index or for primordial tensor perturbations. When adding also Supernovae and BAO constraints, we obtain a 95% upper limit of sum mnu<0.3 eV. We test the sensitivity of the neutrino mass constraint to the assumed expansion history by both allowing a dark energy equation of state parameter w to vary, and by studying a model with coupling between dark energy and dark...

  8. Arrhythmia recognition and classification using combined linear and nonlinear features of ECG signals.

    Science.gov (United States)

    Elhaj, Fatin A; Salim, Naomie; Harris, Arief R; Swee, Tan Tian; Ahmed, Taqwa

    2016-04-01

Arrhythmia is a cardiac condition caused by abnormal electrical activity of the heart, and an electrocardiogram (ECG) is the non-invasive method used to detect arrhythmias or heart abnormalities. Due to the presence of noise, the non-stationary nature of the ECG signal (i.e. the changing morphology of the ECG signal with respect to time) and the irregularity of the heartbeat, physicians face difficulties in the diagnosis of arrhythmias. The computer-aided analysis of ECG results assists physicians in detecting cardiovascular diseases. The development of many existing arrhythmia systems has depended on the findings from linear experiments on ECG data which achieve high performance on noise-free data. However, nonlinear experiments characterize the ECG signal more effectively, extract hidden information in the ECG signal, and achieve good performance under noisy conditions. This paper investigates the representation ability of linear and nonlinear features and proposes a combination of such features in order to improve the classification of ECG data. In this study, five types of beat classes of arrhythmia as recommended by the Association for Advancement of Medical Instrumentation are analyzed: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable and paced beats (U). The characterization ability of nonlinear features such as high order statistics and cumulants and nonlinear feature reduction methods such as independent component analysis is combined with linear features, namely, the principal component analysis of discrete wavelet transform coefficients. The features are tested for their ability to differentiate different classes of data using different classifiers, namely, the support vector machine and neural network methods with tenfold cross-validation. Our proposed method is able to classify the N, S, V, F and U arrhythmia classes with high accuracy (98.91%) using a combined support
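The linear-plus-nonlinear combination described above can be sketched per heartbeat: a PCA-reduced wavelet representation (the linear part) concatenated with higher-order statistics (a stand-in for the nonlinear part). The one-level Haar transform and the plain 2nd-4th moments below are simplifications; the paper uses a full DWT decomposition and cumulants.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def hos_features(x):
    """Nonlinear part (simplified): variance, skewness, excess kurtosis."""
    c = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(c ** 2)
    skew = np.mean(c ** 3) / var ** 1.5
    kurt = np.mean(c ** 4) / var ** 2 - 3.0
    return np.array([var, skew, kurt])

def combined_beat_features(beats, n_components=4):
    """Concatenate PCA-reduced wavelet coefficients (linear part) with
    higher-order statistics (nonlinear part) for each beat segment."""
    approx = np.array([haar_dwt(b)[0] for b in beats])
    approx -= approx.mean(axis=0)
    # PCA via SVD of the centred coefficient matrix.
    _, _, vt = np.linalg.svd(approx, full_matrices=False)
    linear = approx @ vt[:n_components].T
    nonlinear = np.array([hos_features(b) for b in beats])
    return np.hstack([linear, nonlinear])
```

The resulting per-beat vectors would then be fed to an SVM or neural network with cross-validation, as in the record.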

  9. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    DEFF Research Database (Denmark)

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin

    2016-01-01

Robotic systems inspired by humans have always sparked the curiosity of engineers and scientists. Of many challenges, human locomotion is a very difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate...... the interaction of these systems, implementations with reflex-based or central pattern generator (CPG)-based controllers have been tested on bipedal robot systems. In this paper we combine the two controller types into a controller that works with both reflex and CPG signals. We use a reflex-based neural...... network to generate basic walking patterns of a dynamic bipedal walking robot (DACBOT) and then a CPG-based neural network to ensure robust walking behavior

  10. Detecting Structural Features in Metallic Glass via Synchrotron Radiation Experiments Combined with Simulations

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

Full Text Available Revealing the essential structural features of metallic glasses (MGs) will enhance the understanding of glass-forming mechanisms. In this work, a feasible scheme is provided where we performed the state-of-the-art synchrotron-radiation based experiments combined with simulations to investigate the microstructures of ZrCu amorphous compositions. It is revealed that in order to stabilize the amorphous state and optimize the topological and chemical distribution, besides the icosahedral or icosahedral-like clusters, other types of clusters also participate in the formation of the microstructure in MGs. This cluster-level co-existing feature may be popular in this class of glassy materials.

  11. Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information

    NARCIS (Netherlands)

    Suh, Hyun K.; Hofstee, Jan Willem; IJsselmuiden, Joris; Henten, van Eldert J.

    2018-01-01

    One of the most important steps in vision-based weed detection systems is the classification of weeds growing amongst crops. In the EU SmartBot project it was required to effectively control more than 95% of volunteer potatoes and ensure less than 5% of damage of sugar beet. Classification features
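The Bag-of-Visual-Words model named in this record can be sketched generically: cluster local descriptors (SIFT or SURF vectors) into a codebook of visual words, then represent each image as a normalised histogram of word counts. The codebook size and use of k-means are standard BoVW choices assumed here, not details taken from the record.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, k=16, seed=0):
    """Cluster all local descriptors (e.g. SIFT/SURF vectors from the
    training images) into k visual words; centres form the codebook."""
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=5, random_state=seed).fit(stacked)

def bovw_histogram(kmeans, descriptors):
    """Represent one image as a normalised visual-word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()
```

The per-image histograms (optionally augmented with crop-row position features, as the record suggests) would then feed a conventional classifier.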

  12. Phylogeny Inference of Closely Related Bacterial Genomes: Combining the Features of Both Overlapping Genes and Collinear Genomic Regions

    Science.gov (United States)

    Zhang, Yan-Cong; Lin, Kui

    2015-01-01

    Overlapping genes (OGs) represent one type of widespread genomic feature in bacterial genomes and have been used as rare genomic markers in phylogeny inference of closely related bacterial species. However, the inference may experience a decrease in performance for phylogenomic analysis of too closely or too distantly related genomes. Another drawback of OGs as phylogenetic markers is that they usually take little account of the effects of genomic rearrangement on the similarity estimation, such as intra-chromosome/genome translocations, horizontal gene transfer, and gene losses. To explore such effects on the accuracy of phylogeny reconstruction, we combine phylogenetic signals of OGs with collinear genomic regions, here called locally collinear blocks (LCBs). By putting these together, we refine our previous metric of pairwise similarity between two closely related bacterial genomes. As a case study, we used this new method to reconstruct the phylogenies of 88 Enterobacteriale genomes of the class Gammaproteobacteria. Our results demonstrated that the topological accuracy of the inferred phylogeny was improved when both OGs and LCBs were simultaneously considered, suggesting that combining these two phylogenetic markers may reduce, to some extent, the influence of gene loss on phylogeny inference. Such phylogenomic studies, we believe, will help us to explore a more effective approach to increasing the robustness of phylogeny reconstruction of closely related bacterial organisms. PMID:26715828

13. BLINCK: a diagnostic algorithm for skin cancer diagnosis combining clinical features with dermatoscopy findings

    OpenAIRE

    Bourne, Peter; Rosendahl, Cliff; Keir, Jeff; Cameron, Alan

    2012-01-01

    Background: Deciding whether a skin lesion requires biopsy to exclude skin cancer is often challenging for primary care clinicians in Australia. There are several published algorithms designed to assist with the diagnosis of skin cancer but apart from the clinical ABCD rule, these algorithms only evaluate the dermatoscopic features of a lesion. Objectives: The BLINCK algorithm explores the effect of combining clinical history and examination with fundamental dermatoscopic assessment in primar...

  14. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

Full Text Available Abstract Background The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  15. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination.

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). We proposed a novel feature combination comprising movement-related potentials such as the readiness potential and event-related desynchronization features, in addition to the event-related potential (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior to that obtained using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  16. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

Objective. We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. Approach. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Main results. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). Significance. We proposed a novel feature combination comprising movement-related potentials such as the readiness potential and event-related desynchronization features, in addition to the event-related potential (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior to that obtained using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.
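The feature combination named in the two records above (ERP time-domain features, event-related desynchronisation, and the readiness potential) can be sketched for a single-channel epoch. The concrete extractors below (binned amplitudes, mu-band log power, linear-drift slope) are simplified stand-ins for the study's actual features.

```python
import numpy as np

def erp_features(epoch, n_points=10):
    """ERP part: approximate the slow event-related waveform by
    averaging the epoch into n_points temporal bins."""
    return np.array([b.mean() for b in np.array_split(epoch, n_points)])

def erd_features(epoch, fs, band=(8.0, 12.0)):
    """ERD part: log band power (here the mu band); its decrease indexes
    event-related desynchronisation during movement preparation."""
    spec = np.abs(np.fft.rfft(epoch - epoch.mean())) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.array([np.log(spec[mask].mean() + 1e-12)])

def readiness_potential_feature(epoch, fs):
    """Readiness-potential part: slope of a straight-line fit, capturing
    the slow drift preceding movement."""
    t = np.arange(len(epoch)) / fs
    return np.array([np.polyfit(t, epoch, 1)[0]])

def combined_eeg_features(epoch, fs):
    """Concatenate the three feature families into one vector."""
    return np.concatenate([erp_features(epoch),
                           erd_features(epoch, fs),
                           readiness_potential_feature(epoch, fs)])
```

In a multi-channel setting these would be computed per channel and concatenated before classification.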

  17. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    Science.gov (United States)

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

Motivation: Gene-expression data obtained from high throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before being formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems such as the cell cycle, circadian clock, etc., the choice of normalization method may substantially impact the determination of a gene to be rhythmic. Thus, the rhythmicity of a gene can be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate the proposed methodology using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. Thus it suggests that the proposed measure is robust to the choice of a normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset.
Availability: A user friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.
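
    The paper's rhythmicity measure is built on order-restricted inference, which is beyond a short example; as a rough sketch of the general idea only (a cosinor fit scored by R² and a permutation-style bootstrap null, not the authors' actual method), one might write:

```python
import numpy as np

def cosinor_r2(expr, t, period=24.0):
    """R^2 of a least-squares cosinor (cosine + sine) fit -- a simple rhythmicity score."""
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    resid = expr - X @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((expr - expr.mean()) ** 2)

def bootstrap_pvalue(expr, t, n_boot=500, seed=0):
    """Null distribution built by shuffling time labels, which destroys any rhythm."""
    rng = np.random.default_rng(seed)
    observed = cosinor_r2(expr, t)
    null = [cosinor_r2(rng.permutation(expr), t) for _ in range(n_boot)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_boot)

t = np.arange(0, 48, 2.0)                      # sampling times in hours
rhythmic = 5 + 2 * np.cos(2 * np.pi * t / 24)  # clean 24-h oscillation
rng = np.random.default_rng(1)
arrhythmic = 5 + rng.normal(0, 1, t.size)      # no underlying rhythm
```

    A gene whose score survives the bootstrap under several normalizations of the same data would be called robustly rhythmic in the spirit of the paper.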

  18. Finding Combination of Features from Promoter Regions for Ovarian Cancer-related Gene Group Classification

    KAUST Repository

    Olayan, Rawan S.

    2012-12-01

    In classification problems it is important to use a suitable combination of features for the classifiers; the right combination of features usually yields good classifiers. When the problem is not well understood, data items are usually described by many features in the hope that some of these are relevant. In this study, we focus on one such problem related to genes implicated in ovarian cancer (OC). We try to recognize two important OC-related gene groups: oncogenes, which support the development and progression of OC, and oncosuppressors, which oppose such tendencies. For this, we use properties of the promoters of these genes. We identified potential “regulatory features” that characterize the promoters of OC-related oncogenes and oncosuppressors. Our study used 211 oncogenes and 39 oncosuppressors, for which we identified 538 characteristic sequence motifs in their promoters. Promoters were annotated with these motifs, and the derived feature vectors were used to develop classification models. We compared a number of classification models in their ability to distinguish oncogenes from oncosuppressors. Based on 10-fold cross-validation, the resulting model separated the two classes with a sensitivity of 96% and a specificity of 100% using the complete set of features. Moreover, we developed another recognition model that attempts to distinguish oncogenes and oncosuppressors, as one group, from other OC-related genes; that model achieved an accuracy of 82%. We believe that the results of this study will help in discovering other OC-related oncogenes and oncosuppressors not yet identified.
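
    The workflow described above (motif presence/absence vectors plus cross-validated classification) can be illustrated with a toy sketch. The motif count, enrichment pattern and nearest-centroid classifier below are stand-ins for illustration, not the study's actual 538 motifs or models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 250 promoters x 60 binary motif-presence features
# (the study used 211 oncogenes + 39 oncosuppressors and 538 motifs).
n_onco, n_supp, n_motifs = 211, 39, 60
X_onco = (rng.random((n_onco, n_motifs)) < 0.25).astype(float)
X_supp = (rng.random((n_supp, n_motifs)) < 0.25).astype(float)
X_supp[:, :10] = 1.0          # toy signal: 10 motifs enriched in oncosuppressors
X = np.vstack([X_onco, X_supp])
y = np.array([0] * n_onco + [1] * n_supp)

def nearest_centroid_cv(X, y, k=10, seed=0):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)   # oncogene centroid
        c1 = X[train][y[train] == 1].mean(axis=0)   # oncosuppressor centroid
        for i in fold:
            pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
            correct += (pred == y[i])
    return correct / len(y)

acc = nearest_centroid_cv(X, y)
```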

  20. On-Line Fault Detection in Wind Turbine Transmission System using Adaptive Filter and Robust Statistical Features

    Directory of Open Access Journals (Sweden)

    Mark Frogley

    2013-01-01

    Full Text Available To reduce maintenance cost, avoid catastrophic failure, and improve wind-turbine transmission system reliability, an online condition monitoring system is critically important. In real applications, many rotating-machinery faults, such as bearing surface defects, gear tooth cracks, and chipped gear teeth, generate impulsive signals. When these types of faults develop inside rotating machinery, an impact force is generated each time the rotating components pass over the damage point. The impact force causes a ringing of the support structure at its natural frequency. By effectively detecting those periodic impulse signals, this group of rotating-machine faults can be detected and diagnosed. However, in real wind-turbine operations, impulsive fault signals are usually weak relative to the background noise and to the vibration signals generated by other healthy components, such as shafts, blades, and gears. Moreover, wind-turbine transmission systems work under dynamic operating conditions, which further increases the difficulty of fault detection and diagnostics. Advanced signal-processing methods that enhance the impulsive signals are therefore in great need. In this paper, an adaptive filtering technique is applied to enhance the signal-to-noise ratio of fault impulses in wind-turbine gear transmission systems. Multiple statistical features designed to quantify the impulsiveness of the processed signal are extracted for bearing fault detection, and these multi-dimensional features are then compressed into a one-dimensional feature. A minimum-error-rate classifier is designed on the compressed feature to identify gear transmission systems with defects. Real wind-turbine vibration signals are used to demonstrate the effectiveness of the presented methodology.
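
    The abstract does not list the statistical features used; as an illustration of the kind of impulse-sensitive statistics commonly extracted in bearing fault detection (kurtosis, crest factor, impulse factor -- assumptions, not the paper's exact set), one can sketch:

```python
import numpy as np

def impulse_features(x):
    """Statistical features commonly used to quantify impulsiveness in vibration signals."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    return {"rms": rms,
            "kurtosis": kurtosis,                    # ~3 for Gaussian noise, grows with impacts
            "crest_factor": peak / rms,
            "impulse_factor": peak / np.mean(np.abs(x))}

fs = 10_000                          # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
healthy = rng.normal(0, 1.0, t.size)  # broadband background only
faulty = healthy.copy()
faulty[::500] += 8.0                  # periodic impacts, e.g. a bearing defect every 50 ms

f_healthy = impulse_features(healthy)
f_faulty = impulse_features(faulty)
```

    A simple threshold or minimum-error-rate classifier on such features separates the two conditions because the impacts inflate kurtosis and crest factor far above their Gaussian baselines.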

  1. Features of the magnetic field of a rectangular combined function bending magnet

    International Nuclear Information System (INIS)

    Hwang, C.S.; National Chiao Tung Univ., Hsinchu; Chang, C.H.; Hwang, G.J.; Uen, T.M.; Tseng, P.K.; National Taiwan Univ., Taipei

    1996-01-01

    Magnetic field features of the combined-function bending magnet, with dipole and quadrupole field components, are essential for successful control of the electron beam trajectory; these fields also dominate the photon beam quality. The vertical magnetic field B_y(x, y) at the magnet center (s = 0) was calculated with the computer code MAGNET, and the results were compared with 2-D field measurements from a Hall-probe mapping system. A detailed survey was also made of the harmonic field strengths and of the main features of the fundamental integrated strength, effective length, magnetic symmetry, tilt of the pole face, offset of the field center, and the fringe field. End shims that compensate for the strong negative sextupole field at the magnet ends, thereby increasing the good-field region for the entire integrated strength, are discussed. An important physical feature of this combined-function bending magnet is the constant ratio of dipole to quadrupole strength, ∫B ds / ∫G ds, which is expressed as a function of excitation current in the energy range 0.6 to 1.5 GeV.

  2. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    Science.gov (United States)

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of time- and frequency-varying components, and music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions a piece of music will induce in a given individual. We attempt to predict the music-induced emotional response of a listener by measuring activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants, with a correlation between the actual and predicted responses of up to r = 0.234. This suggests that listeners' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracy than either feature type alone (p<0.01). Copyright © 2015 Elsevier Inc. All rights reserved.

  3. A systematic methodology for the robust quantification of energy efficiency at wastewater treatment plants featuring Data Envelopment Analysis.

    Science.gov (United States)

    Longo, S; Hospido, A; Lema, J M; Mauricio-Iglesias, M

    2018-05-10

    This article examines the potential benefits of using Data Envelopment Analysis (DEA) for conducting energy-efficiency assessments of wastewater treatment plants (WWTPs). WWTPs are characteristically heterogeneous (in size, technology, climate, function …), which limits the straightforward application of DEA. This paper proposes and describes the Robust Energy Efficiency DEA (REED) in its various stages, a systematic state-of-the-art methodology aimed at including exogenous variables in nonparametric frontier models and especially designed for WWTP operation. In particular, the methodology systematizes the modelling process by presenting an integrated framework for selecting the correct variables and appropriate models and, where appropriate, tackling the effect of exogenous factors. As a result, the application of REED improves the quality of the efficiency estimates and hence the significance of benchmarking. For the reader's convenience, the article is presented as a step-by-step guide to determining WWTP energy efficiency from beginning to end. The application and benefits of the developed methodology are demonstrated by a case study comparing the energy efficiency of a set of 399 WWTPs operating in different countries and under heterogeneous environmental conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
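
    REED layers variable selection and exogenous-factor handling on top of standard DEA; the core efficiency score itself comes from a linear program. A minimal input-oriented CCR DEA sketch (plain DEA, not REED) using scipy might look like:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiencies.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    For each unit o: min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                                      sum_j lam_j y_j >= y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    effs = []
    for o in range(n):
        c = np.zeros(n + 1)
        c[0] = 1.0                                    # minimize theta
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])  # inputs:  X^T lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # outputs: -Y^T lam <= -y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                      bounds=[(None, None)] + [(0, None)] * n)
        effs.append(res.x[0])
    return np.array(effs)

# Toy example: 3 plants, one input (energy used), one output (load treated).
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[1.0], [2.0], [2.0]])
eff = dea_ccr_input(X, Y)   # plants 0 and 1 lie on the frontier; plant 2 does not
```

    A real WWTP study would use several inputs (energy, cost) and outputs (flow, pollutant removal), which is exactly where REED's variable-selection stage comes in.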

  4. Features of use of combinational circuits play in attack during volleyball matches

    Directory of Open Access Journals (Sweden)

    B. O. Artemenko

    2013-12-01

    Full Text Available Purpose: to characterize the tactical schemes of attacking play and the frequency of power and float (tactical) serves among volleyball players of different skill levels. Material and methods: video recordings were analyzed of volleyball matches involving the strongest national teams of the world (Brazil, Russia, USA, Cuba, Italy) and the leading club teams of the Russian and Ukrainian championships, with attention to the features of tactical formation of play. Results: it was established that the setters of the world's strongest national and club teams use a greater variety of tactical combinations when organizing the attack, employing the combinations Rise, Pipe and Zone almost equally often (13.4%, 12.6% and 11.5%, respectively). Ukrainian setters prefer the combination Rise (23%) over Pipe (2%) and Zone (2.5%). The distribution of float versus power serves was 59.7% versus 40.3% for the Ukrainian teams, compared with 27.5% versus 72.5% for the world's strongest national teams. Conclusions: the features of tactical attack formation suggest better technical and tactical preparation of the setters of the world's strongest teams, and of those teams as a whole. The leading world teams also use the power serve much more often, which indicates the power-oriented character of modern world volleyball.

  5. Internal respiratory surrogate in multislice 4D CT using a combination of Fourier transform and anatomical features

    International Nuclear Information System (INIS)

    Hui, Cheukkai; Suh, Yelin; Robertson, Daniel; Beddar, Sam; Pan, Tinsu; Das, Prajnan; Crane, Christopher H.

    2015-01-01

    Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and from selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns, and the group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system for 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors’ proposed algorithm matched the RPM signal based on their normalized cross-correlation. For these data sets with matching respiratory signals, the average difference between the end-inspiration times (Δt_ins) in the IRS and the RPM signal was 0.11 s, and only 2.1% of the Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the non-matching couch positions, and 35.4% of the Δt_ins were greater than 0.5 s. At couch positions in which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implies that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors’ proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data sets.
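
    The clustering step can be illustrated with a toy version: group candidate respiratory traces by pairwise normalized correlation and average the largest mutually consistent group. The threshold and the greedy grouping below are simplifying assumptions, not the paper's exact clustering algorithm:

```python
import numpy as np

def largest_coherent_group(signals, threshold=0.8):
    """Group candidate respiratory traces by pairwise normalized correlation
    and average the largest mutually consistent group into one surrogate."""
    S = np.array([(s - s.mean()) / s.std() for s in signals])
    C = (S @ S.T) / S.shape[1]      # pairwise normalized cross-correlation
    groups = []
    for i in range(len(S)):         # greedy: join the first group whose members all match
        for g in groups:
            if all(C[i, j] >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    best = max(groups, key=len)
    return best, S[best].mean(axis=0)

t = np.linspace(0, 20, 400)
rng = np.random.default_rng(3)
breathing = np.sin(2 * np.pi * t / 4)     # ~4 s breathing cycle
candidates = [breathing + 0.2 * rng.normal(size=t.size) for _ in range(5)]
candidates.append(rng.normal(size=t.size))  # one spurious (non-respiratory) trace
members, irs = largest_coherent_group(candidates)
```

    The spurious trace is left out of the dominant group, so the averaged surrogate tracks the true breathing pattern.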

  6. Internal respiratory surrogate in multislice 4D CT using a combination of Fourier transform and anatomical features

    Energy Technology Data Exchange (ETDEWEB)

    Hui, Cheukkai; Suh, Yelin [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Robertson, Daniel; Beddar, Sam, E-mail: abeddar@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and Department of Radiation Physics, The University of Texas Graduate School of Biomedical Sciences, Houston, Texas 77030 (United States); Pan, Tinsu [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and Department of Imaging Physics, The University of Texas Graduate School of Biomedical Sciences, Houston, Texas 77030 (United States); Das, Prajnan; Crane, Christopher H. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2015-07-15

    Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and from selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns, and the group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system for 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors’ proposed algorithm matched the RPM signal based on their normalized cross-correlation. For these data sets with matching respiratory signals, the average difference between the end-inspiration times (Δt_ins) in the IRS and the RPM signal was 0.11 s, and only 2.1% of the Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the non-matching couch positions, and 35.4% of the Δt_ins were greater than 0.5 s. At couch positions in which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implies that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors’ proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data sets.

  7. Average combination difference morphological filters for fault feature extraction of bearing

    Science.gov (United States)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

    To extract impulse components from vibration signals containing heavy noise and harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design enables ACDIF to extract both the positive and negative impulses present in vibration signals, enhancing the accuracy of bearing fault diagnosis. The length of the structuring element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and real bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
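
    A minimal sketch of one morphological difference operator, not the paper's exact ACDIF: closing minus opening with a flat structuring element. Closing fills negative impulses while opening removes positive ones, so their difference responds to impulses of either sign, which is the property ACDIF exploits:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def combination_difference(x, se_length):
    """Closing-minus-opening with a flat SE of the given length: a simple
    difference operator that highlights both positive and negative impulses."""
    return grey_closing(x, size=se_length) - grey_opening(x, size=se_length)

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)
signal = 0.5 * np.sin(2 * np.pi * 30 * t) + 0.1 * rng.normal(size=t.size)  # harmonic + noise
signal[::200] += 3.0            # periodic positive impacts
signal[100::200] -= 3.0         # periodic negative impacts
response = combination_difference(signal, se_length=5)
```

    In the paper, the SE length is not fixed by hand as here but chosen adaptively by the Teager energy kurtosis indicator.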

  8. Memory-based detection of rare sound feature combinations in anesthetized rats.

    Science.gov (United States)

    Astikainen, Piia; Ruusuvirta, Timo; Wikgren, Jan; Penttonen, Markku

    2006-10-02

    It is unclear whether the ability of the brain to discriminate rare from frequently repeated combinations of sound features is limited to the normal sleep/wake cycle. We recorded epidural auditory event-related potentials in urethane-anesthetized rats presented with rare tones ('deviants') interspersed with frequently repeated ones ('standards'). Deviants differed from standards either in frequency alone or in frequency combined with intensity. In both cases, deviants elicited event-related potentials exceeding those to standards in amplitude between 76 and 108 ms from stimulus onset, suggesting that the underlying integrative, memory-based change-detection mechanisms of the brain are independent of the normal sleep/wake cycle. The relations of these event-related potentials to mismatch negativity and N1 in humans are addressed.

  9. Feature combination analysis in smart grid based using SOM for Sudan national grid

    Science.gov (United States)

    Bohari, Z. H.; Yusof, M. A. M.; Jali, M. H.; Sulaima, M. F.; Nasir, M. N. M.

    2015-12-01

    In the investigation of power grid security, cascading failure under multi-contingency situations has been a challenge because of its topological complexity and computational expense, and both system analysis and load-ranking methods have their limits. In this project, based on Self-Organizing Maps (SOM), an integrated methodology combining spatial-feature (distance)-based clustering with electrical attributes (load) is used to evaluate the vulnerability and cascading effect of various component sets in the power grid. Using the clustering result from SOM, sets of heavily loaded initial victims are selected to form attack schemes, and the subsequent cascading effect of their failures is assessed; this SOM-based approach identifies vulnerable sets of substations more effectively than conventional load ranking and other clustering strategies. The robustness of power grids is a central topic in the design of the so-called "smart grid". In this paper, we analyze measures of the importance of the nodes in a power grid under cascading failure. With these efforts, we can distinguish the most vulnerable nodes and protect them, improving the safety of the power grid, and we can also assess whether a given structure is suitable for power grids.

  10. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid-body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on matching certain geometric structures, such as plane segments, the outliers and noise in the point cloud are largely eliminated, making the proposed scan-matching algorithm more robust than conventional methods. In addition, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As the scan-matching framework, an improved version of the normal distributions transform (NDT) is used: the probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, while the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature-extraction part of the algorithm can be considered a special sampling strategy for scan matching; it is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.
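
    The reference-scan side of NDT can be sketched as follows: partition the points into grid cells and fit a Gaussian (mean, covariance) per occupied cell. This is a 2D toy version for brevity (the paper works with 3D scans, and its contribution is on the input-scan side, not reproduced here):

```python
import numpy as np

def ndt_cells(points, cell_size=1.0, min_pts=5):
    """Build the NDT representation of a 2D reference scan:
    one Gaussian PDF per sufficiently populated grid cell."""
    keys = np.floor(points / cell_size).astype(int)
    cells = {}
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    pdfs = {}
    for k, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= min_pts:               # skip sparse cells
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularize near-singular cells
            pdfs[k] = (mu, cov)
    return pdfs

rng = np.random.default_rng(0)
scan = rng.normal([2.5, 2.5], 0.1, size=(200, 2))  # a point cluster inside cell (2, 2)
pdfs = ndt_cells(scan)
```

    Registration then optimizes the transform of the input scan so that its (feature-sampled) points score highly under these cell Gaussians.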

  11. Driver Fatigue Detection System Using Electroencephalography Signals Based on Combined Entropy Features

    Directory of Open Access Journals (Sweden)

    Zhendong Mu

    2017-02-01

    Full Text Available Driver fatigue has become one of the major causes of traffic accidents and is a complicated physiological process, yet there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear, so non-linear analysis methods, such as entropy, may be more appropriate. This study evaluates a combined entropy-based processing method for EEG data to detect driver fatigue. Twelve subjects were selected to take part in an experiment, undergoing driving training in a virtual environment under the instruction of the operator. Four types of entropy (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for driver fatigue detection. An electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that features extracted from electrodes T5, TP7, TP8 and FP1 may yield better performance, and that the SVM classification algorithm using a radial basis function as the kernel obtained better results. The combined entropy-based method demonstrates good classification performance for driver fatigue detection.
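
    One of the four features, sample entropy, can be sketched directly (a simplified textbook-style implementation, not the authors' code): it is the negative log of the conditional probability that subsequences matching for m points, within tolerance r, also match for m + 1 points. Regular signals yield low values; irregular signals yield high ones:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1D signal (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common tolerance choice
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        n = len(templates)
        return ((d <= r).sum() - n) / 2   # ordered pairs minus self-matches, halved
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)     # predictable oscillation (alert-like regularity)
rng = np.random.default_rng(5)
irregular = rng.normal(size=1000)        # white noise
```

    In the study, such entropy values from the selected electrodes form the feature vector passed to the RBF-kernel SVM.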

  12. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Full Text Available Because saliency can serve as prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other image-based applications. In the case of saliency detection in cluttered scenes, the salient object/region detected needs not only to be distinguished clearly from the background but, preferably, also to be informative in terms of complete contour and local texture details, to facilitate subsequent processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection in cluttered scenes. This model uses a combination of local texture patterns, color distribution and contour information to encode the superpixels, characterizing the local features of the image for region-contrast computation. Combining this region contrast with the global saliency probability, a full-resolution saliency map, in which the detected salient object/region adheres more closely to its inherent features, is obtained on the basis of the corresponding high-level saliency spatial distribution and pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets show that the proposed method improves detection performance in terms of the corresponding measurements.

  13. Filtering high-throughput protein-protein interaction data using a combination of genomic features

    Directory of Open Access Journals (Sweden)

    Patil Ashwini

    2005-04-01

    Full Text Available Abstract Background Protein-protein interaction data used in the creation or prediction of molecular networks are usually obtained from large-scale or high-throughput experiments. This experimental data is liable to contain a large number of spurious interactions, so there is a need to validate the interactions and filter out incorrect data before using them in prediction studies. Results In this study, we use a combination of 3 genomic features – structurally known interacting Pfam domains, Gene Ontology annotations and sequence homology – as a means to assign reliability to the protein-protein interactions in Saccharomyces cerevisiae determined by high-throughput experiments. Using Bayesian network approaches, we show that protein-protein interactions from high-throughput data supported by one or more genomic features have a higher likelihood ratio and hence are more likely to be real interactions. Our method has a high sensitivity (90%) and good specificity (63%). We show that 56% of the interactions from high-throughput experiments in Saccharomyces cerevisiae have high reliability, and we use the method to estimate the proportion of true interactions in the high-throughput protein-protein interaction data sets in Caenorhabditis elegans, Drosophila melanogaster and Homo sapiens to be 27%, 18% and 68%, respectively. Our results are available for searching and downloading at http://helix.protein.osaka-u.ac.jp/htp/. Conclusion A combination of genomic features that includes sequence, structure and annotation information is a good predictor of true interactions in large and noisy high-throughput data sets. The method has a very high sensitivity and good specificity and can be used to assign a likelihood ratio, corresponding to the reliability, to each interaction.
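
    The likelihood-ratio combination can be illustrated with a naive-Bayes-style sketch in the spirit of the paper's approach; the per-feature ratios and the cutoff below are made-up numbers for illustration, not values from the study:

```python
# LR(interaction) = P(features | true interaction) / P(features | false interaction)
#                 = product of per-feature likelihood ratios, assuming the three
#                   genomic features are conditionally independent.

def combined_lr(feature_lrs):
    """Multiply per-feature likelihood ratios (naive-Bayes independence assumption)."""
    lr = 1.0
    for v in feature_lrs:
        lr *= v
    return lr

# Hypothetical per-feature LRs observed for one candidate interaction:
#   shared interacting Pfam domains, common GO annotation, homologous pair
lrs = {"pfam_domains": 8.0, "go_annotation": 3.0, "homology": 2.5}
evidence = combined_lr(lrs.values())

# An interaction is kept if its combined LR clears a cutoff calibrated on a
# gold-standard reference set (the cutoff here is illustrative only).
is_reliable = evidence >= 10.0
```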

  14. SU-F-R-38: Impact of Smoothing and Noise On Robustness of CBCT Textural Features for Prediction of Response to Radiotherapy Treatment of Head and Neck Cancers

    Energy Technology Data Exchange (ETDEWEB)

    Bagher-Ebadian, H; Chetty, I; Liu, C; Movsas, B; Siddiqui, F [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To examine the impact of image smoothing and noise on the robustness of textural information extracted from CBCT images for prediction of radiotherapy response in patients with head-and-neck (H/N) cancers. Methods: CBCT image datasets for 14 patients with H/N cancer treated with radiation (70 Gy in 35 fractions) were investigated. A deformable registration algorithm was used to fuse planning CTs to CBCTs, and the tumor volume was automatically segmented on each CBCT image dataset. Local control at 1 year was used to classify 8 patients as responders (R) and 6 as non-responders (NR). A smoothing filter [2D adaptive Wiener (2DAW), with 3 different windows (ψ = 3, 5, and 7)] and two noise models (Poisson and Gaussian, SNR = 25) were implemented and independently applied to the CBCT images. Twenty-two textural features describing the spatial arrangement of voxel intensities, calculated from gray-level co-occurrence matrices, were extracted for all tumor volumes. Results: Relative to CBCT images without smoothing, none of the 22 extracted textural features showed significant differences when smoothing was applied with the 2DAW at ψ = 3 and 5, in either the responder or non-responder group. When smoothing with the 2DAW at ψ = 7 was applied, one textural feature, Information Measure of Correlation, was significantly different relative to no smoothing. Only 4 features (Energy, Entropy, Homogeneity, and Maximum Probability) were found to be statistically different between the R and NR groups (Table 1), and these features remained statistically significant discriminators between the R and NR groups in the presence of noise and smoothing. Conclusion: This preliminary work suggests that textural classifiers for response prediction, extracted from H/N CBCT images, are robust to low-power noise and low-pass filtering. While other types of filters will alter the spatial frequencies differently, these results are promising. The current study is subject to Type II errors.
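
    The four discriminative features named above are standard gray-level co-occurrence matrix (GLCM) statistics, sketched here for a single 2D slice and one horizontal offset (a simplification; the study computes them over 3D tumor volumes):

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Horizontal-neighbor GLCM and the four features the abstract found
    discriminative: energy, entropy, homogeneity, maximum probability."""
    img = np.asarray(img, dtype=float)
    span = img.max() - img.min()
    if span > 0:   # quantize intensities to `levels` gray levels
        q = np.minimum(((img - img.min()) / span * levels).astype(int), levels - 1)
    else:
        q = np.zeros(img.shape, dtype=int)
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # pixel
    b = q[dy:, dx:]                             # its neighbor at `offset`
    for i, j in zip(a.ravel(), b.ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                       # co-occurrence probabilities
    nz = p[p > 0]
    i_idx, j_idx = np.indices(p.shape)
    return {"energy": np.sum(p ** 2),
            "entropy": -np.sum(nz * np.log2(nz)),
            "homogeneity": np.sum(p / (1.0 + np.abs(i_idx - j_idx))),
            "max_probability": p.max()}

flat = np.full((16, 16), 7)                     # homogeneous ROI
rng = np.random.default_rng(2)
textured = rng.integers(0, 8, size=(16, 16))    # heterogeneous ROI
f_flat, f_tex = glcm_features(flat), glcm_features(textured)
```

    A uniform region gives energy 1 and entropy 0; a heterogeneous one spreads probability mass across the matrix, which is what makes these features candidate response discriminators.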

  15. Hardware-efficient robust biometric identification from 0.58 second template and 12 features of limb (Lead I) ECG signal using logistic regression classifier.

    Science.gov (United States)

    Sahadat, Md Nazmus; Jacobs, Eddie L; Morshed, Bashir I

    2014-01-01

    The electrocardiogram (ECG), widely known as a cardiac diagnostic signal, has recently been proposed for biometric identification of individuals; however, reliability and reproducibility are of research interest. In this paper, we propose a template-matching technique with 12 features using a logistic regression classifier that achieved high reliability and identification accuracy. Non-invasive ECG signals were captured using our custom-built ambulatory EEG/ECG embedded device (NeuroMonitor). ECG data were collected from 10 healthy subjects, aged 25-35 years, for 10 seconds per trial, with 10 trials per subject. From each trial, only 0.58 seconds of Lead I ECG data were used as the template. A hardware-efficient fiducial-point detection technique was implemented for feature extraction. For repeated random sub-sampling validation, the data were randomly separated into training and testing sets at a ratio of 80:20, and the test data were used to find the classification accuracy. ECG template data with the 12 extracted features provided the best performance in terms of accuracy (up to 100%) and processing complexity (computation time of 1.2 ms). This work shows that a single-limb (Lead I) ECG can robustly identify an individual quickly and reliably, with minimal contact and data processing, using the proposed algorithm.
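
    The classification stage can be sketched with a plain gradient-descent logistic regression on simulated feature vectors; the 12 "fiducial features" below are synthetic stand-ins for the paper's ECG measurements, and the one-vs-all setup is an assumption about how per-subject identification is posed:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain full-batch gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad = p - y                            # dLoss/dz for log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(9)
n_subj, n_feat = 10, 12
# Toy data: each "subject" has a distinct mean vector of 12 features,
# with small trial-to-trial noise; 10 trials per subject, as in the paper.
means = rng.normal(0, 2.0, size=(n_subj, n_feat))
X = np.vstack([m + 0.2 * rng.normal(size=(10, n_feat)) for m in means])
y = np.array([1] * 10 + [0] * 90)   # identify subject 0 vs all others

idx = rng.permutation(100)          # 80:20 split, as in the paper
tr, te = idx[:80], idx[80:]
w, b = train_logistic(X[tr], y[tr])
pred = (X[te] @ w + b > 0).astype(int)
accuracy = (pred == y[te]).mean()
```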

  16. A combination of molecular markers and clinical features improve the classification of pancreatic cysts.

    Science.gov (United States)

    Springer, Simeon; Wang, Yuxuan; Dal Molin, Marco; Masica, David L; Jiao, Yuchen; Kinde, Isaac; Blackford, Amanda; Raman, Siva P; Wolfgang, Christopher L; Tomita, Tyler; Niknafs, Noushin; Douville, Christopher; Ptak, Janine; Dobbyn, Lisa; Allen, Peter J; Klimstra, David S; Schattner, Mark A; Schmidt, C Max; Yip-Schneider, Michele; Cummings, Oscar W; Brand, Randall E; Zeh, Herbert J; Singhi, Aatur D; Scarpa, Aldo; Salvia, Roberto; Malleo, Giuseppe; Zamboni, Giuseppe; Falconi, Massimo; Jang, Jin-Young; Kim, Sun-Whe; Kwon, Wooil; Hong, Seung-Mo; Song, Ki-Byung; Kim, Song Cheol; Swan, Niall; Murphy, Jean; Geoghegan, Justin; Brugge, William; Fernandez-Del Castillo, Carlos; Mino-Kenudson, Mari; Schulick, Richard; Edil, Barish H; Adsay, Volkan; Paulino, Jorge; van Hooft, Jeanin; Yachida, Shinichi; Nara, Satoshi; Hiraoka, Nobuyoshi; Yamao, Kenji; Hijioka, Susuma; van der Merwe, Schalk; Goggins, Michael; Canto, Marcia Irene; Ahuja, Nita; Hirose, Kenzo; Makary, Martin; Weiss, Matthew J; Cameron, John; Pittman, Meredith; Eshleman, James R; Diaz, Luis A; Papadopoulos, Nickolas; Kinzler, Kenneth W; Karchin, Rachel; Hruban, Ralph H; Vogelstein, Bert; Lennon, Anne Marie

    2015-11-01

    The management of pancreatic cysts poses challenges to both patients and their physicians. We investigated whether a combination of molecular markers and clinical information could improve the classification of pancreatic cysts and management of patients. We performed a multi-center, retrospective study of 130 patients with resected pancreatic cystic neoplasms (12 serous cystadenomas, 10 solid pseudopapillary neoplasms, 12 mucinous cystic neoplasms, and 96 intraductal papillary mucinous neoplasms). Cyst fluid was analyzed to identify subtle mutations in genes known to be mutated in pancreatic cysts (BRAF, CDKN2A, CTNNB1, GNAS, KRAS, NRAS, PIK3CA, RNF43, SMAD4, TP53, and VHL); to identify loss of heterozygosity at CDKN2A, RNF43, SMAD4, TP53, and VHL tumor suppressor loci; and to identify aneuploidy. The analyses were performed using specialized technologies for implementing and interpreting massively parallel sequencing data acquisition. An algorithm was used to select markers that could classify cyst type and grade. The accuracy of the molecular markers was compared with that of clinical markers and a combination of molecular and clinical markers. We identified molecular markers and clinical features that classified cyst type with 90%-100% sensitivity and 92%-98% specificity. The molecular marker panel correctly identified 67 of the 74 patients who did not require surgery and could, therefore, reduce the number of unnecessary operations by 91%. We identified a panel of molecular markers and clinical features that show promise for the accurate classification of cystic neoplasms of the pancreas and identification of cysts that require surgery. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.

  17. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for access control systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition matters, and there are limits on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image quality problems, we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. It evaluates and utilizes only the reliable features among the trained ones during each authentication, achieving high recognition performance. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality: about four times lower EER (Equal Error Rate) under a variety of image conditions than the same system without prior probability distributions. In contrast, image-difference features without priors are sensitive to image quality. We also evaluated PCA, which has worse but consistent performance because of its general optimization over all data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.

  18. Semantic Feature Training in Combination with Transcranial Direct Current Stimulation (tDCS) for Progressive Anomia

    Directory of Open Access Journals (Sweden)

    Jinyi Hung

    2017-05-01

    Full Text Available We examined the effectiveness of a 2-week regimen of semantic feature training in combination with transcranial direct current stimulation (tDCS) for progressive naming impairment associated with primary progressive aphasia (N = 4) or early onset Alzheimer’s Disease (N = 1). Patients received a 2-week regimen (10 sessions) of anodal tDCS delivered over the left temporoparietal cortex while completing a language therapy that consisted of repeated naming and semantic feature generation. Therapy targets consisted of familiar people, household items, clothes, foods, places, hygiene implements, and activities. Untrained items from each semantic category provided item level controls. We analyzed naming accuracies at multiple timepoints (i.e., pre-, post-, and 6-month follow-up) via a mixed effects logistic regression and individual differences in treatment responsiveness using a series of non-parametric McNemar tests. Patients showed advantages for naming trained over untrained items. These gains were evident immediately post tDCS. Trained items also showed a shallower rate of decline over 6 months relative to untrained items, which showed continued progressive decline. Patients tolerated stimulation well, and sustained improvements in naming accuracy suggest that the current intervention approach is viable. Future implementation of a sham control condition will be crucial toward ascertaining whether neurostimulation and behavioral treatment act synergistically or alternatively whether treatment gains are exclusively attributable to either tDCS or the behavioral intervention.

  19. Recognizing stationary and locomotion activities using combinational of spectral analysis with statistical descriptors features

    Science.gov (United States)

    Zainudin, M. N. Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Pervasive computing has recently garnered a lot of attention due to high demand in various application domains. Human activity recognition (HAR) is among the most widely explored of these applications, providing valuable information about human behavior. Accelerometer-based sensing is commonly used in HAR research because the sensors are small and are already built into various types of smartphones. However, high inter-class similarity among activities tends to degrade recognition performance. Hence, this work presents a method for activity recognition using our proposed features, a combination of spectral analysis with statistical descriptors, that is able to tackle the issue of differentiating stationary and locomotion activities. The noisy signal is filtered and transformed using the Fourier Transform before two groups of features are extracted: spectral frequency features and statistical descriptors. The extracted features are then classified using a random forest ensemble classifier. The recognition results show good accuracy for stationary and locomotion activities on the USC-HAD dataset.
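
    The feature construction described above, spectral analysis combined with statistical descriptors, can be sketched for one accelerometer window as follows. The specific descriptors chosen here (mean, standard deviation, dominant frequency, spectral entropy, etc.) are illustrative assumptions, not the paper's exact feature list.

```python
import numpy as np

def window_features(signal, fs=100.0):
    """Combine time-domain statistical descriptors with spectral features
    for one accelerometer window (illustrative feature set)."""
    # Statistical descriptors in the time domain.
    stats = np.array([
        signal.mean(),
        signal.std(),
        np.abs(signal - signal.mean()).mean(),   # mean absolute deviation
        signal.max() - signal.min(),             # range
    ])
    # Spectral features from the magnitude spectrum (DC bin excluded).
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant_freq = freqs[1:][np.argmax(spectrum[1:])]
    p = spectrum[1:] / spectrum[1:].sum()
    spectral_entropy = -(p * np.log(p + 1e-12)).sum()
    energy = (spectrum[1:] ** 2).sum() / len(signal)
    return np.concatenate([stats, [dominant_freq, spectral_entropy, energy]])

# Walking-like window (2 Hz oscillation) vs. standing-like window (noise only).
t = np.arange(256) / 100.0
rng = np.random.default_rng(1)
walking = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)
standing = 0.05 * rng.normal(size=t.size)

fw = window_features(walking)
fs_ = window_features(standing)
print(fw.shape, fw[4], fs_[1])  # 7 features; dominant freq near 2 Hz for walking
```

Vectors like these, computed per window, are what an ensemble classifier such as a random forest would consume; the spectral terms separate periodic locomotion from stationary postures that the statistical terms alone confuse.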

  20. Planetary gearbox fault feature enhancement based on combined adaptive filter method

    Directory of Open Access Journals (Sweden)

    Shuangshu Tian

    2015-12-01

    Full Text Available The reliability of vibration signals acquired from a planetary gear system (an indispensable part of a wind turbine gearbox) is directly related to the accuracy of fault diagnosis. The complex operating environment introduces many interference signals into the vibration signals. Furthermore, both multiple gears meshing with each other and differences in transmission routes produce strong nonlinearity in the vibration signals, which makes it difficult to eliminate the noise. This article presents a combined adaptive filter method: taking a delayed signal as the reference signal, the self-adaptive noise cancellation method is adopted to eliminate white noise; meanwhile, by applying a Gaussian function to transform the input signal into a high-dimensional feature-space signal, the kernel least mean square algorithm is used to cancel nonlinear interference. The effectiveness of the method has been verified on simulated signals and test rig signals. On simulated signals, the signal-to-noise ratio is improved by around 30 dB (white noise) and the amplitude of the nonlinear interference signal is suppressed by up to 50%. Experimental results show remarkable improvements and enhanced gear fault features.
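
    A minimal sketch of the self-adaptive noise cancellation step, assuming a normalized-LMS update and a delayed copy of the signal as the reference (the kernel LMS stage for nonlinear interference is not reproduced here). The delay is chosen longer than the noise correlation length, so the filter can only predict the periodic gear-mesh component.

```python
import numpy as np

def sanc_nlms(x, delay=50, taps=32, mu=0.1, eps=1e-6):
    """Self-adaptive noise cancellation: a normalized-LMS filter predicts
    the periodic (correlated) part of x from a delayed copy of x; white
    noise decorrelates over the delay and is rejected from the output."""
    w = np.zeros(taps)
    out = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        ref = x[n - delay - taps:n - delay][::-1]  # delayed tap vector
        y = w @ ref                                # predicted periodic part
        e = x[n] - y                               # prediction error
        w += mu * e * ref / (ref @ ref + eps)      # NLMS weight update
        out[n] = y
    return out

rng = np.random.default_rng(0)
n = np.arange(4000)
clean = np.sin(2 * np.pi * 0.05 * n)        # gear-mesh-like periodic component
noisy = clean + 0.5 * rng.normal(size=n.size)

recovered = sanc_nlms(noisy)
burn = 1000  # skip the adaptation transient
corr = np.corrcoef(recovered[burn:], clean[burn:])[0, 1]
print(round(corr, 3))
```

After convergence the filter output tracks the periodic component closely while most of the additive white noise is left in the prediction error, which is the mechanism behind the SNR gain the abstract reports.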

  1. Visual Localization across Seasons Using Sequence Matching Based on Multi-Feature Combination.

    Science.gov (United States)

    Qiao, Yongliang

    2017-10-25

    Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). However, vision-based localization under seasonal change is one of the most challenging topics in computer vision and the intelligent vehicle community. The difficulty of this task is related to the strong appearance changes that occur in scenes due to weather or season changes. In this paper, a place recognition based visual localization method is proposed, which realizes localization by identifying previously visited places using sequence matching. It operates by matching query image sequences to an image database acquired previously (video acquired during an earlier traveling period). In this method, in order to improve matching accuracy, a multi-feature representation is constructed by combining a global GIST descriptor with the local binary feature CSLBP (center-symmetric local binary patterns) to represent each image sequence. Then, a similarity measure based on the Chi-square distance is used for effective sequence matching. For experimental evaluation, the relationship between image sequence length and sequence matching performance is studied. To show its effectiveness, the proposed method is tested and evaluated in outdoor environments across four seasons. The results show improved precision-recall performance against the state-of-the-art SeqSLAM algorithm.
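
    The sequence matching step can be sketched as follows: per-frame descriptors are compared with the Chi-square distance, and the query sequence is matched at the database offset with the lowest summed distance. The random histograms below are stand-ins for the GIST + CSLBP descriptors, which are not computed here.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two L1-normalized histogram descriptors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_sequence(query, database):
    """Slide the query sequence over the database and return the offset
    with the smallest summed per-frame chi-square distance."""
    L = len(query)
    costs = [sum(chi_square(q, d) for q, d in zip(query, database[s:s + L]))
             for s in range(len(database) - L + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(2)

# Stand-in descriptors: random 64-bin histograms playing the role of
# concatenated GIST + CSLBP features.
database = rng.random((100, 64))
database /= database.sum(axis=1, keepdims=True)

# Query: frames 40..49 revisited under mild appearance change (added noise).
true_offset = 40
query = database[true_offset:true_offset + 10] + 0.01 * rng.random((10, 64))
query /= query.sum(axis=1, keepdims=True)

print(match_sequence(query, database))  # → 40
```

Summing over a sequence rather than scoring single frames is what gives the method its robustness: per-frame distances corrupted by seasonal change average out over the window.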

  2. Visual Localization across Seasons Using Sequence Matching Based on Multi-Feature Combination

    Directory of Open Access Journals (Sweden)

    Yongliang Qiao

    2017-10-01

    Full Text Available Visual localization is widely used in autonomous navigation system and Advanced Driver Assistance Systems (ADAS). However, visual-based localization in seasonal changing situations is one of the most challenging topics in computer vision and the intelligent vehicle community. The difficulty of this task is related to the strong appearance changes that occur in scenes due to weather or season changes. In this paper, a place recognition based visual localization method is proposed, which realizes the localization by identifying previously visited places using the sequence matching method. It operates by matching query image sequences to an image database acquired previously (video acquired during traveling period). In this method, in order to improve matching accuracy, multi-feature is constructed by combining a global GIST descriptor and local binary feature CSLBP (Center-symmetric local binary patterns) to represent image sequence. Then, similarity measurement according to Chi-square distance is used for effective sequences matching. For experimental evaluation, the relationship between image sequence length and sequences matching performance is studied. To show its effectiveness, the proposed method is tested and evaluated in four seasons outdoor environments. The results have shown improved precision–recall performance against the state-of-the-art SeqSLAM algorithm.

  3. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture: not only the low-level combination (feature space combination), but also the mid-level combination (internal system representation combination) and the high-level combination (decoding combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
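
    The contrast between low-level and high-level combination can be illustrated with a toy example: early fusion concatenates the two feature streams per frame so a single recognizer sees one wider stream, while late fusion merges per-stream posteriors at decoding time. All dimensions and the posterior function below are hypothetical stand-ins, not the paper's BLSTM-CTC systems.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two feature streams for the same frame sequence (hypothetical dimensions).
T = 50                       # number of time frames
f1 = rng.random((T, 20))     # stream 1: 20-dim per frame
f2 = rng.random((T, 36))     # stream 2: 36-dim per frame

# Low-level (feature space) combination: concatenate per frame,
# producing a single 56-dim stream for one recognizer.
early = np.concatenate([f1, f2], axis=1)

def fake_posteriors(feats, n_classes=10, seed=0):
    """Stand-in for a trained recognizer's per-frame class posteriors:
    a random linear map followed by a softmax."""
    r = np.random.default_rng(seed)
    logits = feats @ r.random((feats.shape[1], n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# High-level (decoding) combination: one recognizer per stream,
# posteriors merged by averaging before decoding.
late = 0.5 * (fake_posteriors(f1, seed=1) + fake_posteriors(f2, seed=2))

print(early.shape, late.shape)  # (50, 56) (50, 10)
```

The mid-level variant the paper also explores would instead merge the recognizers' internal (hidden-state) representations, sitting between these two extremes.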

  4. An autopsy study of combined pulmonary fibrosis and emphysema: correlations among clinical, radiological, and pathological features

    Science.gov (United States)

    2014-01-01

    Background Clinical evaluation to differentiate the characteristic features of pulmonary fibrosis and emphysema is often difficult in patients with combined pulmonary fibrosis and emphysema (CPFE), but diagnosis of pulmonary fibrosis is important for evaluating treatment options and the risk of acute exacerbation of interstitial pneumonia in such patients. To our knowledge, this is the first report describing the correlation among clinical, radiological, and whole-lung pathological features in an autopsy series of CPFE patients. Methods Experts retrospectively reviewed the clinical charts and examined chest computed tomography (CT) images and pathological findings of an autopsy series of 22 CPFE patients, and compared these with findings from 8 idiopathic pulmonary fibrosis (IPF) patients and 17 emphysema-alone patients. Results All patients had a history of heavy smoking. Forced expiratory volume in 1 s/forced vital capacity (FEV1/FVC%) was significantly lower in the emphysema-alone group than in the CPFE and IPF-alone groups. The percent predicted diffusing capacity of the lung for carbon monoxide (DLCO%) was significantly lower in the CPFE group than in the IPF- and emphysema-alone groups. A usual interstitial pneumonia (UIP) pattern was observed radiologically in 15 (68.2%) CPFE and 8 (100%) IPF-alone patients and was pathologically observed in all patients from both groups. Pathologically, thick cystic lesions involving one or more acini, with dense wall fibrosis and occasional fibroblastic foci, surrounded by honeycombing and normal alveoli, were confirmed by post-mortem observation as thick-walled cystic lesions (TWCLs). Emphysematous destruction and enlargement of membranous and respiratory bronchioles with fibrosis were observed in the TWCLs. The cystic lesions were always larger than the cysts of honeycombing. The prevalence of both radiological and pathological TWCLs was 72.7% among CPFE patients, but no such lesions were observed in patients with IPF or emphysema alone.

  5. Downlink Radio Resource Management for LTE-Advanced System with Combined MU-MIMO and Carrier Aggregation Features

    DEFF Research Database (Denmark)

    Nguyen, Hung Tuan; Kovacs, Istvan

    2012-01-01

    In this paper we study the performance enhancement of a downlink LTE-Advanced system with a combination of the multi-user MIMO and carrier aggregation transmission techniques. Radio resource management for systems with the combined features is proposed, and the system performance is evaluated.

  6. The Clinical and Immunologic Features of Patients With Combined Anti-GBM Disease and Castleman Disease.

    Science.gov (United States)

    Gu, Qiu-Hua; Jia, Xiao-Yu; Hu, Shui-Yi; Wang, Su-Xia; Zou, Wan-Zhong; Cui, Zhao; Zhao, Ming-Hui

    2018-06-01

    Patients with both anti-glomerular basement membrane (anti-GBM) disease and Castleman disease have been rarely reported. In this study, we report 3 patients with this combination. They had immunologic features similar to patients with classic anti-GBM disease. Sera from the 3 patients recognized the noncollagenous (NC) domain of the α3 chain of type IV collagen (α3(IV)NC1) and its 2 major epitopes, EA and EB. All 4 immunoglobulin G (IgG) subclasses against α3(IV)NC1 were detectable, with predominance of IgG1. In one patient with lymph node biopsy specimens available, sporadic plasma cells producing α3(IV)NC1-IgG were found, suggesting a causal relationship between the 2 diseases. One patient, who achieved remission with antibody clearance and normalization of serum creatinine and interleukin 6 concentrations after plasma exchange and 3 cycles of chemotherapy, experienced recurrence of anti-GBM antibodies and an increase in interleukin 6 concentration after chemotherapy discontinuation because of adverse effects, but both returned to normal after another cycle of chemotherapy. This clinical course and the pathologic findings support the hypothesis that the Castleman disease-associated tumor cells are the source of the anti-GBM autoantibodies. Copyright © 2018 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  7. Registration Combining Wide and Narrow Baseline Feature Tracking Techniques for Markerless AR Systems

    Directory of Open Access Journals (Sweden)

    Bo Yang

    2009-12-01

    Full Text Available Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. Registration is one of the most difficult problems currently limiting the usability of AR systems. In this paper, we propose a novel natural feature tracking based registration method for AR applications. The proposed method has the following advantages: (1) it is simple and efficient, as no man-made markers are needed for either indoor or outdoor AR applications; moreover, it can work with arbitrary geometric shapes, including planar, near-planar, and non-planar structures, which greatly enhances the usability of AR systems; (2) thanks to the reduced-SIFT based augmented optical flow tracker, the virtual scene can still be augmented on the specified areas even under occlusion and large changes in viewpoint during the entire process; (3) it is easy to use, because the adaptive classification tree based matching strategy gives fast and accurate initialization, even when the initial camera view differs from the reference image to a large degree. Experimental evaluations validate the performance of the proposed method for online pose tracking and augmentation.

  8. Concepts, features, and design of a sixteen-to-four beam combiner for ILSE [Induction Linac Systems Experiment]

    International Nuclear Information System (INIS)

    Judd, D.L.; Celata, C.; Close, E.; Faltens, A.; Hahn, K.; La Mon, K.; Lee, E.P.; Smith, L.; Thur, W.

    1989-03-01

    Sixteen intense parallel ion beams are to be transversely combined into four by dispersionless double bends. Emittance growth due to electrostatic energy redistribution and to the geometry is evaluated. Most bending elements are electric, and alternate with AG electrostatic quadrupoles similar to those upstream. The final elements are magnetic, combining focusing and "unbending". Electrode shapes, pulsed-current arrays (having very small clearances), and mechanical and electric features of the combiner are described. 1 ref., 7 figs

  9. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  10. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  11. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
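
    A minimal sketch of the hybrid-feature idea shared by the three records above: a handcrafted texture descriptor (here a basic single-scale LBP histogram, not the paper's multi-level LBP) is concatenated with a deep embedding (here a random placeholder vector rather than real CNN output) before classification.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP histogram over a grayscale image:
    each pixel is encoded by thresholding its neighbours against it."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, nb in enumerate(neighbours):
        codes += (nb >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
face = rng.random((64, 64))          # stand-in for a face crop

handcrafted = lbp_histogram(face)    # 256-dim skin-texture descriptor
deep = rng.random(128)               # stand-in for a CNN embedding
deep /= np.linalg.norm(deep)

# Hybrid feature: L2-normalize each part, then concatenate; in the
# paper's pipeline the fused vector is fed to an SVM classifier.
hybrid = np.concatenate([handcrafted / np.linalg.norm(handcrafted), deep])
print(hybrid.shape)  # (384,)
```

Normalizing each part before concatenation keeps the two feature families on comparable scales, so neither the texture histogram nor the embedding dominates the downstream classifier.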

  12. C2 emission features in the Red Rectangle: A combined observational and laboratory study

    NARCIS (Netherlands)

    Wehres, N.; Romanzin, C.; Linnartz, H.; van Winckel, H.; Tielens, A. G. G. M.

    Context. The Red Rectangle proto-planetary nebula (HD 44179) is known for a number of rather narrow emission features superimposed on a broad extended red emission (ERE) covering the 5000-7500 angstrom regime. The origin of these emission features is unknown. Aims. The aim of the present work is to

  13. Single and Combined Diagnostic Value of Clinical Features and Laboratory Tests in Acute Appendicitis

    NARCIS (Netherlands)

    Laméris, Wytze; van Randen, Adrienne; Go, Peter M. N. Y. H.; Bouma, Wim H.; Donkervoort, Sandra C.; Bossuyt, Patrick M. M.; Stoker, Jaap; Boermeester, Marja A.

    2009-01-01

    Objectives: The objective was to evaluate the diagnostic accuracy of clinical features and laboratory test results in detecting acute appendicitis. Methods: Clinical features and laboratory test results were prospectively recorded in a consecutive series of 1,101 patients presenting with abdominal pain.

  14. Robust synthesis of gold cubic nanoframes through a combination of galvanic replacement, gold deposition, and silver dealloying.

    Science.gov (United States)

    Wan, Dehui; Xia, Xiaohu; Wang, Yucai; Xia, Younan

    2013-09-23

    A facile, robust approach to the synthesis of Au cubic nanoframes is described. The synthesis involves three major steps: 1) preparation of Au-Ag alloyed nanocages using a galvanic replacement reaction between Ag nanocubes and HAuCl4; 2) deposition of thin layers of pure Au onto the surfaces of the nanocages by reducing HAuCl4 with ascorbic acid; and 3) formation of Au cubic nanoframes through a dealloying process with HAuCl4. The key to the formation of Au cubic nanoframes is to coat the surfaces of the Au-Ag nanocages with sufficiently thick layers of Au before they are dealloyed. The Au layer could prevent the skeleton of a nanocage from being fragmented during the dealloying step. The as-prepared Au cubic nanoframes exhibit tunable localized surface plasmon resonance peaks in the near-infrared region, but with much lower Ag content as compared with the initial Au-Ag nanocages. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
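
    Genetic algorithm based feature selection over a 21-D feature vector can be sketched as follows. For brevity, a nearest-centroid training accuracy replaces the paper's SVM-based evaluation as the fitness function, and the data and informative-feature layout are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: 21 candidate features, only the first 3 informative.
n, d, informative = 200, 21, 3
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :informative] += 3.0 * y[:, None]   # class-dependent shift

def fitness(mask):
    """Nearest-centroid training accuracy on the selected feature subset
    (a cheap stand-in for an SVM-based fitness)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return np.mean(pred == y)

# Simple GA: boolean masks as individuals, elitism, uniform crossover,
# and bit-flip mutation.
pop = rng.random((30, d)) < 0.5
for _ in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    children = []
    for _ in range(len(pop) - len(elite)):
        a, b = elite[rng.integers(10, size=2)]
        child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(d) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(round(fitness(best), 2), int(best[:informative].sum()))
```

On this toy problem the surviving masks concentrate on the informative features, which is the behavior the paper exploits to prune its 21-D vectors.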

  16. Detection of corn and weed species by the combination of spectral, shape and textural features

    Science.gov (United States)

    Accurate detection of weeds in farmland can help reduce pesticide use and protect the agricultural environment. To develop intelligent equipment for weed detection, this study used an imaging spectrometer system, which supports micro-scale plant feature analysis by acquiring high-resolution hyper sp...

  17. Mitochondrial DNA deletion in a patient with combined features of Leigh and Pearson syndromes

    Energy Technology Data Exchange (ETDEWEB)

    Blok, R.B.; Thorburn, D.R.; Danks, D.M. [Royal Children's Hospital, Melbourne (Australia)] [and others]

    1994-09-01

    We describe a heteroplasmic 4237 bp mitochondrial DNA (mtDNA) deletion in an 11-year-old girl who has suffered from progressive illness since birth. She has some features of Leigh syndrome (global developmental delay with regression, brainstem dysfunction and lactic acidosis), together with other features suggestive of Pearson syndrome (history of pancytopenia and failure to thrive). The deletion was present at a level greater than 50% in skeletal muscle, but barely detectable in skin fibroblasts following Southern blot analysis, and only observed in blood following PCR analysis. The deletion spanned nt 9498 to nt 13734, and was flanked by a 12 bp direct repeat. Genes for cytochrome c oxidase subunit III, NADH dehydrogenase subunits 3, 4L, 4 and 5, and tRNAs for glycine, arginine, histidine, serine(AGY) and leucine(CUN) were deleted. Southern blotting also revealed an altered Apa I restriction site which was shown by sequence analysis to be caused by a G→A nucleotide substitution at nt 1462 in the 12S rRNA gene. This was presumed to be a polymorphism. No abnormalities of mitochondrial ultrastructure, distribution or of respiratory chain enzyme complexes I-IV in skeletal muscle were observed. Mitochondrial disorders with clinical features overlapping more than one syndrome have been reported previously. This case further demonstrates the difficulty in correlating observed clinical features with a specific mitochondrial DNA mutation.

  18. F-SVM: Combination of Feature Transformation and SVM Learning via Convex Relaxation

    OpenAIRE

    Wu, Xiaohe; Zuo, Wangmeng; Zhu, Yuanyuan; Lin, Liang

    2015-01-01

    The generalization error bound of support vector machine (SVM) depends on the ratio of radius and margin, while standard SVM only considers the maximization of the margin but ignores the minimization of the radius. Several approaches have been proposed to integrate radius and margin for joint learning of feature transformation and SVM classifier. However, most of them either require the form of the transformation matrix to be diagonal, or are non-convex and computationally expensive. In this ...

  19. A Framework of Change Detection Based on Combined Morphological Features and Multi-Index Classification

    Science.gov (United States)

    Li, S.; Zhang, S.; Yang, D.

    2017-09-01

    Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
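    The two spectral indices named in the abstract are simple per-pixel band computations. A minimal NumPy sketch, assuming reflectance bands scaled to [0, 1] and using the widely published EVI coefficients (not necessarily the paper's exact parameters):

    ```python
    import numpy as np

    def ndwi(green, nir):
        """Normalized difference water index: positive values suggest water."""
        return (green - nir) / (green + nir + 1e-12)

    def evi(nir, red, blue):
        """Enhanced vegetation index with the commonly used coefficients."""
        return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

    # toy reflectances: a water-like pixel and a vegetation-like pixel
    green = np.array([0.30, 0.08])
    red   = np.array([0.25, 0.06])
    blue  = np.array([0.20, 0.04])
    nir   = np.array([0.05, 0.50])

    water_mask = ndwi(green, nir) > 0
    veg_score  = evi(nir, red, blue)
    ```

    Thresholding such index images, followed by morphological cleanup, yields the per-class masks that a change-detection framework can compare between acquisition dates.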

  20. A FRAMEWORK OF CHANGE DETECTION BASED ON COMBINED MORPHOLOGICAL FEATURES AND MULTI-INDEX CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Li

    2017-09-01

    Full Text Available Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, is different from traditional methods; it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the extraction results for water. HSV transformation and threshold segmentation extract and remove the effects of shadows on the extraction results. Change detection is performed on these results. One of the advantages of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another advantage is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.

  1. Combining metal oxide affinity chromatography (MOAC) and selective mass spectrometry for robust identification of in vivo protein phosphorylation sites

    Directory of Open Access Journals (Sweden)

    Weckwerth Wolfram

    2005-11-01

    Full Text Available Abstract Background Protein phosphorylation is accepted as a major regulatory pathway in plants. More than 1000 protein kinases are predicted in the Arabidopsis proteome; however, only a few studies look systematically for in vivo protein phosphorylation sites. Owing to the low stoichiometry and low abundance of phosphorylated proteins, phosphorylation site identification using mass spectrometry imposes difficulties. Moreover, the often observed poor quality of mass spectra derived from phosphopeptides frequently results in uncertain database hits. Thus, several lines of evidence have to be combined for a precise phosphorylation site identification strategy. Results Here, a strategy is presented that combines enrichment of phosphoproteins using a technique termed metal oxide affinity chromatography (MOAC) and selective ion trap mass spectrometry. The complete approach involves (i) enrichment of proteins with low phosphorylation stoichiometry out of complex mixtures using MOAC, (ii) gel separation and detection of phosphorylation using specific fluorescence staining (confirmation of enrichment), (iii) identification of phosphoprotein candidates out of the SDS-PAGE using liquid chromatography coupled to mass spectrometry, and (iv) identification of phosphorylation sites of these enriched proteins using automatic detection of H3PO4 neutral loss peaks and data-dependent MS3-fragmentation of the corresponding MS2-fragment. The utility of this approach is demonstrated by the identification of phosphorylation sites in Arabidopsis thaliana seed proteins. Regulatory importance of the identified sites is indicated by conservation of the detected sites in gene families such as ribosomal proteins and sterol dehydrogenases. To demonstrate further the wide applicability of MOAC, phosphoproteins were enriched from Chlamydomonas reinhardtii cell cultures. Conclusion A novel phosphoprotein enrichment procedure MOAC was applied to seed proteins of A. thaliana and to
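    Step (iv) of the workflow scans each MS2 spectrum for the characteristic H3PO4 neutral-loss peak before triggering MS3. A minimal sketch of that check, where the mass tolerance and charge handling are illustrative assumptions rather than the paper's exact acquisition settings:

    ```python
    H3PO4_MASS = 97.9769  # monoisotopic mass of the H3PO4 neutral loss, in Da

    def has_h3po4_neutral_loss(precursor_mz, charge, fragment_mzs, tol=0.5):
        """Return True if an MS2 fragment sits at the precursor m/z minus the
        charge-reduced H3PO4 loss, i.e. the spectrum qualifies for MS3."""
        target = precursor_mz - H3PO4_MASS / charge
        return any(abs(mz - target) <= tol for mz in fragment_mzs)

    # doubly charged phosphopeptide precursor at m/z 500.00 -> target ~ 451.01
    flagged = has_h3po4_neutral_loss(500.0, 2, [451.0, 600.0])
    ```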

  2. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

    Science.gov (United States)

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Images containing visual objects can be successfully categorized using single-trial electroencephalograph (EEG) data measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing the four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was then used to classify EEG features extracted from the ERP components. First, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracies by exploiting the complementarity of the discriminative information in the ERP components. These findings confirm that four categories of object images can be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
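    For two classes, Fisher-LDA reduces to projecting onto w = Sw⁻¹(μ₁ − μ₀); a four-category problem like the one above can be handled one-vs-rest. A minimal two-class sketch on simulated "ERP feature" vectors (the data are synthetic, not the study's recordings):

    ```python
    import numpy as np

    def fisher_direction(X0, X1):
        """Two-class Fisher discriminant direction: w = Sw^-1 (mu1 - mu0)."""
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        # small ridge term keeps the within-class scatter invertible
        return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)

    rng = np.random.default_rng(0)
    # simulated N1-component feature vectors for two stimulus categories
    X0 = rng.normal(loc=0.0, size=(100, 8))
    X1 = rng.normal(loc=1.5, size=(100, 8))

    w = fisher_direction(X0, X1)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    acc = 0.5 * ((X0 @ w < thresh).mean() + (X1 @ w > thresh).mean())
    ```

    Concatenating features from several ERP components simply widens the feature vectors before computing w, which is how component combination enters the pipeline.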

  3. A robust observer based on H∞ filtering with parameter uncertainties combined with Neural Networks for estimation of vehicle roll angle

    Science.gov (United States)

    Boada, Beatriz L.; Boada, Maria Jesus L.; Vargas-Melendez, Leandro; Diaz, Vicente

    2018-01-01

    Nowadays, one of the main objectives in road transport is to decrease the number of accident victims. Rollover accidents cause nearly 33% of all deaths from passenger vehicle crashes. Roll Stability Control (RSC) systems help prevent untripped rollover accidents. The lateral load transfer is the main parameter taken into account in RSC systems. This parameter is related to the roll angle, which can be directly measured with a dual-antenna GPS; nevertheless, this is a costly technique, so the roll angle has to be estimated instead. In this paper, a novel observer based on H∞ filtering in combination with a neural network (NN) is proposed for vehicle roll angle estimation. The design of this observer is based on four main criteria: to use a simplified vehicle model, to use signals from sensors already installed onboard current vehicles, to consider the inaccuracy of the system model, and to attenuate the effect of external disturbances. Experimental results show the effectiveness of the proposed observer.

  4. Predicting human splicing branchpoints by combining sequence-derived features and multi-label learning methods.

    Science.gov (United States)

    Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping

    2017-12-01

    Alternative splicing, the process that removes introns and joins exons from a single gene's transcript, is critical for gene expression, and splicing branchpoints are key signals for alternative splicing. Wet-lab experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. To guide wet-lab experiments, we develop computational methods to predict human splicing branchpoints. Considering the fact that an intron may have multiple branchpoints, we cast branchpoint prediction as a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes that integrate different features and different classifiers to build ensemble learning systems for branchpoint prediction. One is a genetic algorithm-based weighted average ensemble method; the other is a logistic regression-based ensemble method. In the computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.
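    At prediction time, the weighted-average ensemble scheme is just a convex combination of the per-classifier probability outputs (the genetic algorithm only tunes the weights offline). A minimal sketch with made-up probability matrices standing in for the base classifiers:

    ```python
    import numpy as np

    def weighted_average_ensemble(prob_matrices, weights):
        """Convex combination of per-classifier probability matrices
        (rows = candidate sites, columns = labels)."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return sum(wi * p for wi, p in zip(w, prob_matrices))

    # two hypothetical base classifiers scoring three intron positions
    p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
    p2 = np.array([[0.7, 0.3], [0.4, 0.6], [0.1, 0.9]])
    combined = weighted_average_ensemble([p1, p2], weights=[2.0, 1.0])
    ```

    Because each input row sums to one and the weights are normalized, every combined row is again a valid probability distribution.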

  5. Features of ophthalmoneuroprotection in patients with open-angle glaucoma in combination with diabetic retinopathy

    Directory of Open Access Journals (Sweden)

    O. I. Borzunov

    2014-07-01

    Full Text Available Aim: to evaluate ophthalmoneuroprotective treatment of patients with POAG and diabetes mellitus type II in a specialized hospital. Material and methods: We performed a retro- and prospective analysis of the combined treatment of 130 patients (248 eyes) with a combination of primary open-angle glaucoma and diabetic retinopathy. Effectiveness of treatment was evaluated by the following criteria: the severity of the hypotensive effect, the degree of improvement and the duration of remission of major ophthalmic indicators. The patients were divided into four clinically homogeneous groups: primary, 40 people (77 eyes); comparison group I, 37 persons (71 eyes); comparison group II, 33 people (60 eyes); control group, 20 people (40 eyes). Results: Combinations of laser and conservative treatment strategies were tested, aiming at an optimal balance between improved performance and reduced ocular side effects: Retinalamin® 5 mg parabulbar, No. 10; Tanakan 1 tablet 3 times a day for 3 months. The optimal timing of re-treatment is at least once every 9 months; in case of significant progression of glaucomatous optic neuropathy, the timing is decided individually.

  6. TRAIL and proteasome inhibitors combination induces a robust apoptosis in human malignant pleural mesothelioma cells through Mcl-1 and Akt protein cleavages

    International Nuclear Information System (INIS)

    Yuan, Bao-Zhu; Chapman, Joshua; Ding, Min; Wang, Junzhi; Jiang, Binghua; Rojanasakul, Yon; Reynolds, Steven H

    2013-01-01

    Malignant pleural mesothelioma (MPM) is an aggressive malignancy closely associated with asbestos exposure and extremely resistant to current treatments. It exhibits a steady increase in incidence, necessitating the urgent development of effective new treatments. Proteasome inhibitors (PIs) and TNFα-Related Apoptosis Inducing Ligand (TRAIL) have emerged as promising new anti-MPM agents. To develop effective new treatments, the proapoptotic effects of the PIs MG132 or Bortezomib and of TRAIL were investigated in the MPM cell lines NCI-H2052, NCI-H2452 and NCI-H28, which represent the three major histological types of human MPM. Treatment with 0.5-1 μM MG132 alone or 30 ng/mL Bortezomib alone induced a limited apoptosis in MPM cells associated with an elevated Mcl-1 protein level and hyperactive PI3K/Akt signaling. Whereas 10-20 ng/mL TRAIL alone likewise induced only limited apoptosis, the TRAIL and PI combination triggered a robust apoptosis in all three MPM cell lines. The robust proapoptotic activity was found to be the consequence of an amplification of caspase activation and cleavage of both Mcl-1 and Akt proteins governed by a positive feedback mechanism, and exhibited relative selectivity for MPM cells over non-tumorigenic Met-5A mesothelial cells. The combinatorial treatment using TRAIL and a PI may represent an effective new treatment for MPMs.

  7. Feature Extraction

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will focus on pros and cons of various variable selection methods and focus on those that are most relevant in the context of HEP.

  8. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to evaluate its performance in predicting customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.

  9. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that merely allocate weights to each fragment, this method first defines discrimination and uniqueness for each local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the currently valid fragments, excluding occluded or highly deformed ones. Based on those valid fragments, a fragment-based color histogram provides a structured and effective description of the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and the similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.
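    The per-fragment color-histogram comparison at the heart of such trackers is commonly a Bhattacharyya coefficient, with fragments dropped when their similarity to the template falls below a threshold. A minimal sketch; the bin layout and the validity threshold are illustrative assumptions, not the paper's exact values:

    ```python
    import numpy as np

    def bhattacharyya(h1, h2):
        """Similarity in [0, 1] between two (unnormalized) histograms."""
        h1 = h1 / h1.sum()
        h2 = h2 / h2.sum()
        return float(np.sum(np.sqrt(h1 * h2)))

    def valid_fragments(template_hists, candidate_hists, threshold=0.7):
        """Keep fragments whose candidate histogram still matches the template,
        excluding occluded or heavily deformed ones."""
        return [i for i, (ht, hc) in enumerate(zip(template_hists, candidate_hists))
                if bhattacharyya(ht, hc) >= threshold]

    h_ref = np.array([1.0, 0.0, 0.0])
    h_occluded = np.array([0.0, 1.0, 0.0])
    kept = valid_fragments([h_ref, h_ref], [h_ref, h_occluded])
    ```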

  10. Combination of silicon nitride and porous silicon induced optoelectronic features enhancement of multicrystalline silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Rabha, Mohamed Ben; Dimassi, Wissem; Gaidi, Mounir; Ezzaouia, Hatem; Bessais, Brahim [Laboratoire de Photovoltaique, Centre de Recherches et des Technologies de l' Energie, Technopole de Borj-Cedria, BP 95, 2050 Hammam-Lif (Tunisia)

    2011-06-15

    The effects of antireflection coating (ARC) and surface passivation films on the optoelectronic features of multicrystalline silicon (mc-Si) were investigated in order to produce high-efficiency solar cells. A double layer consisting of Plasma Enhanced Chemical Vapor Deposition (PECVD) silicon nitride (SiNx) on porous silicon (PS) was achieved on mc-Si surfaces. It was found that this treatment decreases the total surface reflectivity from about 25% to around 6% in the 450-1100 nm wavelength range. As a result, the effective minority carrier diffusion length, estimated from the laser-beam-induced current (LBIC) method, was found to increase from 312 μm for PS-treated cells to about 798 μm for SiNx/PS-treated ones. The deposition of SiNx was found to impressively enhance the minority carrier diffusion length, probably due to hydrogen passivation of surface, grain boundary and bulk defects. Fourier Transform Infrared Spectroscopy (FTIR) shows that the vibration modes of the highly suitable passivating Si-H bonds exhibit frequency shifts toward higher wavenumbers, depending on the ratio x of the introduced neighboring N atoms. (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    International Nuclear Information System (INIS)

    Wang, Xun-Heng; Li, Lihua

    2015-01-01

    Highlights: • Temporal patterns within ICNs provide a new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: To investigate the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD), and to analyze the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with the combined type of ADHD and a cohort of age-matched healthy boys were recruited from the ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitudes of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related networks. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within the posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than in normal controls during resting state; the temporal features within ICNs could

  12. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xun-Heng, E-mail: xhwang@hdu.edu.cn [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China); School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096 (China); Li, Lihua [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China)

    2015-05-15

    Highlights: • Temporal patterns within ICNs provide a new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: To investigate the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD), and to analyze the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with the combined type of ADHD and a cohort of age-matched healthy boys were recruited from the ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitudes of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related networks. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within the posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than in normal controls during resting state; the temporal features within ICNs could

  13. Identification of endometrial cancer methylation features using combined methylation analysis methods.

    Directory of Open Access Journals (Sweden)

    Michael P Trimarchi

    Full Text Available DNA methylation is a stable epigenetic mark that is frequently altered in tumors. DNA methylation features are attractive biomarkers for disease states given the stability of DNA methylation in living cells and in the biologic specimens typically available for analysis. Widespread accumulation of methylation in regulatory elements in some cancers (specifically, the CpG island methylator phenotype, CIMP) can play an important role in tumorigenesis. High resolution assessment of CIMP for the entire genome, however, remains cost prohibitive and requires quantities of DNA not available for many tissue samples of interest. Genome-wide scans of methylation have been undertaken for large numbers of tumors, and higher resolution analyses for a limited number of cancer specimens. Methods for analyzing such large datasets and integrating findings from different studies continue to evolve. An approach for comparison of findings from a genome-wide assessment of the methylated component of tumor DNA and more widely applied methylation scans was developed. Methylomes for 76 primary endometrial cancer and 12 normal endometrial samples were generated using methylated fragment capture and second generation sequencing (MethylCap-seq). Publicly available Infinium HumanMethylation450 data from The Cancer Genome Atlas (TCGA) were compared to the MethylCap-seq data. Analysis of methylation in promoter CpG islands (CGIs) identified a subset of tumors with a methylator phenotype. We used a two-stage approach to develop a 13-region methylation signature associated with a "hypermethylator state." High-level methylation for the 13-region methylation signature was associated with mismatch repair deficiency, a high mutation rate, and low somatic copy number alteration in the TCGA test set. In addition, the signature devised showed good agreement with methylation clusters previously devised by TCGA. We identified a methylation signature for a "hypermethylator phenotype" in

  14. Combination of Biorthogonal Wavelet Hybrid Kernel OCSVM with Feature Weighted Approach Based on EVA and GRA in Financial Distress Prediction

    Directory of Open Access Journals (Sweden)

    Chao Huang

    2014-01-01

    Full Text Available Financial distress prediction plays an important role in the survival of companies. In this paper, a novel biorthogonal wavelet hybrid kernel function is constructed by combining a linear kernel function with a biorthogonal wavelet kernel function. Besides, a new feature weighted approach is presented based on economic value added (EVA) and grey relational analysis (GRA). Considering the imbalance between financially distressed companies and normal ones, the feature weighted one-class support vector machine based on the biorthogonal wavelet hybrid kernel (BWH-FWOCSVM) is further put forward for financial distress prediction. The empirical study with real data from listed companies on the Growth Enterprise Market (GEM) in China shows that the proposed approach has good performance.
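    The "feature weighted" part of such a method amounts to rescaling each input dimension by its EVA/GRA-derived weight before the kernel is evaluated. A minimal sketch with a plain RBF kernel standing in for the biorthogonal wavelet hybrid kernel (the weights here are illustrative, not derived from financial data):

    ```python
    import numpy as np

    def feature_weighted_rbf(X, Y, weights, gamma=1.0):
        """RBF kernel on feature-weighted inputs:
        K(x, y) = exp(-gamma * || sqrt(w) * (x - y) ||^2)."""
        w = np.sqrt(np.asarray(weights, dtype=float))
        Xw, Yw = X * w, Y * w
        d2 = ((Xw[:, None, :] - Yw[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    X = np.array([[1.0, 0.0], [0.0, 1.0]])
    # zero weight on the second feature: distances use only the first one
    K = feature_weighted_rbf(X, X, weights=[1.0, 0.0])
    ```

    The resulting Gram matrix can be handed to any kernel machine, including a one-class SVM, so down-weighted features contribute less to the decision boundary.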

  15. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Full Text Available Abstract Background A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping the N top-ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL to real microarray data, as well as to simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and numbers of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.
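    The exemplar-selection step rests on standard affinity propagation (Frey & Dueck's message-passing updates). A compact NumPy sketch of that algorithm on a toy similarity matrix - the two-cluster "gene" data, the median-similarity preference and the damping schedule are illustrative assumptions, not mAP-KL's exact settings:

    ```python
    import numpy as np

    def affinity_propagation(S, damping=0.9, iters=200):
        """Minimal affinity propagation: returns the exemplar indices."""
        n = S.shape[0]
        A = np.zeros((n, n))  # availabilities
        R = np.zeros((n, n))  # responsibilities
        for _ in range(iters):
            # responsibilities: r(i,k) = s(i,k) - max_{k'!=k} [a(i,k') + s(i,k')]
            AS = A + S
            idx = AS.argmax(axis=1)
            first = AS[np.arange(n), idx].copy()
            AS[np.arange(n), idx] = -np.inf
            second = AS.max(axis=1)
            Rnew = S - first[:, None]
            Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
            R = damping * R + (1 - damping) * Rnew
            # availabilities: a(i,k) = min(0, r(k,k) + sum_{i'} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            Anew = Rp.sum(axis=0)[None, :] - Rp
            dA = np.diag(Anew).copy()
            Anew = np.minimum(Anew, 0)
            np.fill_diagonal(Anew, dA)
            A = damping * A + (1 - damping) * Anew
        return np.flatnonzero(np.diag(A) + np.diag(R) > 0)

    # two tight "gene clusters" in a toy 2-D expression space
    pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
    S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(S, np.median(S[S < 0]))  # preference = median similarity
    exemplars = affinity_propagation(S)
    ```

    In the mAP-KL setting the points would be top-ranked genes (after multiple hypothesis testing) and only the exemplars are kept as the signature.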

  16. A versatile method for combining different biopolymers in a core/shell fashion by 3D plotting to achieve mechanically robust constructs.

    Science.gov (United States)

    Akkineni, Ashwini Rahul; Ahlfeld, Tilman; Lode, Anja; Gelinsky, Michael

    2016-10-07

    Three-dimensional extrusion of two different biomaterials in a core/shell (c/s) fashion has gained much interest in the last couple of years as it allows for fabricating constructs with novel and interesting properties. We now demonstrate that combining highly concentrated (16.7 wt%) alginate hydrogels as shell material with low concentrated, soft biopolymer hydrogels as core leads to mechanically stable and robust 3D scaffolds. Alginate, chitosan, gellan gum, gelatin and collagen hydrogels were utilized successfully as core materials - hydrogels which are too soft for 3D plotting of open-porous structures without additional mechanical support. The respective c/s scaffolds were characterized concerning their morphology, mechanical properties and swelling behavior. It could be shown that both the core and the shell can be loaded with growth factors and that the release depends on core composition and shell thickness. Neither the plotting process nor the crosslinking with 1 M CaCl2 denatured the proteins. When core and shell were loaded with different growth factors (VEGF and BMP-2, respectively), a dual release was achieved. Finally, live human endothelial cells were integrated in the core material, demonstrating that this new strategy can be used for bioprinting purposes as well.

  17. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....

  18. The Effect of Creative Tasks on Electrocardiogram: Using Linear and Nonlinear Features in Combination with Classification Approaches

    Directory of Open Access Journals (Sweden)

    Sahar Zakeri

    2017-02-01

    Full Text Available Objective: Interest in the subject of creativity and its impacts on human life is growing extensively. However, only a few surveys pay attention to the relation between creativity and physiological changes. This paper presents a novel approach to distinguishing between creativity states from electrocardiogram (ECG) signals. Nineteen linear and nonlinear features of the cardiac signal were extracted to detect creativity states. Method: ECG signals of 52 participants were recorded while doing three tasks of the Torrance Tests of Creative Thinking (TTCT/figural B). To remove artifacts, a 50 Hz notch filter and a Chebyshev II filter were applied. According to TTCT scores, participants were categorized into high and low creativity groups: participants with scores higher than 70 were assigned to the high creativity group and those with scores less than 30 were considered the low creativity group. Linear and nonlinear features were extracted from the ECGs. Then, a Support Vector Machine (SVM) and an Adaptive Neuro-Fuzzy Inference System (ANFIS) were used to classify the groups. Results: Applying the Wilcoxon test, significant differences were observed between rest and each of the three creativity tasks, with better discrimination between rest and the first task. In addition, there were no statistical differences between the second and third tasks of the test. The results indicated that the SVM effectively detects all three tasks from rest, particularly task 1, and reached a maximum accuracy of 99.63% in the linear analysis. In addition, the high creative group was separated from the low creative group with an accuracy of 98.41%. Conclusion: The combination of an SVM classifier with linear features can be useful to show the relation between creativity and physiological changes.
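    Linear ECG features of the kind used in such studies are typically time-domain heart-rate-variability measures computed from the RR-interval series. The study's exact 19-feature set is not listed here, so the four below are standard stand-ins, not the authors' definitions:

    ```python
    import numpy as np

    def hrv_linear_features(rr_seconds):
        """Standard time-domain HRV features from an RR-interval series (seconds)."""
        rr = np.asarray(rr_seconds, dtype=float)
        diffs = np.diff(rr)
        return {
            "mean_rr": rr.mean(),
            "sdnn": rr.std(ddof=1),                   # overall variability
            "rmssd": np.sqrt(np.mean(diffs ** 2)),    # beat-to-beat variability
            "pnn50": np.mean(np.abs(diffs) > 0.050),  # fraction of diffs > 50 ms
        }

    feats = hrv_linear_features([0.80, 0.82, 0.78, 0.90, 0.85])
    ```

    One such feature vector per task segment is what would then be fed to the SVM or ANFIS classifier.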

  19. The Personality Assessment Inventory as a proxy for the Psychopathy Checklist Revised: testing the incremental validity and cross-sample robustness of the Antisocial Features Scale.

    Science.gov (United States)

    Douglas, Kevin S; Guy, Laura S; Edens, John F; Boer, Douglas P; Hamilton, Jennine

    2007-09-01

    The Personality Assessment Inventory's (PAI's) ability to predict psychopathic personality features, as assessed by the Psychopathy Checklist-Revised (PCL-R), was examined. To investigate whether the PAI Antisocial Features (ANT) Scale and subscales possessed incremental validity beyond other theoretically relevant PAI scales, optimized regression equations were derived in a sample of 281 Canadian federal offenders. ANT, or ANT-Antisocial Behavior (ANT-A), demonstrated unique variance in regression analyses predicting PCL-R total and Factor 2 (Lifestyle Impulsivity and Social Deviance) scores, but only the Dominance (DOM) Scale was retained in models predicting Factor 1 (Interpersonal and Affective Deficits). Attempts to cross-validate the regression equations derived from the first sample on a sample of 85 U.S. sex offenders resulted in considerable validity shrinkage, with the ANT Scale in isolation performing comparably to or better than the statistical models for PCL-R total and Factor 2 scores. Results offer limited evidence of convergent validity between the PAI and the PCL-R.

  20. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ... "knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  1. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made requirements to the robustness of new structures essential. According to Danish design rules, robustness shall be documented for all structures in the high consequence class. The design procedure to document sufficient robustness consists of: 1) review of loads and possible failure modes/scenarios and determination of acceptable collapse extent; 2) review...

  2. USING COMBINATION OF PLANAR AND HEIGHT FEATURES FOR DETECTING BUILT-UP AREAS FROM HIGH-RESOLUTION STEREO IMAGERY

    Directory of Open Access Journals (Sweden)

    F. Peng

    2017-09-01

    Full Text Available Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition; otherwise, built-up area detection accuracy is affected. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor features, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.

  3. Using Combination of Planar and Height Features for Detecting Built-Up Areas from High-Resolution Stereo Imagery

    Science.gov (United States)

    Peng, F.; Cai, X.; Tan, W.

    2017-09-01

    Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as image acquisition; otherwise, built-up area detection accuracy is affected. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor features, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.
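
The decision-level fusion step described above can be sketched as a majority vote over the raw built-up masks. This is an assumption-laden toy: the DSM, the 5 m height threshold, and the Pantex/Gabor masks are synthetic stand-ins, not the actual texture operators.

```python
# Hedged sketch of decision-level fusion: raw built-up masks from
# height (DSM), a Pantex-like response and a Gabor-like response are
# fused by majority vote. All rasters here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
dsm = rng.uniform(0, 2, (h, w))          # ground-level noise
dsm[16:48, 16:48] += 12.0                # a block of ~12 m buildings

height_mask = dsm > 5.0                  # raw mask from above-ground height
pantex_mask = np.zeros((h, w), bool); pantex_mask[14:50, 14:50] = True
gabor_mask = np.zeros((h, w), bool);  gabor_mask[18:46, 18:46] = True

# majority vote over the three raw built-up masks
votes = height_mask.astype(int) + pantex_mask + gabor_mask
built_up = votes >= 2
print(built_up.sum())
```

Majority voting suppresses pixels flagged by only one cue, which is the usual rationale for fusing independent planar and height evidence at the decision level.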

  4. Bound feature combinations in visual short-term memory are fragile but influence long-term learning

    NARCIS (Netherlands)

    Logie, R.H.; Brockmole, J.R.; Vandenbroucke, A.R.E.

    2009-01-01

    We explored whether individual features and bindings between those features in VSTM tasks are completely lost from trial to trial or whether residual memory traces for these features and bindings are retained in long-term memory. Memory for arrays of coloured shapes was assessed using change

  5. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach
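
The two sequential classifiers at the core of TifMA can be sketched as below, with the first stage separating MNA-corrupted from MNA-free segments on time-frequency features. Signals, labels, and the two features are synthetic stand-ins, not the study's data.

```python
# Hedged sketch of TifMA's stage-1 classifier: an SVM on
# time-frequency spectral features of synthetic PPG segments.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

fs = 100.0
rng = np.random.default_rng(1)

def segment(noise):
    # 8 s synthetic PPG: 1.5 Hz pulse plus additive noise
    t = np.arange(0, 8, 1 / fs)
    return np.sin(2 * np.pi * 1.5 * t) + noise * rng.standard_normal(t.size)

def tf_features(sig):
    # fraction of spectral power in the HR band, plus spectral spread
    f, _, Sxx = spectrogram(sig, fs=fs, nperseg=128)
    band = Sxx[(f > 0.5) & (f < 3.0)].sum()
    return [band / Sxx.sum(), Sxx.std()]

clean = [tf_features(segment(0.1)) for _ in range(10)]
noisy = [tf_features(segment(2.0)) for _ in range(10)]
X = np.array(clean + noisy)
y = np.array([0] * 10 + [1] * 10)        # 0 = MNA-free, 1 = MNA-corrupted

stage1 = SVC(kernel="rbf", gamma="scale").fit(X, y)
# stage 2 would be trained analogously on corrupted segments only,
# labelled by whether a reference HR could still be recovered.
print(stage1.score(X, y))
```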

  6. Robust and Adaptive Control With Aerospace Applications

    CERN Document Server

    Lavretsky, Eugene

    2013-01-01

    Robust and Adaptive Control shows the reader how to produce consistent and accurate controllers that operate in the presence of uncertainties and unforeseen events. Driven by aerospace applications, the focus of the book is primarily on continuous dynamical systems. The text is a three-part treatment, beginning with robust and optimal linear control methods and moving on to a self-contained presentation of the design and analysis of model reference adaptive control (MRAC) for nonlinear uncertain dynamical systems. Recent extensions and modifications to MRAC design are included, as are guidelines for combining robust optimal and MRAC controllers. Features of the text include: case studies that demonstrate the benefits of robust and adaptive control for piloted, autonomous and experimental aerial platforms; detailed background material for each chapter to motivate theoretical developments; realistic examples and simulation data illustrating key features ...

  7. A novel method combining cellular neural networks and the coupled nonlinear oscillators' paradigm involving a related bifurcation analysis for robust image contrast enhancement in dynamically changing difficult visual environments

    International Nuclear Information System (INIS)

    Chedjou, Jean Chamberlain; Kyamakya, Kyandoghere

    2010-01-01

    It is well known that a machine vision-based analysis of a dynamic scene, for example in the context of advanced driver assistance systems (ADAS), does require real-time processing capabilities. Therefore, the system used must be capable of performing both robust and ultrafast analyses. Machine vision in ADAS must fulfil the above requirements when dealing with a dynamically changing visual context (i.e. driving in darkness or in a foggy environment, etc). Among the various challenges related to the analysis of a dynamic scene, this paper focuses on contrast enhancement, which is a well-known basic operation to improve the visual quality of an image (dynamic or static) suffering from poor illumination. The key objective is to develop a systematic and fundamental concept for image contrast enhancement that should be robust despite a dynamic environment and that should fulfil the real-time constraints by ensuring an ultrafast analysis. It is demonstrated that the new approach developed in this paper is capable of fulfilling the expected requirements. The proposed approach combines the good features of the 'coupled oscillators'-based signal processing paradigm with the good features of the 'cellular neural network (CNN)'-based one. The first paradigm in this combination is the 'master system' and consists of a set of coupled nonlinear ordinary differential equations (ODEs) that are (a) the so-called 'van der Pol oscillator' and (b) the so-called 'Duffing oscillator'. It is then implemented or realized on top of a 'slave system' platform consisting of a CNN-processors platform. An offline bifurcation analysis is used to find out, a priori, the windows of parameter settings in which the coupled oscillator system exhibits the best and most appropriate behaviours of interest for an optimal resulting image processing quality. 
In the frame of the extensive bifurcation analysis carried out, analytical formulae have been derived, which are capable of determining the various
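
The 'master system' of coupled van der Pol and Duffing oscillators named above can be sketched as a pair of coupled ODEs integrated numerically. The parameter values and the linear coupling term here are illustrative assumptions, and the CNN-based image-processing stage is omitted entirely.

```python
# Hedged sketch: numerical integration of a van der Pol oscillator
# linearly coupled to an (unforced) Duffing oscillator. Parameters
# and the coupling strength k are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

mu, alpha, beta, k = 1.0, -1.0, 1.0, 0.2   # assumed parameters

def coupled(t, s):
    x, vx, y, vy = s
    dvx = mu * (1 - x**2) * vx - x + k * (y - x)   # van der Pol
    dvy = -alpha * y - beta * y**3 + k * (x - y)   # Duffing (double well)
    return [vx, dvx, vy, dvy]

sol = solve_ivp(coupled, (0, 50), [0.1, 0.0, 0.1, 0.0], max_step=0.05)
print(sol.y.shape)
```

A bifurcation analysis, as in the record, would sweep parameters such as mu, alpha, beta and k and classify the resulting long-term behaviour of trajectories like this one.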

  8. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature-mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide-strand-to-passenger-strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5' most base of the guide strand is the most informative. Conclusion The capacity of specific feature-mapping methods and their ability to build predictive models of RNAi activity suggests a relative biological importance of these features. Some feature-mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased agreement between predicted and observed activities, both within datasets by cross-validation and between independently collected RNAi activity datasets. Feature filtering should be approached carefully, in that it is possible to reduce feature set size without substantially weakening predictive models, but the features retained in the candidate models become increasingly distinct.
Software to perform feature prediction and SVM training and testing on nucleic acid
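
The primary feature mapping named above, position-specific nucleotide composition, together with kernel SVM regression can be sketched as follows. The sequences, the synthetic activity rule, and the hyperparameters are assumptions for illustration only.

```python
# Hedged sketch: position-specific nucleotide composition (one-hot
# per position) feeding an RBF-kernel SVM regression of siRNA
# activity. Sequences and activities here are synthetic.
import numpy as np
from sklearn.svm import SVR

BASES = "ACGU"

def position_onehot(seq):
    # one indicator per (position, base): len(seq) * 4 features
    v = np.zeros(len(seq) * 4)
    for i, b in enumerate(seq):
        v[i * 4 + BASES.index(b)] = 1.0
    return v

rng = np.random.default_rng(2)
seqs = ["".join(rng.choice(list(BASES), 19)) for _ in range(60)]
# synthetic rule: activity rises with GC content at the 5' end
y = np.array([sum(b in "GC" for b in s[:5]) / 5 for s in seqs])

X = np.vstack([position_onehot(s) for s in seqs])
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
print(round(model.score(X, y), 2))
```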

  9. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2017-12-01

    Full Text Available A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smear (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles), representing a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria in terms of classification accuracy based on the choice of best features and in terms of the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared the classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifest in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
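
The Fuzzy k-NN stage of the Q-Fuzzy approach can be sketched with the classic inverse-distance membership weighting; the 2-D "cell feature" vectors, k, and the fuzzifier m below are illustrative assumptions.

```python
# Hedged sketch of fuzzy k-NN: class memberships weighted by inverse
# distance to the k nearest training points, on toy 2-D features.
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2):
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    # inverse-distance weights; the fuzzifier m controls their sharpness
    w = 1.0 / (d[idx] ** (2 / (m - 1)) + 1e-12)
    classes = np.unique(y_train)
    mu = np.array([w[y_train[idx] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(mu)], mu          # label and memberships

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
label, mu = fuzzy_knn(X, y, np.array([0.95, 1.0]))
print(label)
```

Unlike crisp k-NN, the membership vector `mu` carries a graded confidence per class, which is what makes the hybrid with a swarm-based feature selector attractive for ambiguous cell images.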

  10. Robust factorization

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Fisker, Rune; Åström, Kalle

    2002-01-01

    Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is in generally not available in practical applications. There is thus a need for making factorization algorithms deal effect...

  11. Clinical features, anti-cancer treatments and outcomes of lung cancer patients with combined pulmonary fibrosis and emphysema.

    Science.gov (United States)

    Minegishi, Yuji; Kokuho, Nariaki; Miura, Yukiko; Matsumoto, Masaru; Miyanaga, Akihiko; Noro, Rintaro; Saito, Yoshinobu; Seike, Masahiro; Kubota, Kaoru; Azuma, Arata; Kida, Kouzui; Gemma, Akihiko

    2014-08-01

    Combined pulmonary fibrosis and emphysema (CPFE) patients may be at significantly increased risk of lung cancer compared with either isolated emphysema or pulmonary fibrosis patients. Acute exacerbation (AE) of interstitial lung disease caused by anticancer treatment is the most common lethal complication in Japanese lung cancer patients. Nevertheless, the clinical significance of CPFE compared with isolated idiopathic interstitial pneumonias (IIPs) in patients with lung cancer is not well understood. A total of 1536 patients with lung cancer at Nippon Medical School Hospital between March 1998 and October 2011 were retrospectively reviewed. Patients with IIPs were categorized into two groups: (i) CPFE; IIP patients with definite emphysema and (ii) non-CPFE; isolated IIP patients without definite emphysema. The clinical features, anti-cancer treatments and outcomes of the CPFE group were compared with those of the non-CPFE group. CPFE and isolated IIPs were identified in 88 (5.7%) and 63 (4.1%) patients respectively, with lung cancer. AE associated with initial treatment occurred in 22 (25.0%) patients in the CPFE group and in 8 (12.7%) patients in the non-CPFE group, irrespective of treatment modality. Median overall survival (OS) of the CPFE group was 23.7 months and that of the non-CPFE group was 20.3 months (P=0.627). Chemotherapy was performed in a total of 83 patients. AE associated with chemotherapy for advanced lung cancer occurred in 6 (13.6%) patients in the CPFE group and 5 (12.8%) patients in the non-CPFE group. Median OS of the CPFE group was 14.9 months and that of the non-CPFE group was 21.6 months (P=0.679). CPFE was not an independent risk factor for AE and was not an independent prognosis factor in lung cancer patients with IIPs. Therefore, great care must be exercised with CPFE as well as IIP patients when performing anticancer treatment for patients with lung cancer. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Initial insights on the performances and management of dairy cattle herds combining two breeds with contrasting features.

    Science.gov (United States)

    Magne, M A; Thénard, V; Mihout, S

    2016-05-01

    Finding ways of increasing animal production with low external inputs and without compromising reproductive performances is a key issue for the sustainability of livestock systems. One way is to take advantage of the diversity and interactions among components within livestock systems. Among studies that investigate the influence of differences in animals' individual abilities in a herd, few focus on combinations of cow breeds with contrasting features in dairy cattle herds. This study aimed to analyse the performances and management of such multi-breed dairy cattle herds. These herds were composed of two types of dairy breeds: 'specialist' (Holstein) and 'generalist' (e.g. Montbeliarde, Simmental, etc.). Based on recorded milk data in a southern French region, we (i) compared the performances of dairy herds according to breed-type composition (multi-breed, single specialist breed or single generalist breed) and (ii) tested the difference in milk performances of specialist and generalist breed cows (n = 10 682) per multi-breed dairy herd within a sample of 22 farms. The sampled farmers were also interviewed to characterise herd management through multivariate analysis. Multi-breed dairy herds had a better trade-off among milk yield, milk fat and protein contents, herd reproduction and concentrate-conversion efficiency than single-breed herds. Conversely, they did not offer advantages in terms of milk prices and udder health. Compared to specialist dairy herds, they produce less milk with the same concentrate-conversion efficiency but have better reproductive performances. Compared to generalist dairy herds, they produce more milk with better concentrate-conversion efficiency but have worse reproductive performances. Within herds, specialist and generalist breed cows significantly differed in milk performances, showing their complementarity. The former produced more milk for a longer lactation length while the latter produced milk with higher protein and fat

  13. Combination of radiological and gray level co-occurrence matrix textural features used to distinguish solitary pulmonary nodules by computed tomography.

    Science.gov (United States)

    Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua

    2013-08-01

    The objective of this study was to investigate a method combining radiological and textural features for the differentiation of malignant from benign solitary pulmonary nodules by computed tomography. Features including 13 gray level co-occurrence matrix textural features and 12 radiological features were extracted from 2,117 CT slices, which came from 202 (116 malignant and 86 benign) patients. Lasso-type regularization of a nonlinear regression model was applied to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions. The 10 selected features improved the AUC to 0.91. The evaluation results showed that the method of selecting radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules by computed tomography.
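
Two of the gray level co-occurrence matrix (GLCM) textural features used in such studies, contrast and homogeneity, can be computed directly from a quantised image patch, as sketched below. The patches, quantisation level, and pixel offset are assumptions, and the Lasso/ANN stages are omitted.

```python
# Hedged sketch: GLCM contrast and homogeneity from a quantised
# patch, using a single horizontal offset. Patches are synthetic.
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # quantise to `levels` gray levels, then count co-occurring pairs
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()                       # normalised co-occurrences

def contrast(P):
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def homogeneity(P):
    i, j = np.indices(P.shape)
    return (P / (1 + np.abs(i - j))).sum()

rng = np.random.default_rng(3)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # smooth gradient patch
rough = rng.uniform(0, 1, (32, 32))                # noisy patch

Ps, Pr = glcm(smooth), glcm(rough)
print(contrast(Ps) < contrast(Pr), homogeneity(Ps) > homogeneity(Pr))
```

A smooth patch co-occurs mostly on the GLCM diagonal (low contrast, high homogeneity), while a noisy patch spreads mass off-diagonal, which is why such features help separate nodule textures.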

  14. Radiation injury vs. recurrent brain metastasis: combining textural feature radiomics analysis and standard parameters may increase 18F-FET PET accuracy without dynamic scans.

    Science.gov (United States)

    Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert

    2017-07-01

    We investigated the potential of textural feature analysis of O-(2-[18F]fluoroethyl)-L-tyrosine (18F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic 18F-FET PET. Tumour-to-brain ratios (TBRs) of 18F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TACs), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters, and combinations thereof, to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81% for TBRmean alone to 85% when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83% and increased to 85% after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70% for kinetic pattern alone, which increased to 83% when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic 18F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity • Textural features help improve discrimination between brain metastasis recurrence and radiation injury • Textural features might be helpful to further understand tumour heterogeneity • Analysis does not require a more time-consuming dynamic PET acquisition.

  15. Sequential Modulations in a Combined Horizontal and Vertical Simon Task: Is There ERP Evidence for Feature Integration Effects?

    Science.gov (United States)

    Hoppe, Katharina; Küper, Kristina; Wascher, Edmund

    2017-01-01

    In the Simon task, participants respond faster when the task-irrelevant stimulus position and the response position are corresponding, for example on the same side, compared to when they have a non-corresponding relation. Interestingly, this Simon effect is reduced after non-corresponding trials. Such sequential effects can be explained in terms of a more focused processing of the relevant stimulus dimension due to increased cognitive control, which transfers from the previous non-corresponding trial (conflict adaptation effects). Alternatively, sequential modulations of the Simon effect can also be due to the degree of trial-to-trial repetitions and alternations of task features, which is confounded with the correspondence sequence (feature integration effects). In the present study, we used a spatially two-dimensional Simon task with vertical response keys to examine the contribution of adaptive cognitive control and feature integration processes to the sequential modulation of the Simon effect. The two-dimensional Simon task creates correspondences in the vertical as well as in the horizontal dimension. A trial-by-trial alternation of the spatial dimension, for example from a vertical to a horizontal stimulus presentation, generates a subset containing no complete repetitions of task features, but only complete alternations and partial repetitions, which are equally distributed over all correspondence sequences. In line with the assumed feature integration effects, we found sequential modulations of the Simon effect only when the spatial dimension repeated. At least for the horizontal dimension, this pattern was confirmed by the parietal P3b, an event-related potential that is assumed to reflect stimulus-response link processes. Contrary to conflict adaptation effects, cognitive control, measured by the fronto-central N2 component of the EEG, was not sequentially modulated. 
Overall, our data provide behavioral as well as electrophysiological evidence for feature

  16. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method of maintaining a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogLeNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from the food image datasets Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with an SVM with RBF kernel can detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and with 98.8% accuracy on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN can achieve 91.34% and 99.28% when applied to the Food-11 and RawFooT-DB food image datasets respectively, and an SVM with RBF kernel can achieve 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
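
The classification stage, an RBF-kernel SVM trained on pre-extracted deep features, can be sketched as follows. The 2048-D "ResNet-152" vectors are random stand-ins with a class-dependent shift, since running the actual network is outside this sketch.

```python
# Hedged sketch: RBF-kernel SVM on pre-extracted deep CNN features.
# The feature vectors are synthetic stand-ins for ResNet-152 output.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
food = rng.standard_normal((40, 2048)) + 0.5       # "food" features
non_food = rng.standard_normal((40, 2048)) - 0.5   # "non-food" features
X = np.vstack([food, non_food])
y = np.array([1] * 40 + [0] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

Swapping the classifier head (ANN, Random Forest, Naive Bayes) over the same frozen features is what the record compares; the feature extractor itself is never retrained.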

  17. Combining Landform Thematic Layer and Object-Oriented Image Analysis to Map the Surface Features of Mountainous Flood Plain Areas

    Science.gov (United States)

    Chuang, H.-K.; Lin, M.-L.; Huang, W.-C.

    2012-04-01

    Typhoon Morakot in August 2009 brought more than 2,000 mm of cumulative rainfall to southern Taiwan; the extreme rainfall event caused serious damage to the Kaoping River basin. The losses were mostly blamed on landslides along the sides of the river, and shifting of the watercourse even led to the failure of roads and bridges, as well as flooding and levee damage around the villages on the flood banks and terraces. Alluvial fans resulting from debris flows of stream feeders blocked the main watercourse, and a debris dam was even formed and collapsed. These disasters have highlighted the importance of identifying and mapping watercourse alteration, surface features of the flood plain area, and artificial structures soon after a catastrophic typhoon event for natural hazard mitigation. Interpretation of remote sensing images is an efficient approach to acquiring spatial information over vast areas, making it suitable for the differentiation of terrain and objects near vast flood plain areas in a short time. The object-oriented image analysis program (Definiens Developer 7.0) and multi-band high-resolution satellite images (QuickBird, DigitalGlobe) were utilized to interpret the flood plain features from Liouguei to Baolai in the Kaoping River basin after Typhoon Morakot. Object-oriented image interpretation is the process of using homogenized image blocks as elements instead of pixels for different shapes, textures and the mutual relationships of adjacent elements, as well as categorized conditions and rules, for semi-automatic interpretation of surface features. Digital terrain models (DTMs) are also employed along with the above process to produce layers with specific "landform thematic layers". 
These layers are especially helpful in differentiating some confusing categories in the spectrum analysis with improved accuracy, such as landslides and riverbeds, as well as terraces and riverbanks, which are of significant engineering importance in disaster

  18. Radiation injury vs. recurrent brain metastasis: combining textural feature radiomics analysis and standard parameters may increase {sup 18}F-FET PET accuracy without dynamic scans

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Philipp; Stoffels, Gabriele; Stegmayr, Carina; Neumaier, Bernd [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); Ceccon, Garry [University of Cologne, Department of Neurology, Cologne (Germany); Rapp, Marion; Sabel, Michael; Kamp, Marcel A. [Heinrich Heine University Duesseldorf, Department of Neurosurgery, Duesseldorf (Germany); Filss, Christian P. [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Nuclear Medicine, Aachen (Germany); Shah, Nadim J. [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Neurology, Aachen (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Department of Neurology, Juelich (Germany); Langen, Karl-Josef [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Nuclear Medicine, Aachen (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Department of Neurology, Juelich (Germany); Galldiks, Norbert [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); University of Cologne, Department of Neurology, Cologne (Germany); University of Cologne, Center of Integrated Oncology (CIO), Cologne (Germany)

    2017-07-15

    We investigated the potential of textural feature analysis of O-(2-[{sup 18}F]fluoroethyl)-L-tyrosine ({sup 18}F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic {sup 18}F-FET PET. Tumour-to-brain ratios (TBRs) of {sup 18}F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC) were evaluated on dynamic PET data from 0-50 min post-injection. Diagnostic accuracy of investigated parameters and combinations thereof to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81 % for TBR{sub mean} alone to 85 % when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBR{sub max} alone was 83 % and increased to 85 % after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70 % for kinetic pattern alone and increased to 83 % when combined with TBR{sub max}. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic {sup 18}F-FET PET scans. (orig.)
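The combination of a standard uptake metric with a textural measure can be sketched as a simple decision rule; the voxel samples, cutoffs, and the coefficient-of-variation stand-in for parameters such as Coarseness are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def tumour_to_brain_ratio(lesion, background):
    """TBR_mean: mean tracer uptake in the lesion over mean background uptake."""
    return lesion.mean() / background.mean()

def heterogeneity(lesion):
    """Coefficient of variation as a crude stand-in for textural
    parameters such as Coarseness or Short-zone emphasis."""
    return lesion.std() / lesion.mean()

# Hypothetical voxel samples (arbitrary units), for illustration only:
background = rng.normal(1.0, 0.05, 1000)
recurrence = rng.normal(2.5, 0.60, 500)        # high uptake, heterogeneous
radiation_injury = rng.normal(1.6, 0.15, 500)  # moderate uptake, homogeneous

def classify(lesion, background, tbr_cutoff=1.9, het_cutoff=0.15):
    """Combined rule: call recurrence only when both uptake and
    textural heterogeneity are high (cutoffs are illustrative)."""
    high_tbr = tumour_to_brain_ratio(lesion, background) > tbr_cutoff
    high_het = heterogeneity(lesion) > het_cutoff
    return "recurrence" if (high_tbr and high_het) else "radiation injury"
```

In the study the combination is evaluated statistically over 54 lesions rather than with fixed thresholds; the sketch only shows why adding a texture term can separate cases that uptake alone cannot.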

  19. New flexible origination technology based on electron-beam lithography and its integration into security devices in combination with covert features based on DNA authentication

    Science.gov (United States)

    Drinkwater, John K.; Ryzi, Zbynek; Outwater, Chris S.

    2002-04-01

    Embossed diffractive optically variable devices are becoming increasingly familiar security items on plastic cards, banknotes, security documents and on branded goods and media, used to protect against counterfeiting, protect copyright and evidence tampering. Equally, as these devices become more widely available, there is a pressing requirement for security technology upgrades to keep ahead of the technology available to potential counterfeiters. This paper describes a new-generation electron-beam DOVID origination technology particularly suitable for high-security applications. Covert marking of security devices is provided using the DNA matrix, by creating and verifying unique DNA sequences. The integration of this technology into practical security devices, in combination with covert features based on DNA matrix authentication and other more straightforwardly authenticable features to provide multi-technology security solutions, will be described.

  20. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities.

    Science.gov (United States)

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR-based drug design, used to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied it and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting relevant descriptors using compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can help improve the performance of combined Fisher- and Laplacian-based feature selection methods.
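The standard Fisher score underlying the method can be computed per descriptor as follows (the paper's extension to continuous activities and the Laplacian term are not shown; the data is synthetic):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter of the feature
    means divided by within-class variance. Higher = more discriminative."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / within

rng = np.random.default_rng(1)
# Synthetic "descriptors": feature 0 separates the classes, feature 1 is noise.
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([4, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
scores = fisher_score(X, y)  # feature 0 should clearly outrank feature 1
```

The semi-supervised method in the abstract replaces the hard class labels with distances derived from continuous activity values and adds a Laplacian term over unlabelled compounds; this sketch shows only the supervised baseline being extended.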

  1. Robust Robot Grasp Detection in Multimodal Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Qiang

    2017-01-01

    Full Text Available Accurate robot grasp detection for model-free objects plays an important role in robotics. With the development of RGB-D sensors, object perception technology has made great progress. Achieving rich feature expression from the colour and depth data is a critical problem that needs to be addressed in order to accomplish the grasping task. To solve the problem of data fusion, this paper proposes a convolutional neural network (CNN) based approach combining regression and classification. In the CNN model, the colour and depth modalities are deeply fused together to achieve accurate feature expression. Additionally, the Welsch function is introduced into the approach to enhance the robustness of the training process. Experimental results demonstrate the superiority of the proposed method.
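The Welsch function mentioned above is a standard robust loss; this minimal sketch shows why it tames outliers during training (the scale parameter c is user-chosen):

```python
import numpy as np

def welsch_loss(residual, c=1.0):
    """Welsch loss: approximately quadratic for small residuals but
    saturating at c**2 / 2, so outliers contribute a bounded penalty."""
    return (c ** 2 / 2.0) * (1.0 - np.exp(-(residual / c) ** 2))

residuals = np.array([0.1, 1.0, 10.0])
squared = 0.5 * residuals ** 2       # ordinary L2 loss, for comparison
robust = welsch_loss(residuals)      # bounded: never exceeds c**2 / 2
```

For the 0.1 residual the two losses nearly coincide, while the 10.0 outlier costs 50.0 under L2 but at most 0.5 under Welsch, so a few bad training samples cannot dominate the gradient.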

  2. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second, slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvements in efficiency and accuracy.
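The bin-based polling stage can be illustrated with a toy hash index; the video names and hash values are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical reference index: each reference video contributes hash
# values, and hash-to-video bins implement the polling.
reference_hashes = {
    "movie_a": [11, 42, 97, 13, 58],
    "movie_b": [42, 77, 61, 88, 19],
    "movie_c": [5, 23, 97, 31, 74],
}
index = defaultdict(set)
for video, hashes in reference_hashes.items():
    for h in hashes:
        index[h].add(video)

def candidates(query_hashes, top_k=2):
    """Stage 1: each query hash 'votes' for the videos in its bin; the
    highest-polling videos go on to the slower verification stage."""
    votes = Counter()
    for h in query_hashes:
        for video in index.get(h, ()):
            votes[video] += 1
    return [v for v, _ in votes.most_common(top_k)]

# A distorted copy of movie_a: most hashes survive, one is corrupted.
query = [11, 42, 97, 99, 58]
```

Because only vote counting happens in stage 1, the expensive SIFT-based verification runs on a handful of candidates rather than the whole database; this sketch omits the maximally-stable-volume hashing itself.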

  3. Features a combined surgical treatment in preserving the quality of life in patients with invasive colorectal cancer

    Directory of Open Access Journals (Sweden)

    S. B. Abduzhapparov

    2016-01-01

    Full Text Available Background. The aim of this work was to study the effect of combined surgical treatment of locally advanced rectal cancer (RC) with invasion of the organs of the female reproductive system on the quality of life of patients. Materials and methods. We present the diagnosis and treatment of 134 patients with RC, aged 21 to 70 years, with invasion of the organs of the female reproductive system. All patients underwent standard clinical and laboratory tests. Results. In half of the patients (50.7 % of cases) stage T4N1M0 disease was diagnosed. In 75 (56.0 %) patients the tumor spread into the vagina, and in 16 (11.9 %) patients into other organs of the reproductive system. In the study group of 64 patients with RC, combined organ-preserving surgery on the reproductive organs was performed along with surgery on the rectum. In the control group, all 70 patients underwent hysterectomy with adnexa. Conclusions. Quality of life, according to the MENQOL questionnaire, was significantly higher in patients with organ-preserving treatment, which showed a decrease in vasomotor and psychological symptoms, as well as a smoothing of irregularities in the physical and sexual spheres. The studies show the validity of the widespread introduction into oncological practice of combined simultaneous operations that preserve the reproductive organs in women with invasive RC, which is especially important for women of reproductive age.

  4. Effective prediction of bacterial type IV secreted effectors by combined features of both C-termini and N-termini.

    Science.gov (United States)

    Wang, Yu; Guo, Yanzhi; Pu, Xuemei; Li, Menglong

    2017-11-01

    Various bacterial pathogens can deliver their secreted substrates, also called effectors, through type IV secretion systems (T4SSs) into host cells and cause diseases. Since T4SS secreted effectors (T4SEs) play important roles in pathogen-host interactions, identifying them is crucial to our understanding of the pathogenic mechanisms of T4SSs. A few computational methods using machine learning algorithms for T4SE prediction have been developed using features of C-terminal residues. However, recent studies have shown that targeting information can also be encoded in the N-terminal region of at least some T4SEs. In this study, we present an effective method for T4SE prediction that integrates both N-terminal and C-terminal sequence information. First, we collected a comprehensive dataset of known T4SEs and non-T4SEs across multiple bacterial species from the literature. Then, three types of distinctive features, namely amino acid composition; composition, transition and distribution; and position-specific scoring matrices, were calculated for the 50 N-terminal and 100 C-terminal residues. After that, we employed information gain to rank the importance of the 150 position residues for T4SE secretion signaling. Finally, 125 distinctive position residues were singled out for the prediction model to classify T4SEs and non-T4SEs. The support vector machine model yields a high area under the receiver operating characteristic curve of 0.916 in fivefold cross-validation and an accuracy of 85.29% on the independent test set.
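The amino-acid-composition part of the feature construction can be sketched as follows; the example sequence is invented, and the study's CTD and PSSM features are not shown:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(segment):
    """Fraction of each of the 20 standard amino acids in a segment."""
    counts = Counter(segment)
    total = max(len(segment), 1)
    return [counts.get(a, 0) / total for a in AMINO_ACIDS]

def terminal_features(protein, n_len=50, c_len=100):
    """Concatenate the compositions of the N-terminal and C-terminal
    segments (50 and 100 residues, as in the study), yielding 40
    features; the study layers CTD and PSSM features on top of this."""
    return aa_composition(protein[:n_len]) + aa_composition(protein[-c_len:])

# A made-up sequence, purely for illustration.
protein = "M" + "KR" * 30 + "A" * 60 + "EDST" * 25
features = terminal_features(protein)
```

Feature vectors built this way for known T4SEs and non-T4SEs would then feed the SVM, with information gain used to prune the least informative positions.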

  5. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

    Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...

  6. Enhanced Enzymatic Hydrolysis and Structural Features of Corn Stover by NaOH and Ozone Combined Pretreatment

    Directory of Open Access Journals (Sweden)

    Wenhui Wang

    2018-05-01

    Full Text Available A two-step pretreatment using NaOH and ozone was performed to improve the enzymatic hydrolysis, composition and structural characteristics of corn stover. A comparison between unpretreated and pretreated corn stover was also made to illustrate the mechanism of the combined pretreatment. A pretreatment with 2% (w/w) NaOH at 80 °C for 2 h followed by ozone treatment for 25 min at an initial pH of 9 was found to be the optimal procedure, and a maximum cellulose enzymatic hydrolysis efficiency of 91.73% was achieved. Furthermore, microscopic observation of changes in the surface structure of the samples showed that holes were formed and that lignin and hemicellulose were partially dissolved and removed. X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and cross-polarization magic angle spinning carbon-13 nuclear magnetic resonance (CP/MAS 13C-NMR) were also used to characterize the chemical structural changes after the combined pretreatment. The results were as follows: part of the cellulose I structure was destroyed and then reformed into cellulose III, and the cellulose crystallinity indices were changed; a wider space between the crystal layers was observed; hydrogen bonds in cellulose and ester bonds in hemicellulose were disrupted; bond linkages in lignin-carbohydrate complexes were cleaved; and methoxy groups were removed from lignin and hemicellulose. As a result, all these changes effectively reduced the recalcitrance of corn stover and promoted the subsequent enzymatic hydrolysis of cellulose.

  7. Numerical Study on Flow, Temperature, and Concentration Distribution Features of Combined Gas and Bottom-Electromagnetic Stirring in a Ladle

    Directory of Open Access Journals (Sweden)

    Yang Li

    2018-01-01

    Full Text Available A novel method combining argon gas stirring and bottom-rotating electromagnetic stirring in the ladle refining process is presented in this report. A three-dimensional numerical model was adopted to investigate its effect on improving the flow field, eliminating temperature stratification, and homogenizing the concentration distribution. The results show that the electromagnetic force tends to spiral, spinning clockwise on the horizontal section and rising straight up along the vertical section. When the electromagnetic force is applied to the gas-liquid two-phase flow, the gas-liquid plume is shifted and the gas-liquid two-phase region is extended. The rotating flow driven by the electromagnetic force promotes the dispersion of bubbles. The temperature stratification tends to be alleviated due to the effect of heat compensation and the improved flow, and it tends to disappear when the current reaches 1200 A. The improved flow field has a positive influence on decreasing concentration stratification and shortening the mixing time when the combined method is imposed. However, the alloy deposition site needs to be optimized according to the whole circulatory flow and the region where bubbles escape.

  8. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article is about a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for simulations are the burden weight and the length of the hanging rope. We use robust control for the crab and bridge to ensure adaptivity to burden weight and rope length. Robust control is designed for the current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controller is chosen. The most important condition for crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drives are designed with asynchronous motors fed from frequency converters. We use the crane uplift with a burden weight observer in combination with the uplift, crab and bridge drives, with cooperation of their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state control method. We preferably use a disturbance observer which identifies the burden weight as a disturbance. The system works in both modes, with an empty hook as well as at maximum load: burden uplifting and dropping down.
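The subinterval scheme described above can be sketched as a lookup from identified parameters to a pre-designed controller; the bounds and gains below are hypothetical placeholders, not values from the article:

```python
# The uncertain parameter range (burden weight, rope length) is split
# into subintervals, a robust controller is pre-designed for each, and
# after on-line identification the matching controller is switched in.

CONTROLLER_TABLE = {
    # (weight upper bound [kg], rope-length upper bound [m]) -> gains
    (500, 5.0): {"kp": 12.0, "kd": 4.0},
    (500, 10.0): {"kp": 9.0, "kd": 5.5},
    (2000, 5.0): {"kp": 18.0, "kd": 6.0},
    (2000, 10.0): {"kp": 14.0, "kd": 8.0},
}

def select_controller(weight_kg, rope_m):
    """Return the pre-designed controller for the first subinterval
    (in increasing order of bounds) containing the identified parameters."""
    for (w_max, l_max), gains in sorted(CONTROLLER_TABLE.items()):
        if weight_kg <= w_max and rope_m <= l_max:
            return gains
    raise ValueError("identified parameters outside the designed robust range")
```

In the article each entry would be a full state-feedback controller designed to be robust over its subinterval, and the weight estimate would come from the disturbance observer rather than being given directly.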

  9. A combined experimental and in silico characterization to highlight additional structural features and properties of a potentially new drug

    Science.gov (United States)

    Bastos, Isadora T. S.; Costa, Fanny N.; Silva, Tiago F.; Barreiro, Eliezer J.; Lima, Lídia M.; Braz, Delson; Lombardo, Giuseppe M.; Punzo, Francesco; Ferreira, Fabio F.; Barroso, Regina C.

    2017-10-01

    LASSBio-1755 is a new cycloalkyl-N-acylhydrazone parent compound designed for the development of derivatives with antinociceptive and anti-inflammatory activities. Although single-crystal X-ray diffraction is considered the gold standard in structure determination, we successfully used X-ray powder diffraction data in the structural determination of the newly synthesized compounds, in order to overcome the bottleneck caused by the difficulty of harvesting good-quality single crystals. We thereby unequivocally assigned the relative configuration (E) to the imine double bond and an s-cis conformation to the amide function of the N-acylhydrazone compound. These features are confirmed by a computational analysis based on molecular dynamics calculations, which extends not only to the structural characteristics but also to the analysis of the anisotropic atomic displacement parameters, information that is typically missed in a powder diffraction analysis. The data inferred in this way were used to perform additional cycles of refinement and eventually to generate a new CIF file with additional physical information. Furthermore, crystal morphology prediction was performed, which is in agreement with the experimental images acquired by scanning electron microscopy, thus providing useful information on possible alternative paths towards better crystallization strategies.

  10. Alternative end-joining catalyzes robust IgH locus deletions and translocations in the combined absence of ligase 4 and Ku70.

    Science.gov (United States)

    Boboila, Cristian; Jankovic, Mila; Yan, Catherine T; Wang, Jing H; Wesemann, Duane R; Zhang, Tingting; Fazeli, Alex; Feldman, Lauren; Nussenzweig, Andre; Nussenzweig, Michel; Alt, Frederick W

    2010-02-16

    Class switch recombination (CSR) in B lymphocytes is initiated by introduction of multiple DNA double-strand breaks (DSBs) into switch (S) regions that flank immunoglobulin heavy chain (IgH) constant region exons. CSR is completed by joining a DSB in the donor S mu to a DSB in a downstream acceptor S region (e.g., S gamma1) by end-joining. In normal cells, many CSR junctions are mediated by classical nonhomologous end-joining (C-NHEJ), which employs the Ku70/80 complex for DSB recognition and XRCC4/DNA ligase 4 for ligation. Alternative end-joining (A-EJ) mediates CSR, at reduced levels, in the absence of C-NHEJ, even in the combined absence of Ku70 and ligase 4, demonstrating an A-EJ pathway totally distinct from C-NHEJ. Multiple DSBs are introduced into S mu during CSR, with some being rejoined or joined to each other to generate internal switch deletions (ISDs). In addition, S-region DSBs can be joined to other chromosomes to generate translocations, the level of which is increased by absence of a single C-NHEJ component (e.g., XRCC4). We asked whether ISD and S-region translocations occur in the complete absence of C-NHEJ (e.g., in Ku70/ligase 4 double-deficient B cells). We found, unexpectedly, that B-cell activation for CSR generates substantial ISD in both S mu and S gamma1 and that ISD in both is greatly increased by the absence of C-NHEJ. IgH chromosomal translocations to the c-myc oncogene also are augmented in the combined absence of Ku70 and ligase 4. We discuss the implications of these findings for A-EJ in normal and abnormal DSB repair.

  11. Mutational robustness of gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Aalt D J van Dijk

    Full Text Available Mutational robustness of gene regulatory networks refers to their ability to generate constant biological output upon mutations that change network structure. Such networks contain regulatory interactions (transcription factor-target gene interactions but often also protein-protein interactions between transcription factors. Using computational modeling, we study factors that influence robustness and we infer several network properties governing it. These include the type of mutation, i.e. whether a regulatory interaction or a protein-protein interaction is mutated, and in the case of mutation of a regulatory interaction, the sign of the interaction (activating vs. repressive. In addition, we analyze the effect of combinations of mutations and we compare networks containing monomeric with those containing dimeric transcription factors. Our results are consistent with available data on biological networks, for example based on evolutionary conservation of network features. As a novel and remarkable property, we predict that networks are more robust against mutations in monomer than in dimer transcription factors, a prediction for which analysis of conservation of DNA binding residues in monomeric vs. dimeric transcription factors provides indirect evidence.
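A minimal computational sketch of the kind of robustness measure discussed above, using a toy threshold network in place of the paper's models; the network and its weights are invented for illustration:

```python
from itertools import product

# Toy gene regulatory network: a target gene integrates signed
# regulatory inputs from three transcription factors (+1 activating,
# -1 repressive) and is ON when the summed input is positive.
WEIGHTS = [+1, +1, -1]

def target_on(tf_states, weights):
    return sum(w * s for w, s in zip(weights, tf_states)) > 0

def robustness(weights):
    """Fraction of (single mutation, input state) pairs for which
    flipping the sign of one regulatory interaction leaves the
    target gene's output unchanged."""
    unchanged = total = 0
    for i in range(len(weights)):
        mutant = list(weights)
        mutant[i] = -mutant[i]  # activating <-> repressive mutation
        for states in product([0, 1], repeat=len(weights)):
            total += 1
            unchanged += target_on(states, weights) == target_on(states, mutant)
    return unchanged / total
```

The paper's models are richer (protein-protein interactions, monomeric vs. dimeric factors), but the measurement has the same shape: perturb one interaction at a time and score how often the biological output is preserved.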

  12. Development of a nomogram combining clinical staging with 18F-FDG PET/CT image features in non-small-cell lung cancer stage I-III

    International Nuclear Information System (INIS)

    Desseroit, Marie-Charlotte; Visvikis, Dimitris; Majdoub, Mohamed; Hatt, Mathieu; Tixier, Florent; Perdrisot, Remy; Cheze Le Rest, Catherine; Guillevin, Remy

    2016-01-01

    Our goal was to develop a nomogram by exploiting intratumour heterogeneity on CT and PET images from routine 18F-FDG PET/CT acquisitions to identify patients with the poorest prognosis. This retrospective study included 116 patients with NSCLC stage I, II or III and with staging 18F-FDG PET/CT imaging. Primary tumour volumes were delineated using the FLAB algorithm and 3D Slicer™ on PET and CT images, respectively. PET and CT heterogeneities were quantified using texture analysis. The reproducibility of the CT features was assessed on a separate test-retest dataset. The stratification power of the PET/CT features was evaluated using the Kaplan-Meier method and the log-rank test. The best standard metric (functional volume) was combined with the least redundant and most prognostic PET/CT heterogeneity features to build the nomogram. PET entropy and CT zone percentage had the highest complementary values with clinical stage and functional volume. The nomogram improved stratification amongst patients with stage II and III disease, allowing identification of patients with the poorest prognosis (clinical stage III, large tumour volume, high PET heterogeneity and low CT heterogeneity). Intratumour heterogeneity quantified using textural features on both CT and PET images from routine staging 18F-FDG PET/CT acquisitions can be used to create a nomogram with higher stratification power than staging alone. (orig.)

  13. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA and tabu search (TS is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.
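The GATS mechanics described above (prematurity index, tabu search on the fittest individuals, heavier mutation for the rest) can be sketched on a synthetic feature-selection task; the fitness function is a stand-in for classification accuracy, and all constants are illustrative:

```python
import random

random.seed(0)
N_FEATURES, POP, GENS = 12, 20, 40
# Hidden ground-truth mask standing in for "useful features"; fitness
# is agreement with it, a stand-in for classification accuracy.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET)) / N_FEATURES

def prematurity(pop):
    """Prematurity index: fraction of duplicated individuals."""
    return 1 - len({tuple(p) for p in pop}) / len(pop)

def flip(ind, i):
    return ind[:i] + [1 - ind[i]] + ind[i + 1:]

def tabu_search(ind, steps=10):
    """Greedy bit-flip local search with a tabu list of tried positions."""
    tabu, best = set(), list(ind)
    for _ in range(steps):
        moves = [i for i in range(N_FEATURES) if i not in tabu]
        if not moves:
            break
        i = max(moves, key=lambda j: fitness(flip(best, j)))
        tabu.add(i)
        if fitness(flip(best, i)) >= fitness(best):
            best = flip(best, i)
    return best

population = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP // 2]
    children = []
    for _ in range(POP - len(elite)):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N_FEATURES)
        children.append(a[:cut] + b[cut:])  # one-point crossover
    population = elite + children
    if prematurity(population) > 0.3:  # premature convergence detected
        # Tabu search refines the fittest quarter ...
        population[: POP // 4] = [tabu_search(p) for p in population[: POP // 4]]
        # ... while the rest mutate with a higher probability.
        for p in population[POP // 4:]:
            for i in range(N_FEATURES):
                if random.random() < 0.2:
                    p[i] = 1 - p[i]

best = max(population, key=fitness)
```

In the paper, fitness is the object-based classification accuracy on the image and the feature count runs into the hundreds; the sketch only reproduces the search dynamics.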

  14. Combined value of Virtual Touch tissue quantification and conventional sonographic features for differentiating benign and malignant thyroid nodules smaller than 10 mm.

    Science.gov (United States)

    Zhang, Huiping; Shi, Qiusheng; Gu, Jiying; Jiang, Luying; Bai, Min; Liu, Long; Wu, Ying; Du, Lianfang

    2014-02-01

    This study aimed to investigate the value of sonographic features, including Virtual Touch tissue quantification (VTQ; Siemens Medical Solutions, Mountain View, CA), for differentiating benign and malignant thyroid nodules smaller than 10 mm. Seventy-one thyroid nodules smaller than 10 mm with pathologic diagnoses were included in this study. The conventional sonographic features and quantitative elasticity features (VTQ) were observed and compared between benign and malignant nodules. There were 39 benign and 32 malignant nodules according to histopathologic examination. Compared with benign nodules, malignant nodules were more frequently taller than wide, poorly defined, and markedly hypoechoic. The VTQ value for malignant nodules (mean ± SD, 3.260 ± 0.725 m/s) was significantly higher than that of benign ones (2.108 ± 0.455 m/s). VTQ is valuable for differentiating benign and malignant thyroid nodules smaller than 10 mm, and when VTQ was combined with B-mode sonographic features, the sensitivity was improved significantly.

  15. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Science.gov (United States)

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721

  16. A novel recombinant retrovirus in the genomes of modern birds combines features of avian and mammalian retroviruses.

    Science.gov (United States)

    Henzy, Jamie E; Gifford, Robert J; Johnson, Welkin E; Coffin, John M

    2014-03-01

    Endogenous retroviruses (ERVs) represent ancestral sequences of modern retroviruses or their extinct relatives. The majority of ERVs cluster alongside exogenous retroviruses into two main groups based on phylogenetic analyses of the reverse transcriptase (RT) enzyme. Class I includes gammaretroviruses, and class II includes lentiviruses and alpha-, beta-, and deltaretroviruses. However, analyses of the transmembrane subunit (TM) of the envelope glycoprotein (env) gene result in a different topology for some retroviruses, suggesting recombination events in which heterologous env sequences have been acquired. We previously demonstrated that the TM sequences of five of the six genera of orthoretroviruses can be divided into three types, each of which infects a distinct set of vertebrate classes. Moreover, these classes do not always overlap the host range of the associated RT classes. Thus, recombination resulting in acquisition of a heterologous env gene could in theory facilitate cross-species transmissions across vertebrate classes, for example, from mammals to reptiles. Here we characterized a family of class II avian ERVs, "TgERV-F," that acquired a mammalian gammaretroviral env sequence. Although TgERV-F clusters near a sister clade to alpharetroviruses, its genome also has some features of betaretroviruses. We offer evidence that this unusual recombinant has circulated among several avian orders and may still have infectious members. In addition to documenting the infection of a nongalliform avian species by a mammalian retrovirus, TgERV-F also underscores the importance of env sequences in reconstructing phylogenies and supports a possible role for env swapping in allowing cross-species transmissions across wide taxonomic distances. Retroviruses can sometimes acquire an envelope gene (env) from a distantly related retrovirus. 
Since env is a key determinant of host range, such an event affects the host range of the recombinant virus and can lead to the creation

  17. PrEP as a feature in the optimal landscape of combination HIV prevention in sub-Saharan Africa.

    Science.gov (United States)

    McGillen, Jessica B; Anderson, Sarah-Jane; Hallett, Timothy B

    2016-01-01

    The new WHO guidelines recommend offering pre-exposure prophylaxis (PrEP) to people who are at substantial risk of HIV infection. However, where PrEP should be prioritised, and for which population groups, remains an open question. The HIV landscape in sub-Saharan Africa features limited prevention resources, multiple options for achieving cost saving, and epidemic heterogeneity. This paper examines what role PrEP should play in optimal prevention in this complex and dynamic landscape. We use a model that was previously developed to capture subnational HIV transmission in sub-Saharan Africa. With this model, we can consider how prevention funds could be distributed across and within countries throughout sub-Saharan Africa to enable optimal HIV prevention (that is, avert the greatest number of infections for the lowest cost). Here, we focus on PrEP to elucidate where, and to whom, it would optimally be offered in portfolios of interventions (alongside voluntary medical male circumcision, treatment as prevention, and behaviour change communication). Over a range of continental expenditure levels, we use our model to explore prevention patterns that incorporate PrEP, exclude PrEP, or implement PrEP according to a fixed incidence threshold. At low-to-moderate levels of total prevention expenditure, we find that the optimal intervention portfolios would include PrEP in only a few regions and primarily for female sex workers (FSW). Prioritisation of PrEP would expand with increasing total expenditure, such that the optimal prevention portfolios would offer PrEP in more subnational regions and increasingly for men who have sex with men (MSM) and the lower incidence general population. The marginal benefit of including PrEP among the available interventions increases with overall expenditure by up to 14% (relative to excluding PrEP). The minimum baseline incidence for the optimal offer of PrEP declines for all population groups as expenditure increases. 
We find that using

  18. Comparing Four Instructional Techniques for Promoting Robust Knowledge

    Science.gov (United States)

    Richey, J. Elizabeth; Nokes-Malach, Timothy J.

    2015-01-01

    Robust knowledge serves as a common instructional target in academic settings. Past research identifying characteristics of experts' knowledge across many domains can help clarify the features of robust knowledge as well as ways of assessing it. We review the expertise literature and identify three key features of robust knowledge (deep,…

  19. First human hNT neurons patterned on parylene-C/silicon dioxide substrates: Combining an accessible cell line and robust patterning technology for the study of the pathological adult human brain.

    Science.gov (United States)

    Unsworth, C P; Graham, E S; Delivopoulos, E; Dragunow, M; Murray, A F

    2010-12-15

In this communication, we describe a new method which has enabled the first patterning of human neurons (derived from the human teratocarcinoma cell line (hNT)) on parylene-C/silicon dioxide substrates. We reveal the details of the nanofabrication processes, cell differentiation and culturing protocols necessary to successfully pattern hNT neurons, each of which is a key aspect of this new method. The ability to pattern human neurons on a silicon chip using an accessible cell line and robust patterning technology is of widespread value. Thus, using a combined technology such as this will facilitate the detailed study of the pathological human brain at both the single-cell and network level. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

Robustness of an object is defined as the probability that the object will have its properties as required. Robustness Programming (RP) is a mathematical approach for robustness estimation and robustness optimisation. An example, in the context of designing a food product, is finding the best composition
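The definition above (robustness = probability that an object meets its requirements) suggests a simple Monte Carlo estimator. The property model, ingredient tolerances, and acceptance band below are invented for illustration and are not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a product property (say, protein fraction) depends
# on two ingredient fractions whose realised values scatter around their
# design targets. Robustness = P(property falls inside the required band).
n = 100_000
x1 = rng.normal(0.30, 0.02, n)   # ingredient 1 fraction (design target 0.30)
x2 = rng.normal(0.20, 0.03, n)   # ingredient 2 fraction (design target 0.20)
prop = 0.8 * x1 + 0.5 * x2       # simple linear property model (assumed)

ok = (prop >= 0.30) & (prop <= 0.38)   # required band (assumed)
robustness = ok.mean()
print(f"estimated robustness: {robustness:.3f}")
```

Robustness optimisation would then search over the design targets for the composition maximising this probability.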

  1. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust

  2. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences.

    Science.gov (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry

    2017-11-24

    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.

  3. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

Full text: If the driving field fluctuates during the quantum evolution, it produces errors in the applied operator. Holonomic (and geometric) quantum gates are believed to be robust against certain kinds of noise. Because of their geometrical dependence, holonomic operators can be robust against this kind of noise: if the fluctuations are fast enough, they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and ideal evolutions is calculated for different noise correlation times. The holonomic quantum gates appear robust not only for fast-fluctuating fields but also for slow-fluctuating ones. These results can be explained by the geometrical character of the holonomic operator: for fast-fluctuating fields the fluctuations cancel out, while for slow-fluctuating fields they do not perturb the loop in parameter space. (author)
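The fast-fluctuation cancellation argument can be illustrated with a toy model. The sketch below is a plain dephasing model of a phase gate, not a true holonomic gate, so it reproduces only the fast-noise averaging, not the geometric mechanism invoked for slow noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed, not from the paper): a phase gate accumulates a
# target phase pi/2 over N steps while the control carries zero-mean
# multiplicative noise with correlation length `tau` (in steps). Fast
# noise (small tau) averages out in the accumulated phase, keeping the
# gate fidelity |cos(delta_phi / 2)|^2 near 1.
def mean_fidelity(tau, n_steps=1000, n_runs=500, sigma=0.2):
    target = np.pi / 2
    dphi = target / n_steps
    fids = []
    for _ in range(n_runs):
        # piecewise-constant noise: a fresh sample every `tau` steps
        n_blocks = int(np.ceil(n_steps / tau))
        noise = np.repeat(rng.normal(0.0, sigma, n_blocks), tau)[:n_steps]
        phi = np.sum(dphi * (1.0 + noise))
        fids.append(np.cos((phi - target) / 2.0) ** 2)
    return float(np.mean(fids))

fast = mean_fidelity(tau=1)    # fluctuations much faster than the gate
slow = mean_fidelity(tau=250)  # fluctuations comparable to the gate time
print(fast, slow)
```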

  4. Myoinositol combined with alpha-lipoic acid may improve the clinical and endocrine features of polycystic ovary syndrome through an insulin-independent action.

    Science.gov (United States)

    De Cicco, Simona; Immediata, Valentina; Romualdi, Daniela; Policola, Caterina; Tropea, Anna; Di Florio, Christian; Tagliaferri, Valeria; Scarinci, Elisa; Della Casa, Silvia; Lanzone, Antonio; Apa, Rosanna

    2017-09-01

The aim of our study was to investigate the effects of a combined treatment with alpha-lipoic acid (ALA) and myoinositol (MYO) on the clinical, endocrine and metabolic features of women affected by polycystic ovary syndrome (PCOS). In this pilot cohort study, forty women with PCOS were enrolled, and clinical, hormonal and metabolic parameters were evaluated before and after a six-month combined daily treatment with ALA and MYO. The studied patients experienced a significant increase in the number of cycles over six months (p < 0.01). The free androgen index (FAI) and the mean androstenedione and DHEAS levels significantly decreased after treatment (p < 0.05). Mean SHBG levels rose significantly (p < 0.01). A significant improvement in the mean Ferriman-Gallwey (F-G) score (p < 0.01) and a significant reduction of BMI (p < 0.01) were also observed. A significant reduction of AMH levels, ovarian volume and total antral follicle count was observed in the studied women (p < 0.05). No significant changes occurred in gluco-insulinaemic or lipid parameters after treatment. The combined treatment with ALA and MYO is able to restore the menstrual pattern and to improve the hormonal milieu of PCOS women, even in the absence of apparent changes in insulin metabolism.

  5. Robust Protection against Highly Virulent Foot-and-Mouth Disease Virus in Swine by Combination Treatment with Recombinant Adenoviruses Expressing Porcine Alpha and Gamma Interferons and Multiple Small Interfering RNAs

    Science.gov (United States)

    Park, Jong-Hyeon; Lee, Kwang-Nyeong; Kim, Se-Kyung; You, Su-Hwa; Kim, Taeseong; Tark, Dongseob; Lee, Hyang-Sim; Seo, Min-Goo; Kim, Byounghan

    2015-01-01

Because the currently available vaccines against foot-and-mouth disease (FMD) provide no protection until 4 to 7 days postvaccination, the only alternative method to halt the spread of the FMD virus (FMDV) during outbreaks is the application of antiviral agents. Combination treatment strategies have been used to enhance the efficacy of antiviral agents, and such strategies may be advantageous in overcoming viral mechanisms of resistance to antiviral treatments. We have developed recombinant adenoviruses (Ads) for the simultaneous expression of porcine alpha and gamma interferons (Ad-porcine IFN-αγ) as well as 3 small interfering RNAs (Ad-3siRNA) targeting FMDV mRNAs encoding nonstructural proteins. The antiviral effects of Ad-porcine IFN-αγ and Ad-3siRNA expression were tested in combination in porcine cells, suckling mice, and swine. We observed enhanced antiviral effects in porcine cells and mice as well as robust protection against the highly pathogenic strain O/Andong/SKR/2010 and increased expression of cytokines in swine following combination treatment. In addition, we showed that combination treatment was effective against all serotypes of FMDV. Therefore, we suggest that the combined treatment with Ad-porcine IFN-αγ and Ad-3siRNA may offer fast-acting antiviral protection and be used with a vaccine during the period that the vaccine does not provide protection against FMD. IMPORTANCE The use of current foot-and-mouth disease (FMD) vaccines to induce rapid protection provides limited effectiveness because the protection does not become effective until a minimum of 4 days after vaccination. Therefore, during outbreaks antiviral agents remain the only available treatment to confer rapid protection and reduce the spread of foot-and-mouth disease virus (FMDV) in livestock until vaccine-induced protective immunity can become effective. Interferons (IFNs) and small interfering RNAs (siRNAs) have been reported to be effective antiviral agents against

  6. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real-time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also an efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase by the proposed CNN+SVM machine-learning approach and the capability of real-time high
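The division of labour described above (a fixed, pre-trained CNN feature extractor; a fast, retrainable SVM on top) can be sketched with synthetic descriptors standing in for CNN outputs. The dimensions, class offsets, and evaluation below are invented for illustration and do not reproduce the paper's frequency-domain SVM:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Stand-in for pre-trained CNN descriptors of image chips: the CNN is
# computed once and frozen; only the linear SVM is retrained per class.
def fake_cnn_features(n, offset):
    return rng.normal(offset, 1.0, size=(n, 512))

X_train = np.vstack([fake_cnn_features(200, 0.0), fake_cnn_features(200, 0.15)])
y_train = np.array([0] * 200 + [1] * 200)   # 0 = background, 1 = target
X_test = np.vstack([fake_cnn_features(100, 0.0), fake_cnn_features(100, 0.15)])
y_test = np.array([0] * 100 + [1] * 100)

clf = LinearSVC(C=1.0).fit(X_train, y_train)      # the fast, retrainable part
scores = clf.decision_function(X_test)
ap = average_precision_score(y_test, scores)      # as in the paper's PR evaluation
print(f"average precision: {ap:.3f}")
```

Adapting to a new target class then only means refitting the lightweight SVM, never the feature extractor.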

  7. Multimodel Robust Control for Hydraulic Turbine

    OpenAIRE

    Osuský, Jakub; Števo, Stanislav

    2014-01-01

    The paper deals with the multimodel and robust control system design and their combination based on M-Δ structure. Controller design will be done in the frequency domain with nominal performance specified by phase margin. Hydraulic turbine model is analyzed as system with unstructured uncertainty, and robust stability condition is included in controller design. Multimodel and robust control approaches are presented in detail on hydraulic turbine model. Control design approaches are compared a...

  8. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

The research presented in this PhD thesis has focused on a perceptual approach to robust design. The result of the research, and its original contribution to knowledge, is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current-practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness, and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  9. Feature Fusion of ICP-AES, UV-Vis and FT-MIR for Origin Traceability of Boletus edulis Mushrooms in Combination with Chemometrics.

    Science.gov (United States)

    Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao; Wang, Yuanzhong

    2018-01-15

Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometer (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 192 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms.
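As an illustration of the GS-SVM protocol described above (internal cross-validation to tune the model, then a single external prediction on held-out "unknown" samples), the sketch below uses synthetic data in place of the fused ICP-AES/UV-Vis/FT-MIR features; the grid values are illustrative, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for fused spectral features of 192 samples from
# four hypothetical origins.
X, y = make_classification(n_samples=192, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Step 1: internal cross-validation chooses (C, gamma) on the training set.
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                    cv=5)
grid.fit(X_train, y_train)

# Step 2: external prediction on samples unseen during tuning.
acc = grid.score(X_test, y_test)
print(grid.best_params_, f"external accuracy: {acc:.3f}")
```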

  10. Feature Fusion of ICP-AES, UV-Vis and FT-MIR for Origin Traceability of Boletus edulis Mushrooms in Combination with Chemometrics

    Directory of Open Access Journals (Sweden)

    Luming Qi

    2018-01-01

Full Text Available Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide. Its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometer (ICP-AES), ultraviolet-visible (UV-Vis) and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 184 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated based on each single sensor technique. Feature variables from the three instruments were used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of a classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%. These models also have an excellent generalization ability with the optimal parameters. Based on the combination of three sensor systems, our study provides a multi-sensor and comprehensive origin traceability of B. edulis mushrooms.

  11. A robust probabilistic collaborative representation based classification for multimodal biometrics

    Science.gov (United States)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems based on a single feature face fundamental limits. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities, and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.
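The collaborative representation step can be sketched as follows. This is the basic l2-regularised CRC decision rule (code the test vector over the whole training dictionary, then assign it to the class whose atoms reconstruct it best), not the robust probabilistic variant the paper proposes, and the two-class data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Basic collaborative representation classifier (CRC). X is a dictionary
# with one column per training sample; a test vector y is coded over ALL
# atoms jointly with an l2 penalty, then classified by per-class residual.
def crc_predict(X, labels, y, lam=0.1):
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

# Two synthetic classes scattered around different prototype vectors
# (stand-ins for fused block-LBP feature vectors).
d, n_per = 40, 15
proto = rng.normal(size=(d, 2))
X = np.hstack([proto[:, [c]] + 0.1 * rng.normal(size=(d, n_per))
               for c in (0, 1)])
labels = np.repeat([0, 1], n_per)
test = proto[:, 1] + 0.1 * rng.normal(size=d)
print(crc_predict(X, labels, test))
```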

  12. Robust balance shift control with posture optimization

    NARCIS (Netherlands)

    Kavafoglu, Z.; Kavafoglu, Ersan; Egges, J.

    2015-01-01

    In this paper we present a control framework which creates robust and natural balance shifting behaviours during standing. Given high-level features such as the position of the center of mass projection and the foot configurations, a kinematic posture satisfying these features is synthesized using

  13. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  14. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  15. Features for detecting smoke in laparoscopic videos

    Directory of Open Access Journals (Sweden)

    Jalal Nour Aldeen

    2017-09-01

Full Text Available Video-based smoke detection in laparoscopic surgery has different potential applications, such as the automatic addressing of surgical events associated with the electrocauterization task and the development of automatic smoke removal. In the literature, video-based smoke detection has been studied widely for fire surveillance systems. Nevertheless, the proposed methods are insufficient for smoke detection in laparoscopic videos because they often depend on assumptions which rarely hold in laparoscopic surgery, such as a static camera. In this paper, ten visual features based on the motion, texture and colour of smoke are proposed and evaluated for smoke detection in laparoscopic videos. These features are the RGB channels, an energy-based feature, texture features based on the gray level co-occurrence matrix (GLCM), an HSV colour space feature, and features based on the detection of moving regions using optical flow and on the smoke colour in HSV colour space. These features were tested on four laparoscopic cholecystectomy videos. Experimental observations show that each feature can provide valuable information for the smoke detection task. However, each feature has weaknesses in detecting the presence of smoke in some cases. By combining all the proposed features, smoke with high and even low density can be identified robustly, and the classification accuracy increases significantly.
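As an illustration of the GLCM texture features mentioned above, the sketch below computes a single-offset co-occurrence matrix and the classic contrast statistic by hand; the quantisation level, offset, and test images are illustrative and make no claim to match the paper's exact feature set:

```python
import numpy as np

# Minimal gray-level co-occurrence matrix (GLCM) for one pixel offset.
def glcm(img, levels=8, dx=1, dy=0):
    q = (img.astype(float) / 256 * levels).astype(int)  # quantise to `levels`
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1          # count co-occurrences
    return m / m.sum()                                  # normalise to probabilities

# Haralick contrast: large when neighbouring gray levels differ a lot.
def contrast(p):
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)  # smooth ramp
noisy = rng.integers(0, 256, (64, 64)).astype(np.uint8)              # random clutter
print(contrast(glcm(smooth)), contrast(glcm(noisy)))
```

Smoke regions tend to be locally smooth, so they score low on contrast relative to cluttered surgical background, which is what makes GLCM statistics useful discriminators here.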

  16. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.
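The smooth robust objective described in this abstract can be illustrated in miniature. The sketch below is a heavily simplified 1-D analogue (data term plus a robust Geman-McClure penalty over a nearest-neighbour graph, optimised by plain iteratively reweighted least squares); it is not the authors' algorithm, and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each point x_i gets a representative u_i. A robust penalty pulls
# connected representatives together, so representatives collapse within
# clusters while the data term keeps distinct clusters apart.
x = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])
n = len(x)

# connect each point to its 2 nearest neighbours (directed edges)
order = np.argsort(np.abs(x[:, None] - x[None, :]), axis=1)
edges = [(i, j) for i in range(n) for j in order[i, 1:3]]

u = x.copy()
lam, mu = 1.0, 0.5          # penalty weight and Geman-McClure scale (assumed)
for _ in range(100):
    A = np.eye(n)
    b = x.copy()
    for i, j in edges:
        # Geman-McClure weight: near 1 for close pairs, near 0 for far pairs,
        # so cross-cluster pairs would barely attract each other.
        w = (mu / (mu + (u[i] - u[j]) ** 2)) ** 2
        A[i, i] += lam * w
        A[i, j] -= lam * w
    u = np.linalg.solve(A, b)   # one reweighted least-squares step

print(u[:20].std(), u[20:].std(), abs(u[:20].mean() - u[20:].mean()))
```

Within-cluster representatives contract toward a common value while the gap between the two groups survives, which is the behaviour the continuous objective is designed to produce.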

  17. Robust control design with MATLAB

    CERN Document Server

    Gu, Da-Wei; Konstantinov, Mihail M

    2013-01-01

Robust Control Design with MATLAB® (second edition) helps the student to learn how to use well-developed advanced robust control design methods in practical cases. To this end, several realistic control design examples, from teaching-laboratory experiments such as a two-wheeled, self-balancing robot to complex systems like a flexible-link manipulator, are presented in detail. All of these exercises are conducted using MATLAB® Robust Control Toolbox 3, Control System Toolbox and Simulink®. By sharing their experiences in industrial cases with minimum recourse to complicated theories and formulae, the authors convey essential ideas and useful insights into robust industrial control systems design using major H-infinity optimization and related methods, allowing readers quickly to move on with their own challenges. The hands-on tutorial style of this text rests on an abundance of examples and features for the second edition: · rewritten and simplified presentation of theoretical and meth...

  18. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely...... ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...... the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled...

  19. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  20. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  1. Efficient robust conditional random fields.

    Science.gov (United States)

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages in popular applications across various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly when solving the training procedure of CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of the OGM, the gradient direction is determined jointly by the current gradient and the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that the OGM can tackle RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of the OGM in training our proposed RCRFs.
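The accelerated scheme described above is a Nesterov-style method: momentum built from historical gradients plus a Lipschitz-constant step size, applied to an l1-regularised objective. As a stand-in for the CRF objective (which requires structured data), the sketch below runs the analogous accelerated proximal gradient method (FISTA) on an l1-regularised least-squares problem with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression problem standing in for the l1-regularised
# training objective: 0.5*||Ax - b||^2 + lam*||x||_1.
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=50)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part

def soft(v, t):                     # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
z = np.zeros(100)                   # extrapolated point carrying the momentum
t = 1.0
for _ in range(300):
    x_new = soft(z - (A.T @ (A @ z - b)) / L, lam / L)  # step size 1/L
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)  # direction mixes in history
    x, t = x_new, t_new

print("nonzeros:", int(np.sum(np.abs(x) > 1e-3)))
```

The momentum update is what lifts the rate from the O(1/k) of plain (proximal) gradient descent to the optimal O(1/k²) the abstract refers to.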

  2. Triple Arterial Phase MR Imaging with Gadoxetic Acid Using a Combination of Contrast Enhanced Time Robust Angiography, Keyhole, and Viewsharing Techniques and Two-Dimensional Parallel Imaging in Comparison with Conventional Single Arterial Phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Lee, Jeong Min [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of); Yu, Mi Hye [Department of Radiology, Konkuk University Medical Center, Seoul 05030 (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul 04342 (Korea, Republic of); Han, Joon Koo [Department of Radiology, Seoul National University Hospital, Seoul 03080 (Korea, Republic of); Department of Radiology, Seoul National University College of Medicine, Seoul 03087 (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 03087 (Korea, Republic of)

    2016-11-01

To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.
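The group comparison in this abstract can be reproduced directly from the reported counts, assuming a standard 2×2 χ² test of independence (which is what the abstract appears to describe):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Late arterial phase captured without significant motion:
# triple-phase group 159/165, single-phase group 469/587.
table = np.array([[159, 165 - 159],
                  [469, 587 - 469]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

The resulting p-value is far below 0.001, consistent with the abstract's reported significance.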

  3. Triple arterial phase MR imaging with gadoxetic acid using a combination of contrast enhanced time robust angiography, keyhole, and viewsharing techniques and two-dimensional parallel imaging in comparison with conventional single arterial phase

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jeong Hee; Lee, Jeong Min; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of); Yu, Mi Hye [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Kim, Eun Ju [Philips Healthcare Korea, Seoul (Korea, Republic of)

    2016-07-15

    To determine whether triple arterial phase acquisition via a combination of Contrast Enhanced Time Robust Angiography, keyhole, temporal viewsharing and parallel imaging can improve arterial phase acquisition with higher spatial resolution than single arterial phase gadoxetic-acid enhanced magnetic resonance imaging (MRI). Informed consent was waived for this retrospective study by our Institutional Review Board. In 752 consecutive patients who underwent gadoxetic acid-enhanced liver MRI, either single (n = 587) or triple (n = 165) arterial phases were obtained in a single breath-hold under MR fluoroscopy guidance. Arterial phase timing was assessed, and the degree of motion was rated on a four-point scale. The percentage of patients achieving the late arterial phase without significant motion was compared between the two methods using the χ² test. The late arterial phase was captured at least once in 96.4% (159/165) of the triple arterial phase group and in 84.2% (494/587) of the single arterial phase group (p < 0.001). Significant motion artifacts (score ≤ 2) were observed in 13.3% (22/165), 1.2% (2/165), and 4.8% (8/165) on the 1st, 2nd, and 3rd scans of triple arterial phase acquisitions, and in 6.0% (35/587) of single phase acquisitions. Thus, the late arterial phase without significant motion artifacts was captured in 96.4% (159/165) of the triple arterial phase group and in 79.9% (469/587) of the single arterial phase group (p < 0.001). Triple arterial phase imaging may reliably provide adequate arterial phase imaging for gadoxetic acid-enhanced liver MRI.
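
    The group comparison above is a standard Pearson χ² test on a 2×2 table (late phase captured without motion vs. not, by acquisition method). A minimal pure-Python sketch using the reported counts; the closed-form statistic and the 1-degree-of-freedom p-value via `erfc` are standard, though the authors may have used a statistics package:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for the 2x2 contingency
    table [[a, b], [c, d]] (1 degree of freedom, no continuity correction)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, the chi-square survival function reduces to erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Late arterial phase without significant motion:
# 159/165 (triple) vs 469/587 (single), as reported above.
stat, p = chi2_2x2(159, 165 - 159, 469, 587 - 469)
print(f"chi2 = {stat:.1f}, p = {p:.2g}")  # chi2 ≈ 25.4, p < 0.001
```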

  4. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-mission (EEM) data was developed. The results show...

  5. A preclinical orthotopic model for glioblastoma recapitulates key features of human tumors and demonstrates sensitivity to a combination of MEK and PI3K pathway inhibitors.

    Science.gov (United States)

    El Meskini, Rajaa; Iacovelli, Anthony J; Kulaga, Alan; Gumprecht, Michelle; Martin, Philip L; Baran, Maureen; Householder, Deborah B; Van Dyke, Terry; Weaver Ohler, Zoë

    2015-01-01

    Current therapies for glioblastoma multiforme (GBM), the highest grade malignant brain tumor, are mostly ineffective, and better preclinical model systems are needed to increase the successful translation of drug discovery efforts into the clinic. Previous work describes a genetically engineered mouse (GEM) model that contains perturbations in the most frequently dysregulated networks in GBM (driven by RB, KRAS and/or PI3K signaling and PTEN) that induce development of Grade IV astrocytoma with properties of the human disease. Here, we developed and characterized an orthotopic mouse model derived from the GEM that retains the features of the GEM model in an immunocompetent background; however, this model is also tractable and efficient for preclinical evaluation of candidate therapeutic regimens. Orthotopic brain tumors are highly proliferative, invasive and vascular, and express histology markers characteristic of human GBM. Primary tumor cells were examined for sensitivity to chemotherapeutics and targeted drugs. PI3K and MAPK pathway inhibitors, when used as single agents, inhibited cell proliferation but did not result in significant apoptosis. However, in combination, these inhibitors resulted in a substantial increase in cell death. Moreover, these findings translated into the in vivo orthotopic model: PI3K or MAPK inhibitor treatment regimens resulted in incomplete pathway suppression and feedback loops, whereas dual treatment delayed tumor growth through increased apoptosis and decreased tumor cell proliferation. Analysis of downstream pathway components revealed a cooperative effect on target downregulation. These concordant results, together with the morphologic similarities to the human GBM disease characteristics of the model, validate it as a new platform for the evaluation of GBM treatment. © 2015. Published by The Company of Biologists Ltd.

  6. A preclinical orthotopic model for glioblastoma recapitulates key features of human tumors and demonstrates sensitivity to a combination of MEK and PI3K pathway inhibitors

    Directory of Open Access Journals (Sweden)

    Rajaa El Meskini

    2015-01-01

    Full Text Available Current therapies for glioblastoma multiforme (GBM), the highest grade malignant brain tumor, are mostly ineffective, and better preclinical model systems are needed to increase the successful translation of drug discovery efforts into the clinic. Previous work describes a genetically engineered mouse (GEM) model that contains perturbations in the most frequently dysregulated networks in GBM (driven by RB, KRAS and/or PI3K signaling and PTEN) that induce development of Grade IV astrocytoma with properties of the human disease. Here, we developed and characterized an orthotopic mouse model derived from the GEM that retains the features of the GEM model in an immunocompetent background; however, this model is also tractable and efficient for preclinical evaluation of candidate therapeutic regimens. Orthotopic brain tumors are highly proliferative, invasive and vascular, and express histology markers characteristic of human GBM. Primary tumor cells were examined for sensitivity to chemotherapeutics and targeted drugs. PI3K and MAPK pathway inhibitors, when used as single agents, inhibited cell proliferation but did not result in significant apoptosis. However, in combination, these inhibitors resulted in a substantial increase in cell death. Moreover, these findings translated into the in vivo orthotopic model: PI3K or MAPK inhibitor treatment regimens resulted in incomplete pathway suppression and feedback loops, whereas dual treatment delayed tumor growth through increased apoptosis and decreased tumor cell proliferation. Analysis of downstream pathway components revealed a cooperative effect on target downregulation. These concordant results, together with the morphologic similarities to the human GBM disease characteristics of the model, validate it as a new platform for the evaluation of GBM treatment.

  7. Robust speaker recognition in noisy environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book discusses speaker recognition methods that deal with realistic, variable noisy environments. The text covers authentication systems that are robust to noisy background environments, function in real time, and can be incorporated in mobile devices. The book focuses on different approaches to enhance the accuracy of speaker recognition in the presence of varying background environments. The authors examine: (a) feature compensation using multiple background models, (b) feature mapping using data-driven stochastic models, (c) design of a supervector-based GMM-SVM framework for robust speaker recognition, (d) total variability modeling (i-vectors) in a discriminative framework, and (e) a boosting method to fuse evidence from multiple SVM models.

  8. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    that describe object visual appearance such as shape, colour, texture etc. This thesis focuses on robust object detection and pose estimation of rigid objects using 3D information. The thesis main contributions are novel feature descriptors together with object detection and pose estimation algorithms....... The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed...... of the methods work well for one type of objects in a specific scenario, in another scenario or with different objects they might fail, therefore more robust solutions are required. The typical problem solution is the design of robust feature descriptors, where feature descriptors contain information...

  9. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering-vector and covariance-matrix uncertainty. In this paper, a new approach is presented that makes the adaptive minimum variance distortionless response (MVDR) beamformer robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex, but in this paper it is converted into a convex optimization problem. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
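
    The paper solves a convex program; a far simpler and widely used device for the same robustness goal is diagonal loading of the sample covariance before forming the MVDR weights w = R⁻¹a / (aᴴR⁻¹a). The sketch below is illustrative only (a 2-element real-valued array, not the paper's method):

```python
def solve2(R, b):
    """Solve the 2x2 linear system R y = b by Cramer's rule."""
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    return ((b[0] * r22 - b[1] * r12) / det,
            (r11 * b[1] - r21 * b[0]) / det)

def robust_mvdr_weights(R, a, loading=0.1):
    """MVDR weights with diagonal loading: w = Rl^-1 a / (a^T Rl^-1 a),
    where Rl = R + loading * I. The loading term is a classic way to make
    the beamformer robust to covariance/steering errors."""
    Rl = [[R[0][0] + loading, R[0][1]],
          [R[1][0], R[1][1] + loading]]
    y = solve2(Rl, a)
    denom = a[0] * y[0] + a[1] * y[1]
    return (y[0] / denom, y[1] / denom)

# 2-element array: unit noise plus a strong source at broadside, a = (1, 1)
a = (1.0, 1.0)
R = [[11.0, 10.0], [10.0, 11.0]]
w = robust_mvdr_weights(R, a)
print(w[0] * a[0] + w[1] * a[1])  # distortionless constraint holds: ≈ 1.0
```

    By construction the response toward the look direction stays unity regardless of the loading level; the loading only trades interference suppression for robustness.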

  10. Robust tumor morphometry in multispectral fluorescence microscopy

    Science.gov (United States)

    Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2009-02-01

    Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
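
    A minimal sketch of the MST half of such a feature set: Prim's algorithm over the epithelial-nuclei centroids gives the tree's edge lengths, whose summary statistics (mean, spread, etc.) are typical MST-based morphometric features. The coordinates here are hypothetical, and the fractal-dimension feature is not shown:

```python
import math

def mst_edge_lengths(points):
    """Prim's algorithm on the complete graph of 2D points; returns the
    edge lengths of the minimum spanning tree."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection of each point to the tree
    best[0] = 0.0
    lengths = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if best[u] > 0:
            lengths.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return lengths

# Hypothetical nuclei centroids; MST statistics would feed the classifier
pts = [(0, 0), (1, 0), (1, 1), (5, 5)]
edges = mst_edge_lengths(pts)
print(sorted(edges))  # [1.0, 1.0, 5.656...]
```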

  11. Robust surface registration using N-points approximate congruent sets

    Directory of Open Access Journals (Sweden)

    Yao Jian

    2011-01-01

    Full Text Available Abstract Scans acquired by 3D sensors are typically represented in a local coordinate system. When multiple scans, taken from different locations, represent the same scene these must be registered to a common reference frame. We propose a fast and robust registration approach to automatically align two scans by finding two sets of N-points, that are approximately congruent under rigid transformation and leading to a good estimate of the transformation between their corresponding point clouds. Given two scans, our algorithm randomly searches for the best sets of congruent groups of points using a RANSAC-based approach. To successfully and reliably align two scans when there is only a small overlap, we improve the basic RANSAC random selection step by employing a weight function that approximates the probability of each pair of points in one scan to match one pair in the other. The search time to find pairs of congruent sets of N-points is greatly reduced by employing a fast search codebook based on both binary and multi-dimensional lookup tables. Moreover, we introduce a novel indicator of the overlapping region quality which is used to verify the estimated rigid transformation and to improve the alignment robustness. Our framework is general enough to incorporate and efficiently combine different point descriptors derived from geometric and texture-based feature points or scene geometrical characteristics. We also present a method to improve the matching effectiveness of texture feature descriptors by extracting them from an atlas of rectified images recovered from the scan reflectance image. Our algorithm is robust with respect to different sampling densities and also resilient to noise and outliers. We demonstrate its robustness and efficiency on several challenging scan datasets with varying degree of noise, outliers, extent of overlap, acquired from indoor and outdoor scenarios.
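
    Each RANSAC hypothesis in such a pipeline fits a rigid transform to a small candidate set of correspondences and is then scored by its inliers. A 2D sketch of the per-hypothesis closed-form fit (the planar analogue of the Kabsch solution; the paper works on 3D scans and adds weighted sampling and codebook search, which are omitted here):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (closed form: rotation angle from centered cross/dot sums)."""
    cx = [sum(p[i] for p in src) / len(src) for i in (0, 1)]
    cy = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]
    s_cos = s_sin = 0.0
    for (x1, y1), (x2, y2) in zip(src, dst):
        ax, ay = x1 - cx[0], y1 - cx[1]
        bx, by = x2 - cy[0], y2 - cy[1]
        s_cos += ax * bx + ay * by   # sum of dot products
        s_sin += ax * by - ay * bx   # sum of cross products
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cy[0] - (c * cx[0] - s * cx[1])
    ty = cy[1] - (s * cx[0] + c * cx[1])
    return theta, (tx, ty)

# Synthetic check: rotate by 30 degrees, shift by (4, -1), then recover
theta_true = math.radians(30)
c, s = math.cos(theta_true), math.sin(theta_true)
src = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
dst = [(c * x - s * y + 4.0, s * x + c * y - 1.0) for x, y in src]
theta, t = fit_rigid_2d(src, dst)
print(math.degrees(theta), t)  # ≈ 30.0 and ≈ (4.0, -1.0)
```

    A full RANSAC loop would repeat this fit over many randomly sampled candidate sets and keep the transform with the largest overlap-quality score, as the abstract describes.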

  12. Feature hashing for fast image retrieval

    Science.gov (United States)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Currently, research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable. Hence, we need to pay much attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, combining the influence of both the neighborhood structure of the feature data and the mapping error. Since the machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes lead to a low-complexity image representation, making the approach efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.
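
    For contrast with the learned hashing objective described above, the simplest way to get compact binary fingerprints is learning-free random-hyperplane hashing: each bit is the sign of a projection, so nearby feature vectors agree on most bits and can be compared by Hamming distance. A pure-Python sketch (not the paper's method):

```python
import random

def random_hyperplane_hash(vec, planes):
    """One bit per hyperplane: the sign of the projection onto it.
    Similar vectors collide on most bits (learning-free LSH)."""
    return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0)
                 for plane in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

random.seed(0)
dim, bits = 8, 16
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

x = [1.0] * dim
near = [v + random.uniform(-0.05, 0.05) for v in x]  # small perturbation of x
far = [-v for v in x]                                # negated vector

h_x = random_hyperplane_hash(x, planes)
d_near = hamming(h_x, random_hyperplane_hash(near, planes))
d_far = hamming(h_x, random_hyperplane_hash(far, planes))
print(d_near, d_far)  # d_near is small; d_far is 16 (every bit flips)
```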

  13. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  14. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  15. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  16. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as forwarded by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite and a simulated version of the qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  17. Design Robust Controller for Rotary Kiln

    Directory of Open Access Journals (Sweden)

    Omar D. Hernández-Arboleda

    2013-11-01

    Full Text Available This paper presents the design of a robust controller for a rotary kiln. The designed controller is a combination of a fractional PID and a linear quadratic regulator (LQR), which have not previously been used to control kilns. In addition, robustness criteria (gain margin, phase margin, strength gain, high-frequency noise rejection and sensitivity) are evaluated for the entire controller-plant model, obtaining good results over a frequency range of 0.020 to 90 rad/s, which contributes to the robustness of the system.

  18. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
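
    For the interval (box) uncertainty set mentioned above, the robust counterpart of a linear constraint a·x ≤ b has a simple closed form: each coefficient deviates against you, adding its deviation times |xᵢ| to the left-hand side. A tiny numeric sketch with made-up coefficients:

```python
def box_robust_lhs(a_nom, a_dev, x):
    """Worst-case value of a.x when each coefficient a_i ranges over the
    interval [a_nom_i - a_dev_i, a_nom_i + a_dev_i] (box uncertainty set):
    the robust counterpart adds a_dev_i * |x_i| per coefficient."""
    return (sum(an * xi for an, xi in zip(a_nom, x))
            + sum(ad * abs(xi) for ad, xi in zip(a_dev, x)))

# Illustrative constraint a.x <= 10 with 10% deviation on each coefficient
a_nom, a_dev, b = [1.0, 2.0], [0.1, 0.2], 10.0
x = [2.0, 4.0]
nominal = sum(an * xi for an, xi in zip(a_nom, x))
worst = box_robust_lhs(a_nom, a_dev, x)
print(nominal <= b, worst <= b)  # nominally feasible (10.0), robustly infeasible (11.0)
```

    The ellipsoidal and polyhedral sets studied in the paper replace the |xᵢ| penalty with a norm-based term, trading conservatism for probabilistic guarantees.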

  19. Environmental Noise, Genetic Diversity and the Evolution of Evolvability and Robustness in Model Gene Networks

    Science.gov (United States)

    Steiner, Christopher F.

    2012-01-01

    The ability of organisms to adapt and persist in the face of environmental change is accepted as a fundamental feature of natural systems. More contentious is whether the capacity of organisms to adapt (or “evolvability”) can itself evolve and the mechanisms underlying such responses. Using model gene networks, I provide evidence that evolvability emerges more readily when populations experience positively autocorrelated environmental noise (red noise) compared to populations in stable or randomly varying (white noise) environments. Evolvability was correlated with increasing genetic robustness to effects on network viability and decreasing robustness to effects on phenotypic expression; populations whose networks displayed greater viability robustness and lower phenotypic robustness produced more additive genetic variation and adapted more rapidly in novel environments. Patterns of selection for robustness varied antagonistically with epistatic effects of mutations on viability and phenotypic expression, suggesting that trade-offs between these properties may constrain their evolutionary responses. Evolution of evolvability and robustness was stronger in sexual populations compared to asexual populations indicating that enhanced genetic variation under fluctuating selection combined with recombination load is a primary driver of the emergence of evolvability. These results provide insight into the mechanisms potentially underlying rapid adaptation as well as the environmental conditions that drive the evolution of genetic interactions. PMID:23284934
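
    Positively autocorrelated (red) environmental noise of the kind contrasted with white noise above is commonly generated as an AR(1) process. A small illustrative sketch (parameters are made up) checking the lag-1 autocorrelation that distinguishes the two regimes:

```python
import random

def ar1_series(n, phi, seed=1):
    """AR(1) process x_t = phi * x_{t-1} + e_t; phi > 0 gives positively
    autocorrelated ('red') noise, phi = 0 gives white noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

red = ar1_series(20000, 0.9)
white = ar1_series(20000, 0.0)
print(round(lag1_autocorr(red), 2), round(lag1_autocorr(white), 2))  # ≈ 0.9 and ≈ 0.0
```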

  20. Proscription supports robust perceptual integration by suppression in human visual cortex.

    Science.gov (United States)

    Rideaux, Reuben; Welchman, Andrew E

    2018-04-17

    Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features to provide 'what not' information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain. We show that concentrations of inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of 'what not' sensors in supporting sensory estimation.

  1. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  2. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  3. Robust velocity and load control of a steam turbine in a combined cycle thermoelectric power station

    Energy Technology Data Exchange (ETDEWEB)

    Reyes Archundia, Enrique

    1998-12-31

    This research work is oriented to the design, development and validation of a modern control algorithm that achieves better performance in the speed control of a steam turbine belonging to a combined cycle thermoelectric power station, over the entire operating interval, as well as better performance in controlling the number of megawatts generated when the turbine is connected to an electric power generator, comparing its performance with that of the existing conventional controller. Changes in the speed or load reference are made at the operator's request and always occur as ramps, whose slope indicates how quickly the reference value should change. For this reason, the main objective of the controller to be designed is good tracking of ramp references. In the existing steam turbine subsystem, the valves that regulate the steam flow to the turbine are coupled with a bypass valve that can divert the steam flow to the main condenser without passing through the turbine. For this reason, a multivariable control that accounts for the interaction between these valves is preferable to a single-variable design. Robust H∞ control has the following characteristics that allow it to be applied to the steam turbine process: the design can be made to include two poles at the origin, which yields good tracking of ramp references; it handles uncertainty, so good results are expected over the entire operating interval; and it allows the design of multivariable controllers, so the interaction between the control and bypass valves is taken into account.
It is very difficult to perform tests on the real process, owing to the costs and risks involved; nevertheless, the developments achieved in the areas

  4. Gap features of layered iron-selenium-tellurium compound below and above the superconducting transition temperature by break-junction spectroscopy combined with STS

    Science.gov (United States)

    Ekino, T.; Sugimoto, A.; Gabovich, A. M.

    2018-05-01

    We studied correlations between the superconducting gap features of Te-substituted FeSe observed by scanning tunnelling spectroscopy (STS) and break-junction tunnelling spectroscopy (BJTS). At bias voltages outside the superconducting gap-energy range, the broad gap structure exists, which becomes the normal-state gap above the critical temperature Tc. Such behaviour is consistent with the model of the partially gapped density-wave superconductor involving both superconducting gaps and pseudogaps, which has been applied by us earlier to high-Tc cuprates. The similarity suggests that the parent electronic spectrum features should have much in common for these classes of materials.

  5. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  6. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  7. How do preferential flow features connect? Combining tracers and excavation to examine hillslope flow pathways on Vancouver Island, British Columbia, Canada.

    Science.gov (United States)

    Anderson, A. E.; Weiler, M.

    2005-12-01

    Preferential flow is a complex process that influences water flow and solute transport in soils at different scales. Many studies have advanced our understanding about the physical structures of preferential pathways and their effects on water flow and solute transport at the column and plot scale. However, we still know very little about how preferential flow features connect over large distances and how they influence water flow and solute transport at the hillslope and catchment scale. Working in a forested watershed on northeast Vancouver Island in British Columbia, Canada, we conducted several artificial tracer experiments under natural and steady state flow conditions to investigate how water and solutes move through a hillslope section above a road cutbank. After these "black-box" tracer experiments we applied a blue food dye and excavated the hillslope to visualize the stained flow pathways. Under natural conditions two of the largest preferential features transmitted water at rates up to 30 liters/min. When a NaCl tracer was applied 12 m upslope of the road cutbank one soil pipe transmitted 97% of the recovered tracer during two large storms. When tracer was applied 30 m upslope of the road a more diffused response was observed. For the steady-state conditions we pumped water into trenches excavated at 12 m and 30 m above the road and then applied NaCl during constant outflow. Pumping water into the 12 m trench produced flow from only two preferential features, but a response in all preferential features was observed when water was pumped into the 30 m trench. The detailed excavations showed that the largest preferential feature was connected to the lower trench by large soil pipes at the interface of the organic and mineral soil horizons that were connected by flow through the organic soil.
Other cross sections between 12 and 30 m upslope revealed concentrated flow through coarse mineral soil, diffused flow through mineral and organic soil, flow along

  8. Automated detection of microaneurysms using robust blob descriptors

    Science.gov (United States)

    Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.

    2013-03-01

Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) that can be seen as round dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions which are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection technique against state-of-the-art methods, as well as the promise of the proposed descriptors for the localization of MAs in fundus images.
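The Hessian-based candidate selection step can be illustrated with a small determinant-of-Hessian sketch. This is a generic, numpy-only stand-in (finite-difference derivatives, a single scale, an illustrative threshold), not the authors' pipeline, which also applies SVD contrast enhancement and SURF/Radon descriptors:

```python
import numpy as np

def hessian_blob_candidates(img, threshold=0.01):
    """Return (row, col) candidates where the determinant of the
    image Hessian is a local maximum above `threshold`.

    A toy stand-in for Hessian-based candidate selection; real
    detectors smooth the image over several scales first.
    """
    # Second derivatives via central finite differences.
    Iy, Ix = np.gradient(img.astype(float))
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    det_h = Ixx * Iyy - Ixy * Iyx  # determinant of the 2x2 Hessian

    cands = []
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            patch = det_h[r - 1:r + 2, c - 1:c + 2]
            if det_h[r, c] >= threshold and det_h[r, c] == patch.max():
                cands.append((r, c))
    return cands

# A dark round blob on a bright background (loosely like an MA
# on a fundus image); the detector should fire at its center.
img = np.ones((21, 21))
yy, xx = np.mgrid[:21, :21]
img -= 0.8 * np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)
print(hessian_blob_candidates(img))
```

A production detector would replace the finite differences with Gaussian-derivative filters evaluated at multiple scales, but the candidate logic is the same.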

  9. Framework for Robustness Assessment of Timber Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

This paper presents a theoretical framework for the design and analysis of robustness of timber structures. This is actualized by a more frequent use of advanced types of timber structures with limited redundancy and serious consequences in the case of failure. Combined with increased requirements...... to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential. Further, the collapse of the Ballerup Super Arena, the Bad Reichenhall Ice-Arena and a number of other structural systems during the last 10 years has...... increased the interest in robustness. Typically, modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements...

  10. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  11. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case

  12. Morphological self-organizing feature map neural network with applications to automatic target recognition

    Science.gov (United States)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

The rotation invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, topological region shrinkage is achieved. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher recognition rate, robust adaptability, quicker training, and better generalization.

  13. Object detection based on improved color and scale invariant features

    Science.gov (United States)

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system mainly adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and a descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive but also ill-suited to color images. To overcome these drawbacks, we employ local color kernel histograms and Haar wavelet responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computation costs.

  14. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the method selected 36% of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....

  15. Scale-adaptive Local Patches for Robust Visual Object Tracking

    Directory of Open Access Journals (Sweden)

    Kang Sun

    2014-04-01

This paper discusses the problem of robustly tracking objects which undergo rapid and dramatic scale changes. To remove the weakness of global appearance models, we present a novel scheme that combines the object’s global and local appearance features. The local feature is a set of local patches that geometrically constrain the changes in the target’s appearance. In order to adapt to the object’s geometric deformation, local patches can be removed and added online. The addition of these patches is constrained by global features such as color, texture and motion. The global visual features are updated via the stable local patches during tracking. To deal with scale changes, we adapt the scale of the patches in addition to adapting the object bounding box. We evaluate our method by comparing it to several state-of-the-art trackers on publicly available datasets. The experimental results on challenging sequences confirm that, by using these scale-adaptive local patches and global properties, our tracker outperforms the related trackers in many cases, with a smaller failure rate as well as better accuracy.

  16. Robust-mode analysis of hydrodynamic flows

    Science.gov (United States)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
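Koopman decompositions of snapshot data are commonly computed in practice via dynamic mode decomposition (DMD). The sketch below is a generic exact-DMD routine on toy data (illustrative rank choice), not the paper's robust-mode selection procedure:

```python
import numpy as np

def dmd_modes(X, Y, rank):
    """Exact dynamic mode decomposition: given snapshot matrices
    X = [x_0 ... x_{m-1}] and Y = [x_1 ... x_m] with Y ~ A X,
    return the leading Koopman (DMD) eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    # Project the linear propagator A onto the leading POD subspace.
    A_tilde = U.conj().T @ Y @ V / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W          # exact DMD modes
    return eigvals, modes

# Toy data: a single rotating mode; DMD should recover its
# unit-magnitude eigenvalue pair exp(+/- 0.3j).
t = np.arange(50)
x = np.vstack([np.cos(0.3 * t), np.sin(0.3 * t)])
eigvals, _ = dmd_modes(x[:, :-1], x[:, 1:], rank=2)
print(eigvals)
```

Robust-mode analysis would additionally repeat such decompositions across data subsets and retain only modes that recur; here only the core decomposition is shown.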

  17. Significance of Joint Features Derived from the Modified Group Delay Function in Speech Processing

    Directory of Open Access Journals (Sweden)

    Murthy Hema A

    2007-01-01

This paper investigates the significance of combining cepstral features derived from the modified group delay function and from the short-time spectral magnitude like the MFCC. The conventional group delay function fails to capture the resonant structure and the dynamic range of the speech spectrum primarily due to pitch periodicity effects. The group delay function is modified to suppress these spikes and to restore the dynamic range of the speech spectrum. Cepstral features are derived from the modified group delay function, which are called the modified group delay feature (MODGDF). The complementarity and robustness of the MODGDF when compared to the MFCC are also analyzed using spectral reconstruction techniques. Combination of several spectral magnitude-based features and the MODGDF using feature fusion and likelihood combination is described. These features are then used for three speech processing tasks, namely, syllable, speaker, and language recognition. Results indicate that combining MODGDF with MFCC at the feature level gives significant improvements for speech recognition tasks in noise. Combining the MODGDF and the spectral magnitude-based features gives a significant increase in recognition performance of 11% at best, while combining any two features derived from the spectral magnitude does not give any significant improvement.
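The starting point for the MODGDF is the standard DFT identity for the group delay, tau(w) = (X_R·Y_R + X_I·Y_I) / |X(w)|², where Y is the DFT of n·x[n]. The sketch below computes only this unmodified form; the modification (replacing |X|² with a cepstrally smoothed spectrum and applying compression exponents) is omitted:

```python
import numpy as np

def group_delay(x, nfft=None):
    """Group delay tau(w) = (X_R*Y_R + X_I*Y_I) / |X(w)|^2,
    where Y is the DFT of n*x[n] -- the standard identity that the
    modified group delay function starts from."""
    n = np.arange(len(x))
    X = np.fft.fft(x, nfft)
    Y = np.fft.fft(n * x, nfft)
    return (X.real * Y.real + X.imag * Y.imag) / (np.abs(X) ** 2)

# Sanity check: a pure delay of d samples has constant group delay d.
d = 5
x = np.zeros(32); x[d] = 1.0
print(group_delay(x))  # all entries = 5.0
```

For real speech the denominator's near-zero values around pitch harmonics produce the spikes the abstract mentions, which is exactly what the cepstral smoothing in the MODGDF suppresses.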

  18. Robust Statistics and Regularization for Feature Extraction and UXO Discrimination

    Science.gov (United States)

    2011-07-01

behind our inversion routines. Throughout this report we use the dipole model (Bell (2005), Pasion (2007)) to predict observed TEM data. The...data acquired over axisymmetric targets (Pasion, 2007). Of course, the two-dipole model may not provide a good fit to data acquired over an non...in order to characterize the distributions of TOI polarizabilities. More description of this procedure is given in Pasion et al. (2011). Figure 28

  19. A combined MRI and MRSI based multiclass system for brain tumour recognition using LS-SVMs with class probabilities and feature selection.

    NARCIS (Netherlands)

    Luts, J.; Heerschap, A.; Suykens, J.A.; Huffel, S. van

    2007-01-01

    OBJECTIVE: This study investigates the use of automated pattern recognition methods on magnetic resonance data with the ultimate goal to assist clinicians in the diagnosis of brain tumours. Recently, the combined use of magnetic resonance imaging (MRI) and magnetic resonance spectroscopic imaging

  20. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.
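The noise-based account can be illustrated with a toy calculation: if noisy learning makes estimates regress toward 1/2 in proportion to how rarely a group is observed (modeled here with an illustrative pseudo-count prior, not the cited accounts' actual simulations), two equally positive groups end up judged differently:

```python
from fractions import Fraction

def shrunk_estimate(pos, neg, prior=1):
    """Laplace-smoothed estimate of P(positive | group): unbiased
    in the limit, but it regresses toward 1/2 more strongly for
    smaller samples -- a crude stand-in for noisy learning."""
    return Fraction(pos + prior, pos + neg + 2 * prior)

# Classic 2:1 design (Hamilton & Gifford): both groups are
# exactly 2/3 positive, but the minority is seen half as often.
majority = shrunk_estimate(16, 8)   # 17/26
minority = shrunk_estimate(8, 4)    # 9/14
print(majority > minority)          # the majority "looks" more positive
```

Because the shrinkage never vanishes for finite noise, this account predicts the IC persists under extended experience, consistent with the experiment reported above.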

  1. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

Due to economic pressure, industries, when planning, tend to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  2. Prognostic impact of demographic factors and clinical features on the mode of death in high-risk patients after myocardial infarction--a combined analysis from multicenter trials

    DEFF Research Database (Denmark)

    Yap, Yee Guan; Duong, Trinh; Bland, J Martin

    2005-01-01

    mortality, whereas diabetes was only predictive of all-cause mortality. Smoking habit and atrial fibrillation had no prognostic value. Similar parameters were also predictive of short-term mortality, but not identical. CONCLUSIONS: Our study has shown that in high-risk patients post MI, who have been...... preselected using LVEF or frequent ventricular premature beats, demographic and clinical features are powerful predictors of mortality in the thrombolytic era. We propose that demographic and clinical factors should be considered when designing risk stratification or survival studies, or when identifying high...

  3. Robust online face tracking-by-detection

    NARCIS (Netherlands)

    Comaschi, F.; Stuijk, S.; Basten, T.; Corporaal, H.

    2016-01-01

    The problem of online face tracking from unconstrained videos is still unresolved. Challenges range from coping with severe online appearance variations to coping with occlusion. We propose RFTD (Robust Face Tracking-by-Detection), a system which combines tracking and detection into a single

  4. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Java programs called Featureous that addresses this issue. Featureous allows a programmer to easily establish feature-code traceability links and to analyze their characteristics using a number of visualizations. Featureous is an extension to the NetBeans IDE, and can itself be extended by third...

  5. A Novel Algorithm for Determining the Contextual Characteristics of Movement Behaviors by Combining Accelerometer Features and Wireless Beacons: Development and Implementation.

    Science.gov (United States)

    Magistro, Daniele; Sessa, Salvatore; Kingsnorth, Andrew P; Loveday, Adam; Simeone, Alessandro; Zecca, Massimiliano; Esliger, Dale W

    2018-04-20

Unfortunately, global efforts to promote "how much" physical activity people should be undertaking have been largely unsuccessful. Given the difficulty of achieving a sustained lifestyle behavior change, many scientists are reexamining their approaches. One such approach is to focus on understanding the context of the lifestyle behavior (ie, where, when, and with whom) with a view to identifying promising intervention targets. The aim of this study was to develop and implement an innovative algorithm to determine "where" physical activity occurs using proximity sensors coupled with a widely used physical activity monitor. A total of 19 Bluetooth beacons were placed in fixed locations within a multilevel, mixed-use building. In addition, 4 receiver-mode sensors were fitted to the wrists of a roving technician who moved throughout the building. The experiment was divided into 4 trials with different walking speeds and dwelling times. The data were analyzed using an original and innovative algorithm based on graph generation and Bayesian filters. Linear regression models revealed significant correlations between beacon-derived location and ground-truth tracking time, with intraclass correlations suggesting a high goodness of fit (R² = .9780). The algorithm reliably predicted indoor location, and the robustness of the algorithm improved with a longer dwelling time (>100 s; error location of an individual within an indoor environment. This novel implementation of "context sensing" will facilitate a wealth of new research questions on promoting healthy behavior change, the optimization of patient care, and efficient health care planning (eg, patient-clinician flow, patient-clinician interaction). ©Daniele Magistro, Salvatore Sessa, Andrew P Kingsnorth, Adam Loveday, Alessandro Simeone, Massimiliano Zecca, Dale W Esliger. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.04.2018.
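The graph-plus-Bayesian-filter idea can be sketched as a discrete Bayes filter over rooms, with transitions restricted to an adjacency graph. The rooms, transition probabilities, and beacon likelihoods below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

# Hypothetical 4-room corridor: movement only between adjacent rooms.
adjacency = np.array([
    [0.70, 0.30, 0.00, 0.00],
    [0.15, 0.70, 0.15, 0.00],
    [0.00, 0.15, 0.70, 0.15],
    [0.00, 0.00, 0.30, 0.70],
])
# P(hear beacon b | in room r): each room has one dominant beacon.
likelihood = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.80, 0.05, 0.05],
    [0.05, 0.05, 0.80, 0.10],
    [0.05, 0.05, 0.10, 0.80],
])

def bayes_filter(observations, belief=None):
    """Discrete Bayes filter: predict along the room graph, then
    update with each observed beacon ID."""
    if belief is None:
        belief = np.full(4, 0.25)        # uniform prior over rooms
    for b in observations:
        belief = adjacency.T @ belief    # motion / prediction step
        belief *= likelihood[:, b]       # measurement update
        belief /= belief.sum()
    return belief

# A walk past beacons 0 -> 1 -> 2 should end with room 2 most likely.
print(np.argmax(bayes_filter([0, 1, 2])))
```

The adjacency constraint is what makes longer dwelling times help: repeated consistent observations concentrate the belief faster than any single noisy reading could.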

  6. Refined estimates of local recurrence risks by DCIS score adjusting for clinicopathological features: a combined analysis of ECOG-ACRIN E5194 and Ontario DCIS cohort studies.

    Science.gov (United States)

    Rakovitch, E; Gray, R; Baehner, F L; Sutradhar, R; Crager, M; Gu, S; Nofech-Mozes, S; Badve, S S; Hanna, W; Hughes, L L; Wood, W C; Davidson, N E; Paszat, L; Shak, S; Sparano, J A; Solin, L J

    2018-06-01

    Better tools are needed to estimate local recurrence (LR) risk after breast-conserving surgery (BCS) for DCIS. The DCIS score (DS) was validated as a predictor of LR in E5194 and Ontario DCIS cohort (ODC) after BCS. We combined data from E5194 and ODC adjusting for clinicopathological factors to provide refined estimates of the 10-year risk of LR after treatment by BCS alone. Data from E5194 and ODC were combined. Patients with positive margins or multifocality were excluded. Identical Cox regression models were fit for each study. Patient-specific meta-analysis was used to calculate precision-weighted estimates of 10-year LR risk by DS, age, tumor size and year of diagnosis. The combined cohort includes 773 patients. The DS and age at diagnosis, tumor size and year of diagnosis provided independent prognostic information on the 10-year LR risk (p ≤ 0.009). Hazard ratios from E5194 and ODC cohorts were similar for the DS (2.48, 1.95 per 50 units), tumor size ≤ 1 versus  > 1-2.5 cm (1.45, 1.47), age ≥ 50 versus  15%) 10-year LR risk after BCS alone compared to utilization of DS alone or clinicopathological factors alone. The combined analysis provides refined estimates of 10-year LR risk after BCS for DCIS. Adding information on tumor size and age at diagnosis to the DS adjusting for year of diagnosis provides improved LR risk estimates to guide treatment decision making.

  7. On the robustness of Herlihy's hierarchy

    Science.gov (United States)

    Jayanti, Prasad

    1993-01-01

A wait-free hierarchy maps object types to levels in Z⁺ ∪ {∞} and has the following property: if a type T is at level N, and T' is an arbitrary type, then there is a wait-free implementation of an object of type T', for N processes, using only registers and objects of type T. The infinite hierarchy defined by Herlihy is an example of a wait-free hierarchy. A wait-free hierarchy is robust if it has the following property: if T is at level N, and S is a finite set of types belonging to levels N - 1 or lower, then there is no wait-free implementation of an object of type T, for N processes, using any number and any combination of objects belonging to the types in S. Robustness implies that there are no clever ways of combining weak shared objects to obtain stronger ones. Contrary to what many researchers believe, we prove that Herlihy's hierarchy is not robust. We then define some natural variants of Herlihy's hierarchy, which are also infinite wait-free hierarchies. With the exception of one, which is still open, these are not robust either. We conclude with the open question of whether non-trivial robust wait-free hierarchies exist.

  8. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

In computer vision systems, robustly reconstructing the complex 3D geometry of automobile castings is a challenging task: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which commonly leads to incomplete matching and drift. To solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To tolerate sensor noise and incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of a casting. The proposed network combines a geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be embedded directly into 3D space to learn the geometric feature representation. Finally, training labels are generated automatically for deep learning from an existing RGB-D reconstruction algorithm, which yields the same global key-matching descriptors. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse casting geometries where initial matching fails, the 3D object can still be reconstructed robustly by training the key descriptors. Our method performs robust 3D reconstruction for complex automobile castings.
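A truncated distance field around a key point can be sampled from a point cloud roughly as follows; the grid size, voxel size, and truncation value are illustrative choices, not the paper's parameters:

```python
import numpy as np

def local_tdf(points, center, grid=8, voxel=0.1, trunc=0.2):
    """Sample a grid x grid x grid truncated distance field around
    `center`: each voxel stores the distance to the nearest cloud
    point, clipped at `trunc` so distant surface does not dominate
    the local descriptor."""
    half = grid * voxel / 2.0
    axes = np.linspace(-half + voxel / 2, half - voxel / 2, grid)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1) + center  # voxel centres
    # Distance from every voxel centre to the nearest cloud point.
    d = np.linalg.norm(centers[..., None, :] - points, axis=-1).min(axis=-1)
    return np.minimum(d, trunc)

# Toy cloud: the plane z = 0 sampled on a grid; voxels near the
# plane get small distances, voxels far from it saturate at trunc.
xs = np.linspace(-1, 1, 21)
pts = np.array([[x, y, 0.0] for x in xs for y in xs])
tdf = local_tdf(pts, center=np.zeros(3))
print(tdf.shape, float(tdf.min()), float(tdf.max()))
```

In a learning pipeline each such grid becomes one input volume to the 3D convolutional network, one per key point.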

  9. Features of the mental status of patients with community-acquired pneumonia, combined with a chronic pathology of the hepatobiliary system of non-viral genesis

    Directory of Open Access Journals (Sweden)

    Razumnyi R.V.

    2017-04-01

The aim was to study the characteristics of the mental status of patients with community-acquired pneumonia (CAP) combined with chronic non-viral pathology of the hepatobiliary system. We observed 165 patients with CAP aged 25-57 years. All patients were divided into two representative groups: group I (68 patients), in whom CAP was comorbid with hepatic steatosis (HS), and group II (96 patients), with no chronic liver disease. To evaluate the psychological profile of the patients' personality we used a standardized multivariate method of personality research, and to evaluate the levels of anxiety and depression we used the Spielberger-Hanin test and Beck's questionnaire. The study revealed that 66.2% of patients with CAP comorbid with HS developed psycho-emotional disorders in the form of neurotic reactions to the disease, with a prevalence of hypochondria, depression, and hysterical manifestations with high psychasthenia and trait anxiety, and somatic reactions dominated by anxiety and emotional instability, numerous somatic complaints, and fixation on their own condition, forming a distinctive mode of thinking and behavior of the "flight into disease" type. After completion of standard treatment of patients with CAP combined with HS, 42.7% of patients still had moderately expressed psycho-emotional disorders. Thus, in the complex of treatment and rehabilitation measures for patients with CAP combined with HS, these characteristics of the psychological profile, in the form of psycho-neurotic reactions to the disease, should be considered in order to take optimal corrective action.

  10. Feature Article

    Indian Academy of Sciences (India)

Articles in Resonance – Journal of Science Education. Volume 1 Issue 1 January 1996 pp 80-85, Feature Article: What's New in Computers: Windows 95, Vijnan Shastri. Volume 1 Issue 1 January 1996 pp 86-89, Feature ...

  11. Interlobate esker architecture and related hydrogeological features derived from a combination of high-resolution reflection seismics and refraction tomography, Virttaankangas, southwest Finland

    Science.gov (United States)

    Maries, Georgiana; Ahokangas, Elina; Mäkinen, Joni; Pasanen, Antti; Malehmir, Alireza

    2017-05-01

    A novel high-resolution (2-4 m source and receiver spacing) reflection and refraction seismic survey was carried out for aquifer characterization and to confirm the existing depositional model of the interlobate esker of Virttaankangas, which is part of the Säkylänharju-Virttaankangas glaciofluvial esker-chain complex in southwest Finland. The interlobate esker complex hosting the managed aquifer recharge (MAR) plant is the source of the entire water supply for the city of Turku and its surrounding municipalities. An accurate delineation of the aquifer is therefore critical for long-term MAR planning and sustainable use of the esker resources. Moreover, an additional target was to resolve the poorly known stratigraphy of the 70-100-m-thick glacial deposits overlying a zone of fractured bedrock. Bedrock surface as well as fracture zones were confirmed through combined reflection seismic and refraction tomography results and further validated against existing borehole information. The high-resolution seismic data proved successful in accurately delineating the esker cores and revealing complex stratigraphy from fan lobes to kettle holes, providing valuable information for potential new pumping wells. This study illustrates the potential of geophysical methods for fast and cost-effective esker studies, in particular the digital-based landstreamer and its combination with geophone-based wireless recorders, where the cover sediments are reasonably thick.

  12. Combination of support vector machine, artificial neural network and random forest for improving the classification of convective and stratiform rain using spectral features of SEVIRI data

    Science.gov (United States)

    Lazri, Mourad; Ameur, Soltane

    2018-05-01

A model combining three classifiers, namely support vector machine, artificial neural network and random forest (SAR), is designed for improving the classification of convective and stratiform rain. This model (SAR model) has been trained and then tested on datasets derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified using a novel training of the developed scheme. In this novel training, only the input data corresponding to the pixels in question are used. This whole process is repeated a second time and applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are realized against co-located data observed by ground radar. The developed scheme outperformed the individual classifiers used separately and reached an overall classification accuracy of 97.40%.
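The well-/mid-/misclassified split can be driven by agreement among the three classifiers. The toy combiner below (hypothetical labels, simple majority vote as the combination rule) marks unanimous pixels as reliable and flags the rest for the reclassification pass:

```python
def combine_votes(svm, ann, rf):
    """Combine three per-pixel class votes. Returns (label, reliable):
    unanimous pixels are reliable ("well classified"); majority
    pixels get the majority label but are flagged for the
    reclassification pass; three-way splits stay unlabeled."""
    combined = []
    for votes in zip(svm, ann, rf):
        counts = {v: votes.count(v) for v in set(votes)}
        label, n = max(counts.items(), key=lambda kv: kv[1])
        if n == 3:
            combined.append((label, True))      # well classified
        elif n == 2:
            combined.append((label, False))     # mid classified
        else:
            combined.append((None, False))      # misclassified / unknown
    return combined

# 'C' = convective, 'S' = stratiform rain pixels (hypothetical data).
svm = ["C", "C", "S", "C"]
ann = ["C", "S", "S", "S"]
rf  = ["C", "C", "S", "S"]
print(combine_votes(svm, ann, rf))
```

In the SAR scheme the flagged pixels would then be re-fed, on their own, through a second round of training rather than simply taking the majority label.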

  13. Combining Boosted Global

    Directory of Open Access Journals (Sweden)

    Szidónia Lefkovits

    2011-06-01

The domain of object detection attracts wide interest due to its numerous application possibilities, especially real-time applications, all of which require a high detection rate combined with short processing time. One of the most efficient systems working with visual information was presented by Viola et al. [1], [2]. This detection system uses classifiers based on Haar-like separating features combined with the AdaBoost learning algorithm. The most important bottleneck of the system is the large number of false detections at high hit rates. In this paper we propose to overcome this disadvantage by using specialized part classifiers. This aim comes from the observation that the target object does not resemble the false detections at all. The reason for this is the coding manner of Haar-like features, which tend to capture image patches while neglecting edges and contours. In order to obtain a more robust classifier, a global-aspect method is combined with a part-based method, with the goal of improving the performance of the detector without a significant increase in detection time.

  14. Supramolecular features of 2-(chlorophenyl)-3-[(chlorobenzylidene)-amino]-2,3-dihydroquinazolin-4(1H)-ones: A combined experimental and computational study

    Science.gov (United States)

    Mandal, Arkalekha; Patel, Bhisma K.

    2018-03-01

The molecular structures of two isomeric 2-(chlorophenyl)-3-[(chlorobenzylidene)-amino] substituted 2,3-dihydroquinazolin-4(1H)-ones have been determined via single crystal XRD. Both isomers contain chloro substitutions on each of the phenyl rings and as a result a broad spectrum of halogen-mediated weak interactions are viable in their crystal structures. The crystal packing of these compounds is stabilized by strong N-H⋯O hydrogen bonds and various weak, non-classical hydrogen bonds acting synergistically. Both molecules contain a chiral center, and the weak interactions observed in them are either chiral self-discriminatory or chiral self-recognizing in nature. The weak interactions and spectral features of the compounds have been studied through experimental as well as computational methods including DFT, MEP, NBO and Hirshfeld surface analyses. In addition, the effect of different weak interactions in dictating either chiral self-recognition or self-discrimination in crystal packing has been elucidated.

  15. Robust estimation of seismic coda shape

    Science.gov (United States)

    Nikkilä, Mikko; Polishchuk, Valentin; Krasnoshchekov, Dmitry

    2014-04-01

We present a new method for estimation of seismic coda shape. It falls into the same class of methods as non-parametric shape reconstruction with the use of neural network techniques, where data are split into training and validation sets. We particularly pursue the well-known problem of image reconstruction, formulated in this case as shape isolation in the presence of broadly defined noise. This combined approach is enabled by an intrinsic feature of the seismogram, which can be divided objectively into pre-signal seismic noise that lacks the target shape, and the remainder, which contains the scattered waveforms compounding the coda shape. In short, we separately apply the shape restoration procedure to the pre-signal seismic noise and to the event record, which provides successful delineation of the coda shape in the form of a smooth, almost non-oscillating function of time. The new algorithm uses a recently developed generalization of the classical computational-geometry tool of the α-shape. The generalization essentially yields robust shape estimation by locally ignoring a number of points treated as extreme values, noise or non-relevant data. Our algorithm is conceptually simple and enables a desired or pre-determined level of shape detail, constrainable by an arbitrary data-fit criterion. The proposed tool for coda shape delineation provides an alternative to moving averaging and/or other smoothing techniques frequently used for this purpose. The new algorithm is illustrated with an application to the problem of estimating the coda duration after a local event. The obtained relation coefficient between coda duration and epicentral distance is consistent with earlier findings in the region of interest.
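The idea of robust shape estimation by locally ignoring extreme points has a crude 1-D analogue in a moving trimmed maximum, sketched below. This illustrates the principle only and is not the authors' α-shape construction:

```python
import numpy as np

def trimmed_envelope(x, window=11, k=2):
    """Moving maximum that ignores the k largest samples in each
    window, so isolated spikes do not distort the estimated coda
    shape -- a 1-D caricature of locally discarding extreme points."""
    half = window // 2
    pad = np.pad(np.abs(x), half, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        w = np.sort(pad[i:i + window])
        out[i] = w[-1 - k]          # (k+1)-th largest value in window
    return out

# Decaying coda with two outlier spikes; the trimmed envelope
# follows the decay instead of jumping to the spikes.
t = np.arange(200)
coda = np.exp(-t / 80.0)
noisy = coda.copy()
noisy[[50, 120]] = 5.0
env = trimmed_envelope(noisy)
print(float(env.max()))
```

A plain moving maximum (k = 0) would trace the spikes; a moving average would smear them into the shape. Discarding a few local extremes sidesteps both failure modes.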

  16. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    In Augmented Reality applications, human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene, and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor in providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method with an optical flow approach that tracks visible points and maintains the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.
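    The predict/correct filtering idea, with prediction carrying the track through occlusion, can be sketched with a textbook constant-velocity Kalman filter. All matrices and noise levels below are illustrative assumptions, not the paper's hybrid pose estimator:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter matrices for 2D target tracking.

    State = [x, y, vx, vy]; only the position is measured.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                            # position integrates velocity
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0     # measurement picks position
    Q = q * np.eye(4)                                 # process noise (assumed)
    R = r * np.eye(2)                                 # measurement noise (assumed)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; pass z=None during occlusion (predict only)."""
    x = F @ x                                         # predict state
    P = F @ P @ F.T + Q
    if z is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z - H @ x)                       # correct with the measurement
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
track = []
for k in range(20):
    # measurements drop out for 4 frames, mimicking a target occlusion
    z = None if 8 <= k < 12 else np.array([k + 0.0, 0.5 * k])
    x, P = kalman_step(x, P, z, F, H, Q, R)
    track.append(x[:2].copy())
```

    During the dropout the filter keeps extrapolating with the estimated velocity, which is the role the paper assigns to its occlusion-handling stage.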

  17. Male genital dermatophytosis - clinical features and the effects of the misuse of topical steroids and steroid combinations - an alarming problem in India.

    Science.gov (United States)

    Verma, Shyam B; Vasani, Resham

    2016-10-01

    Genital dermatophytosis has been considered rare by most Western authorities. To the contrary, Indian reports have shown a higher prevalence of genital dermatophytosis due to the warm and humid climate, overcrowding and lack of hygiene. A review is presented of 24 cases of male genital dermatophytosis occurring in patients suffering from tinea cruris in India who had been indiscriminately applying various broad-spectrum creams containing one or more antifungals and antibiotics in addition to potent corticosteroids, mainly clobetasol propionate. This phenomenon is so common that Indian dermatologists are witnessing an epidemic of sorts of steroid-modified dermatophytosis, and we hereby share the various clinical presentations of dermatophytosis of the penis and/or scrotum in patients with tinea cruris who had been applying the above-mentioned creams. The review also discusses the bleak scenario that prevails in India regarding drug regulatory affairs, which allows such dangerous and irrational combinations to be sold over the counter because of misinterpretation of the law and lax implementation of existing laws. © 2016 Blackwell Verlag GmbH.

  18. Unveiling the chemical and morphological features of Sb:SnO2 nanocrystals by the combined use of HRTEM and Ab Initio surface energy calculations

    International Nuclear Information System (INIS)

    Stroppa, Daniel G.; Montoro, Luciano A.; Ramirez, Antonio J.; Beltran, Armando; Andres, Juan; Conti, Tiago G.; Silva, Rafael O. da; Longo, Elson; Leite, Edson R.

    2009-01-01

    Modeling of nanocrystals supported by advanced morphological and chemical characterization is a unique tool for the development of reliable nanostructured devices, which depends on the ability to synthesize and characterize materials at the atomic scale. Among the most significant challenges in nanostructural characterization is the evaluation of crystal growth mechanisms and their dependence on the shape of nanoparticles and the distribution of doping elements. This work presents a new strategy to characterize nanocrystals, applied here to antimony-doped tin oxide (Sb:SnO2, ATO), by the combined use of experimental and simulated high-resolution transmission electron microscopy (HRTEM) images and ab initio surface energy calculations. The results show that the Wulff construction can not only describe the shape of nanocrystals as a function of the surface energy distribution but also retrieve quantitative information on dopant distribution through dimensional analysis of nanoparticle shapes. In addition, a novel three-dimensional evaluation of an oriented-attachment growth mechanism is provided by the proposed methodology. This procedure is a useful approach for faceted nanocrystal shape modeling and indirect quantitative evaluation of dopant spatial distribution, which are difficult to evaluate by other techniques. (author)

  19. Effect of different temperature-time combinations on physicochemical, microbiological, textural and structural features of sous-vide cooked lamb loins.

    Science.gov (United States)

    Roldán, Mar; Antequera, Teresa; Martín, Alberto; Mayoral, Ana Isabel; Ruiz, Jorge

    2013-03-01

    Lamb loins were subjected to sous-vide cooking at different combinations of temperature (60, 70, and 80 °C) and time (6, 12, and 24 h), and different physicochemical, histological and structural parameters were studied. Increasing cooking temperature led to higher weight losses and lower moisture contents, whereas the effect of cooking time on these variables was limited. Samples cooked at 60 °C showed the highest lightness and redness, while increasing cooking temperature and time produced higher yellowness values. Most textural variables in a texture profile analysis showed a marked interaction between cooking temperature and time. Samples cooked for 24 h showed significantly lower values for most of the studied textural parameters at all temperatures considered. Connective tissue granulation at 60 °C and gelation at 70 °C were observed in the SEM micrographs. Sous-vide cooking of lamb loins dramatically reduced the microbial population even with the least intense heat treatment studied (60 °C for 6 h). Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. A comparative study of some robust ridge and Liu estimators

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    …estimation techniques such as the Ridge and Liu estimators are preferable to Ordinary Least Squares. On the other hand, when outliers exist in the data, robust estimators like the M, MM, LTS and S estimators are preferred. To handle these two problems jointly, the study combines the Ridge and Liu estimators with robust estimators.
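    The two shrinkage families named in the snippet can be written down directly. A sketch on synthetic, nearly collinear data follows; the biasing constants k and d are arbitrary illustrative choices, and the robust combination itself is not reproduced:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge estimator: (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu(X, y, d):
    """Liu estimator: (X'X + I)^{-1} (X'y + d * beta_OLS)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * ols(X, y))

# ill-conditioned (nearly collinear) design, where OLS variance explodes
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=200)

b_ols, b_ridge, b_liu = ols(X, y), ridge(X, y, 0.1), liu(X, y, 0.5)
```

    Both shrinkage estimators damp the unstable direction of the nearly singular design, which is exactly the multicollinearity problem the snippet refers to.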

  1. GFC-Robust Risk Management Strategies under the Basel Accord

    NARCIS (Netherlands)

    M.J. McAleer (Michael); J.A. Jiménez-Martín (Juan-Ángel); T. Pérez-Amaral (Teodosio)

    2010-01-01

    A risk management strategy is proposed as being robust to the Global Financial Crisis (GFC) by selecting a Value-at-Risk (VaR) forecast that combines the forecasts of different VaR models. The robust forecast is based on the median of the point VaR forecasts of a set of conditional…
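    The median-combination strategy itself is a one-liner. In the sketch below the model labels and all numbers are hypothetical:

```python
import numpy as np

def robust_var_forecast(var_forecasts):
    """Combine point VaR forecasts from several models via their median.

    Sketch of the paper's idea: the median of the model forecasts is
    insensitive to any single model producing an extreme value in a crisis.
    """
    return np.median(np.asarray(var_forecasts), axis=0)

# hypothetical 1-day-ahead VaR forecasts (as positive losses) from 5 models,
# for 3 consecutive days
models = np.array([
    [2.1, 2.3, 2.2],   # e.g. a GARCH-type model
    [2.0, 2.2, 2.1],   # another conditional volatility model
    [2.4, 2.5, 2.6],   # a riskmetrics-style model
    [1.9, 2.0, 2.1],   # historical simulation
    [9.0, 8.5, 9.2],   # a model blowing up during the crisis
])
combined = robust_var_forecast(models)
```

    The combined forecast ignores the blown-up model entirely, which is the robustness property the abstract claims for the median.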

  2. Robust Forecasting of Non-Stationary Time Series

    NARCIS (Netherlands)

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable
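    The paper's localized MM-estimator is not reproduced here; a plain Huber M-estimator fitted by iteratively reweighted least squares shows the same robustness principle on synthetic data with gross outliers:

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Robust linear fit by iteratively reweighted least squares (Huber loss).

    Simpler than the paper's localized MM-estimator, but it illustrates the
    same principle: outlying observations are down-weighted rather than
    dominating the fit.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = r / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))          # Huber weights
        W = w[:, None] * X
        beta = np.linalg.solve(X.T @ W, X.T @ (w * y))
    return beta

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
X = np.column_stack([np.ones_like(t), t])
y = 1.0 + 2.0 * t + 0.05 * rng.normal(size=t.size)
y[::10] += 5.0                                     # 10% gross outliers
beta_robust = huber_irls(X, y)
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
```

    The least-squares fit is dragged upward by the outliers, while the reweighted fit stays near the true intercept and slope.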

  3. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to the selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).

  4. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  5. Passion, Robustness and Perseverance

    DEFF Research Database (Denmark)

    Lim, Miguel Antonio; Lund, Rebecca

    2016-01-01

    Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the "ideal academic". We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion and pleasure achieve an exalted status as something compulsory: the scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure… way to demonstrate their potential and, crucially, their passion for their work. Drawing on the literature on technologies of governance, we reflect on what is captured and what is left out by these two evaluation instruments. We suggest that bibliometric analysis at the individual level is deeply…

  6. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
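    For contrast with the Brox method, the Horn and Schunck baseline it improves on fits in a few lines. This is a minimal sketch (quadratic penalties, Jacobi iterations, no warping pyramid or L1 terms) on a synthetic one-pixel shift:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow (the baseline improved on by Brox et al.).

    Brightness constancy plus a quadratic smoothness term, solved by Jacobi
    iterations; illustrative only (no robustness to outliers).
    """
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    neighbor_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                              np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# synthetic pair: a smooth blob shifted one pixel to the right
xg, yg = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.exp(-((xg - 30) ** 2 + (yg - 32) ** 2) / 50.0)
I2 = np.exp(-((xg - 31) ** 2 + (yg - 32) ** 2) / 50.0)
u, v = horn_schunck(I1, I2, alpha=0.5, n_iter=200)
```

    The recovered flow points rightward inside the blob; the quadratic penalties that make this solver simple are precisely what the L1/TV terms of the Brox method replace.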

  7. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  8. Robust snapshot interferometric spectropolarimetry.

    Science.gov (United States)

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object, from which an accurate Stokes vector can be calculated in the spectral domain. It is inherently robust to 3D pose variation of the object, since it is designed so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of milliseconds with high accuracy.

  9. Robust and Reusable Fuzzy Extractors

    Science.gov (United States)

    Boyen, Xavier

    The use of biometric features as key material in security protocols has often been suggested to relieve their owner from the need to remember long cryptographic secrets. The appeal of biometric data as cryptographic secrets stems from their high apparent entropy, their availability to their owner, and their relative immunity to loss. In particular, they constitute a very effective basis for user authentication, especially when combined with complementary credentials such as a short memorized password or a physical token. However, the use of biometrics in cryptography does not come without problems. Some difficulties are technical, such as the lack of uniformity and the imperfect reproducibility of biometrics, but some challenges are more fundamental.

  10. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  11. Dry syngas purification process for coal gas produced in oxy-fuel type integrated gasification combined cycle power generation with carbon dioxide capturing feature.

    Science.gov (United States)

    Kobayashi, Makoto; Akiho, Hiroyuki

    2017-12-01

    Electricity production from coal with a minimal efficiency penalty for carbon dioxide abatement will enable sustainable and compatible energy utilization. One promising option is oxy-fuel type Integrated Gasification Combined Cycle (oxy-fuel IGCC) power generation, which is estimated to achieve a thermal efficiency of 44% on a lower heating value (LHV) basis and to provide compressed carbon dioxide (CO2) at a concentration of 93 vol%. Proper operation of the plant is established by introducing dry syngas cleaning processes that control halide and sulfur compounds so as to satisfy the gas turbine's tolerable contaminant levels. To realize the dry process, a bench-scale test facility was planned to demonstrate the first-ever halide and sulfur removal in a fixed-bed reactor using actual syngas from an O2-CO2 blown gasifier for oxy-fuel IGCC power generation. Design parameters for the test facility were required for the candidate halide-removal and sulfur-removal sorbents. Breakthrough tests were performed on two kinds of halide sorbents under accelerated conditions and on a honeycomb desulfurization sorbent under varied space velocity conditions. Both the halide and sulfur sorbents exhibited sufficient removal within a satisfactorily short sorbent-bed depth, as well as superior bed conversion of the impurity removal reaction. These performance evaluations of the candidate halide- and sulfur-removal sorbents provided rational and affordable design parameters for the bench-scale test facility to demonstrate the dry syngas cleaning process for the oxy-fuel IGCC system as the scaled-up step of process development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2015-01-01

    Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of exploiting multilevel contextual information in the change decision. Most change feature extraction techniques put emphasis on describing the change degree (i.e., to what degree the changes have happened), while they ignore describing the change pattern (i.e., how the changes have changed), which is of equal importance in characterizing change signatures. Moreover, the simultaneous consideration of classification robust to registration noise and of multiscale region-consistent fusion is often neglected in the change decision. To overcome such drawbacks, in this paper a novel VHR image change detection method is proposed based on a sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, implemented by superpixel-level cosparse representation with a robust discriminative dictionary and a conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique.

  13. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  14. Identification of the Structural Features of Guanine Derivatives as MGMT Inhibitors Using 3D-QSAR Modeling Combined with Molecular Docking

    Directory of Open Access Journals (Sweden)

    Guohui Sun

    2016-06-01

    DNA repair enzyme O6-methylguanine-DNA methyltransferase (MGMT), which plays an important role in inducing drug resistance against alkylating agents that modify the O6 position of guanine in DNA, is an attractive target for anti-tumor chemotherapy. A series of MGMT inhibitors have been synthesized over the past decades to improve the chemotherapeutic effects of O6-alkylating agents. In the present study, we performed a three-dimensional quantitative structure-activity relationship (3D-QSAR) study on 97 guanine derivatives as MGMT inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Three different alignment methods (ligand-based, DFT-optimization-based and docking-based alignment) were employed to develop reliable 3D-QSAR models. Statistical parameters showed that the ligand-based CoMFA (Qcv2 = 0.672 and Rncv2 = 0.997) and CoMSIA (Qcv2 = 0.703 and Rncv2 = 0.946) models were better than the CoMFA and CoMSIA models based on the other two alignment methods. The two ligand-based models were further confirmed by an external test-set validation and a Y-randomization examination. The ligand-based CoMFA model showed acceptable external test-set validation values (Qext2 = 0.691, Rpred2 = 0.738 and slope k = 0.91), whereas the CoMSIA model did not (Qext2 = 0.307, Rpred2 = 0.4 and slope k = 0.719). Docking studies were carried out to predict the binding modes of the inhibitors with MGMT. The results indicated that the obtained binding interactions were consistent with the 3D contour maps. Overall, the combined results of the 3D-QSAR and docking studies provide insight into the interactions between guanine derivatives and the MGMT protein, which will assist in designing novel MGMT inhibitors with the desired activity.
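    The quoted Qcv2 values are leave-one-out cross-validated coefficients of determination. A generic sketch of that statistic follows, with an ordinary least-squares model and synthetic descriptors standing in for the CoMFA/CoMSIA fields:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / SS_tot.

    Generic illustration of the Qcv^2 statistic reported in QSAR studies;
    a plain linear model stands in for the actual field-based model.
    """
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]  # fit without i
        press += (y[i] - X[i] @ beta) ** 2                        # predict i
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot

# synthetic "descriptors" and activities (hypothetical, for illustration only)
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 3))])
y = X @ np.array([0.5, 1.0, -2.0, 0.3]) + 0.2 * rng.normal(size=40)
q2 = q2_loo(X, y)
```

    Values near 1 indicate that predictions for held-out compounds remain accurate, which is why a Qcv2 above roughly 0.5 is conventionally taken as evidence of internal predictivity.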

  15. Interactive music composition driven by feature evolution.

    Science.gov (United States)

    Kaliakatsos-Papakostas, Maximos A; Floros, Andreas; Vrahatis, Michael N

    2016-01-01

    Evolutionary music composition is a prominent technique for automatic music generation. The immense adaptation potential of evolutionary algorithms has allowed the realisation of systems that automatically produce music through feature-based and interactive composition approaches. Feature-based composition employs qualitatively descriptive music features as fitness landmarks, while interactive composition systems derive fitness directly from human ratings and/or selection. The paper at hand introduces a methodological framework that combines the merits of both evolutionary composition methodologies. To this end, a system is presented that is organised in two levels: the higher level of interaction and the lower level of composition. The higher level incorporates the particle swarm optimisation algorithm, along with a proposed variant, and evolves musical features according to user ratings. The lower level realises feature-based music composition with a genetic algorithm, according to the top-level features. The aim of this work is not to validate the efficiency of the currently utilised setup at each level, but to examine the convergence behaviour of such a two-level technique in an objective manner. Therefore, an additional novelty of this work concerns the utilisation of artificial raters that guide the system through the space of musical features, allowing the exploration of its convergence characteristics: does the system converge to optimal melodies, is this convergence fast enough for potential human listeners, and is the trajectory to convergence "interesting" and "creative" enough? The experimental results reveal that the proposed methodological framework represents a fruitful and robust novel approach to interactive music composition.
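    The lower level, feature-based composition with a genetic algorithm, can be sketched with a toy example in which two simple features (mean pitch and pitch range) stand in for the paper's feature set; all parameters are illustrative assumptions:

```python
import random

def features(melody):
    """Two toy descriptive features: mean pitch and pitch range (MIDI numbers)."""
    return (sum(melody) / len(melody), max(melody) - min(melody))

def fitness(melody, target):
    """Closeness of the melody's features to the target feature vector."""
    f = features(melody)
    return -abs(f[0] - target[0]) - abs(f[1] - target[1])

def evolve_melody(target, length=16, pop_size=60, n_gen=200, seed=0):
    """Feature-based composition with a toy genetic algorithm (elitist)."""
    rng = random.Random(seed)
    pop = [[rng.randint(48, 84) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda m: fitness(m, target), reverse=True)
        parents = pop[: pop_size // 2]                # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if rng.random() < 0.3:                    # random pitch mutation
                child[rng.randrange(length)] = rng.randint(48, 84)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, target))

# ask for melodies whose mean pitch is ~66 and pitch range is ~12 semitones
best = evolve_melody(target=(66.0, 12.0))
```

    In the paper's framework the target feature vector would itself be evolved by the higher, interaction-driven level rather than fixed by hand as here.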

  16. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla

    2015-08-12

    This paper lies in the lineage of recent works studying the asymptotic behaviour of robust scatter estimators in the regime where the number of observations and the dimension of the population covariance matrix grow to infinity at the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of the covariance matrix. We show that this result can be leveraged to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations with optimality across the shrinkage parameter, the optimality being considered for false alarm regulation.
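    The class of robust shrinkage scatter estimators studied here can be illustrated with a regularized Tyler-type fixed-point iteration; this is a sketch with assumed illustrative parameters, not the paper's exact estimator or its asymptotic analysis:

```python
import numpy as np

def regularized_tyler(X, rho=0.3, n_iter=50):
    """Robust shrinkage scatter estimator (regularized Tyler fixed point).

    Each sample is weighted by the inverse of its current Mahalanobis norm
    (robustness to impulsive observations), and the weighted scatter is
    shrunk toward the identity; the trace is renormalized to fix the scale.
    """
    n, p = X.shape
    C = np.eye(p)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        d = np.einsum("ij,jk,ik->i", X, Cinv, X)   # x_i' C^{-1} x_i
        S = (X.T / d) @ X * (p / n)                # weighted sample scatter
        C = (1 - rho) * S + rho * np.eye(p)        # shrink toward identity
        C = p * C / np.trace(C)                    # fix the scale
    return C

rng = np.random.default_rng(4)
p, n = 5, 200
true_C = np.diag([4.0, 2.0, 1.0, 1.0, 1.0])
X = rng.normal(size=(n, p)) @ np.sqrt(true_C)
X[::20] *= 50.0                                    # impulsive observations
C_hat = regularized_tyler(X)
```

    Because the weights normalize each sample's radial magnitude, the impulsive rows do not distort the estimated shape, unlike the plain sample covariance.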

  17. Robust photometric stereo using structural light sources

    Science.gov (United States)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method using a structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. Shadow pixels are detected by adaptive thresholding. Specular highlight and diffuse pixels are distinguished according to the intensity deviations of their collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover surface normal maps in the presence of heavy specular reflection and outperforms state-of-the-art techniques.
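    Leaving aside the structured-light classification stage and the ℓ1 term, the least-squares core of Lambertian photometric stereo that such methods build on solves I = L n per pixel; a sketch on synthetic data with hypothetical light directions:

```python
import numpy as np

def photometric_stereo(L, I):
    """Classical least-squares photometric stereo for a Lambertian surface.

    Baseline sketch without the paper's highlight/shadow handling:
    L is (m, 3) light directions, I is (m, npix) observed intensities.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, npix): albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    N = G / np.maximum(albedo, 1e-12)              # unit surface normals
    return N, albedo

# synthetic Lambertian data: 4 hypothetical lights, 2 pixels, albedo 0.9
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]]).T
I = np.clip(L @ true_n, 0, None) * 0.9             # clip models attached shadows
N, albedo = photometric_stereo(L, I)
```

    Specular highlights and shadows violate this Lambertian model, which is exactly why the paper adds per-pixel classification and ℓ1 regularization on top of this least-squares core.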

  18. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state to perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) a constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it…
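    The masking effect can be mimicked with a toy Euler simulation of a three-stage linear cascade (illustrative equations and parameters, not the paper's model): when only the last module is slow, perturbing an upstream rate barely changes the output's duration:

```python
import numpy as np

def simulate_cascade(gammas, stim_dur=5.0, t_end=60.0, dt=0.01):
    """Three-stage linear cascade driven by a square stimulus.

    Toy model in the spirit of the abstract: dx_i/dt = x_{i-1} - gamma_i * x_i,
    with x_0 the stimulus; explicit Euler integration.
    """
    n = int(t_end / dt)
    t = np.arange(n) * dt
    x = np.zeros((n, 3))
    for k in range(1, n):
        stim = 1.0 if t[k] < stim_dur else 0.0
        inputs = np.array([stim, x[k - 1, 0], x[k - 1, 1]])
        x[k] = x[k - 1] + dt * (inputs - gammas * x[k - 1])
    return t, x

def response_duration(t, xi, frac=0.5):
    """Time the output stays above a fraction of its peak."""
    above = xi >= frac * xi.max()
    return t[above][-1] - t[above][0] if above.any() else 0.0

t, x_base = simulate_cascade(np.array([1.0, 1.0, 0.1]))   # slow last module
t, x_pert = simulate_cascade(np.array([3.0, 1.0, 0.1]))   # perturb upstream rate
d_base = response_duration(t, x_base[:, 2])
d_pert = response_duration(t, x_pert[:, 2])
```

    The output amplitude changes with the upstream perturbation, but its duration is set mainly by the rate-limiting last module, which is the flavor of dynamics robustness the abstract describes.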

  19. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold-learning-based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions of the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold-learning-based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from existing tensor learning methods, L1- and L2-norms are introduced to enhance robustness in the alignment step of STA. The advantage of the proposed technique is that the difficulty of selecting the size of the local neighborhood in manifold-learning-based tensor feature extraction algorithms can be avoided. Although STA is an unsupervised learning method, the sparsity encodes discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on well-known image databases, as well as action and hand-gesture databases, with object images encoded as tensors, demonstrate that the proposed STA algorithm gives the most competitive performance when compared with tensor-based unsupervised learning methods.

  20. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal in which a patient at a mental institution was wrongfully convicted of eight murders.

  1. Solar Features

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of solar feature datasets contributed by a number of national and private solar observatories located worldwide.

  2. Site Features

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset consists of various site features from multiple Superfund sites in U.S. EPA Region 8. These data were acquired from multiple sources at different times...

  3. Robust filtering for uncertain systems a parameter-dependent approach

    CERN Document Server

    Gao, Huijun

    2014-01-01

    This monograph provides the reader with a systematic treatment of robust filter design, a key issue in systems, control and signal processing, because the inevitable presence of uncertainty in system and signal models often degrades the filtering performance and may even cause instability. The methods described are therefore not subject to the rigorous assumptions of traditional Kalman filtering. The monograph is concerned with robust filtering for various dynamical systems with parametric uncertainties, and focuses on parameter-dependent approaches to filter design. Classical filtering schemes, like H2 filtering and H∞ filtering, are addressed, and emerging issues such as robust filtering with constraints on communication channels and signal frequency characteristics are discussed. The text features: ·        design approaches to robust filters arranged according to varying complexity level, and emphasizing robust filtering in the parameter-dependent framework for the first time; ·...

  4. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has presented an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...

  5. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    ved for the isolated English text, but for the handwritten Devanagari script it is not ... characters, lack of standard benchmarking and ground truth dataset, lack of ..... theory, proposed by Glen Shafer as a way to represent cognitive knowledge.

  6. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
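The nested penalized-smoothing scheme described above is specific to the paper, but the underlying goal, estimating ODE parameters from noisy data containing outliers, can be sketched with off-the-shelf tools. The example below is an illustrative stand-in (the logistic model, parameter values, and outlier pattern are all invented for the sketch), fitting a logistic-growth ODE with SciPy's soft-L1 robust loss:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hedged sketch: robust estimation of logistic-growth ODE parameters from
# noisy data with gross outliers, using a soft-L1 loss. This is a simplified
# single-level fit, not the paper's nested penalized-smoothing scheme.

def logistic_rhs(t, x, r, K):
    return r * x * (1 - x / K)

def simulate(r, K, t_eval, x0=0.1):
    sol = solve_ivp(logistic_rhs, (t_eval[0], t_eval[-1]), [x0],
                    t_eval=t_eval, args=(r, K), rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)
true_r, true_K = 0.8, 5.0
y = simulate(true_r, true_K, t) + rng.normal(0, 0.05, t.size)
y[::10] += 3.0  # inject a few gross outliers

def residuals(theta):
    return simulate(theta[0], theta[1], t) - y

fit = least_squares(residuals, x0=[0.5, 3.0], loss='soft_l1', f_scale=0.1,
                    bounds=([0.01, 0.1], [5.0, 20.0]))
r_hat, K_hat = fit.x
print(r_hat, K_hat)
```

The soft-L1 loss grows linearly rather than quadratically for large residuals, so the injected outliers barely pull the parameter estimates.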

  7. Combining the Sterile Insect Technique with the Incompatible Insect Technique: III-Robust Mating Competitiveness of Irradiated Triple Wolbachia-Infected Aedes albopictus Males under Semi-Field Conditions.

    Science.gov (United States)

    Zhang, Dongjing; Lees, Rosemary Susan; Xi, Zhiyong; Bourtzis, Kostas; Gilles, Jeremie R L

    2016-01-01

    Combination of the sterile insect technique with the incompatible insect technique is considered to be a safe approach to control Aedes albopictus populations in the absence of an accurate and scalable sex separation system or genetic sexing strain. Our previous study has shown that the triple Wolbachia-infected Ae. albopictus strain (wAlbA, wAlbB and wPip) was suitable for mass rearing and females could be completely sterilized as pupae with a radiation dose of at least 28 Gy. However, whether this radiation dose can influence the mating competitiveness of the triple infected males was still unknown. In this study we aimed to evaluate the effects of irradiation on the male mating competitiveness of the triple infected strain under laboratory and semi-field conditions. The results herein indicate that irradiation with a lower, female-sterilizing dose has no negative impact on the longevity of triple infected males while a reduced lifespan was observed in the wild type males (wAlbA and wAlbB) irradiated with a higher male-sterilizing dose, in small cages. At different sterile: fertile release ratios in small cages, triple-infected males induced 39.8, 81.6 and 87.8% sterility in a wild type female population at 1:1, 5:1 and 10:1 release ratios, respectively, relative to a fertile control population. Similarly, irradiated triple infected males induced 31.3, 70.5 and 89.3% sterility at 1:1, 5:1 and 10:1 release ratios, respectively, again relative to the fertile control. Under semi-field conditions at a 5:1 release ratio, relative to wild type males, the mean male mating competitiveness index of 28 Gy irradiated triple-infected males was significantly higher than 35 Gy irradiated wild type males, while triple infected males showed no difference in mean mating competitiveness to either irradiated triple-infected or irradiated wild type males. An unexpected difference was also observed in the relative male mating competitiveness of the triple infected strain after

  8. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general class of models, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  9. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  10. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure.

  11. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameters, large sample properties, and goodness-of-fit tests. It...
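The M-estimation machinery such books cover can be illustrated with a minimal Huber location estimate computed by iteratively reweighted least squares. Python stands in for the book's R here, and the tuning constant and synthetic data are illustrative only:

```python
import numpy as np

# Hedged sketch: Huber M-estimate of location via iteratively reweighted
# least squares (IRLS). The tuning constant c = 1.345 gives ~95% efficiency
# at the normal model; the contaminated sample below is synthetic.

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    mu = np.median(x)                                       # robust start
    scale = np.median(np.abs(x - np.median(x))) / 0.6745    # MAD scale
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.ones_like(r)
        big = np.abs(r) > c
        w[big] = c / np.abs(r[big])        # Huber weights downweight outliers
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(10, 1, 95), rng.normal(50, 1, 5)])
print(np.mean(data), huber_location(data))
```

With 5% gross contamination, the sample mean is pulled to roughly 12 while the Huber estimate stays near the true center of 10.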

  12. A Robust Geometric Model for Argument Classification

    Science.gov (United States)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have recently been shown to benefit from rich sets of syntactic features. However, argument classification is also highly dependent on the semantics of the lexical items involved. Empirical studies have shown that domain dependence of lexical information causes large performance drops in out-of-domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  13. Reconfigurable Robust Routing for Mobile Outreach Network

    Science.gov (United States)

    Lin, Ching-Fang

    2010-01-01

    The Reconfigurable Robust Routing for Mobile Outreach Network (R3MOON) provides advanced communications networking technologies suitable for the lunar surface environment and applications. The R3MOON technology is based on a detailed concept of operations tailored for lunar surface networks, and includes intelligent routing algorithms and wireless mesh network implementation on AGNC's Coremicro Robots. The product's features include an integrated communication solution incorporating energy efficiency and disruption-tolerance in a mobile ad hoc network, and a real-time control module to provide researchers and engineers a convenient tool for reconfiguration, investigation, and management.

  14. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable, since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates that they also recover many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms identified many genes already known to be involved in cancer, as well as potential new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.

  15. A robust automatic leukocyte recognition method based on island-clustering texture

    Directory of Open Access Journals (Sweden)

    Xiaoshun Li

    2016-01-01

    Full Text Available A leukocyte recognition method for human peripheral blood smears based on island-clustering texture (ICT) is proposed. By analyzing the features of the five typical classes of leukocyte images, a new ICT model is established. First, feature points are extracted from a gray leukocyte image by mean-shift clustering to serve as the centers of islands. Second, region growing is employed to create the island regions, seeded at these feature points. The distribution of these islands describes a new texture. Finally, a discriminative parameter vector is created by combining the ICT features with the geometric features of the leukocyte. The five typical classes of leukocytes are then recognized with a correct recognition rate above 92.3% on a total sample of 1310 leukocytes. Experimental results show the feasibility of the proposed method. Further analysis reveals that the method is robust and the results can provide important information for disease diagnosis.

  16. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called the Complete Gabor-Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
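The idea of keeping both Gabor magnitude and Gabor phase maps can be sketched compactly with a complex Gabor kernel; the kernel parameters and the random test image below are placeholders, not the CGFC configuration:

```python
import numpy as np
from scipy.signal import convolve2d

# Hedged sketch: extract Gabor magnitude and Gabor phase maps from an image
# with a single complex Gabor kernel, in the spirit of combining the two
# information sources. Parameters are illustrative, not the paper's.

def gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.25):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.exp(2j * np.pi * freq * xr)              # complex carrier
    return envelope * carrier

rng = np.random.default_rng(0)
image = rng.random((32, 32))
response = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode='same')

magnitude = np.abs(response)   # classical Gabor magnitude features
phase = np.angle(response)     # phase features, as advocated by CGFC
print(magnitude.shape, phase.shape)
```

In a full pipeline a bank of such kernels (several orientations and scales) would be applied, with the magnitude and phase maps downsampled and concatenated before discriminant analysis.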

  17. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion pattern, and pose variations of foreground objects, as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT flow with interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  18. Robust online Hamiltonian learning

    International Nuclear Information System (INIS)

    Granade, Christopher E; Ferrie, Christopher; Wiebe, Nathan; Cory, D G

    2012-01-01

    In this work we combine two distinct machine learning methodologies, sequential Monte Carlo and Bayesian experimental design, and apply them to the problem of inferring the dynamical parameters of a quantum system. We design the algorithm with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online (during experimental data collection), avoiding the need for storage and post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment-to-experiment, and also when additional noise processes are present and unknown. The algorithm also numerically estimates the Cramer–Rao lower bound, certifying its own performance. (paper)
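The particle-filter core of such an algorithm can be sketched in a few lines. The toy below assumes a single-qubit model with likelihood P(0 | ω, t) = cos²(ωt/2) and a fixed schedule of experiment times, omitting the paper's adaptive experiment design and its more sophisticated resampler:

```python
import numpy as np

# Hedged sketch: sequential Monte Carlo estimation of a precession frequency
# omega from simulated binary measurements with P(0 | omega, t) = cos^2(omega*t/2).
# Model, prior range, and jitter are illustrative simplifications.

rng = np.random.default_rng(42)
true_omega = 0.7
particles = rng.uniform(0.0, 2.0, 2000)          # samples from a flat prior
weights = np.full(particles.size, 1.0 / particles.size)

for t in np.linspace(1.0, 15.0, 60):             # fixed experiment times
    p0 = np.cos(true_omega * t / 2) ** 2
    outcome = rng.random() < p0                  # simulate one measurement
    lik = np.cos(particles * t / 2) ** 2
    weights *= lik if outcome else (1 - lik)     # Bayes update per particle
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < particles.size / 2:   # low effective sample size
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx] + rng.normal(0, 0.01, particles.size)
        weights = np.full(particles.size, 1.0 / particles.size)

omega_hat = np.sum(weights * particles)          # posterior mean estimate
print(omega_hat)
```

The posterior mean converges toward the true frequency as measurements accumulate; the paper's adaptive choice of t would reach the same precision with far fewer experiments.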

  19. Construction of Individual Morphological Brain Networks with Multiple Morphometric Features

    Directory of Open Access Journals (Sweden)

    Chunlan Yang

    2017-04-01

    Full Text Available In recent years, researchers have paid increasing attention to the morphological brain network, which is generally constructed by measuring the mathematical correlation across regions using a certain morphometric feature, such as regional cortical thickness and voxel intensity. However, cerebral structure can be characterized by various factors, such as regional volume, surface area, and curvature. Moreover, most morphological brain networks are population-based, which limits investigations of individual differences and clinical applications. Hence, we have extended previous studies by proposing a novel method for constructing an individual-based morphological brain network through a combination of multiple morphometric features. In particular, interregional connections are estimated using our newly introduced feature vectors, namely, the Pearson correlation coefficient of the concatenation of seven morphometric features. Experiments were performed on a healthy cohort of 55 subjects (24 males aged from 20 to 29 and 31 females aged from 20 to 28), each scanned twice, and reproducibility was evaluated through test–retest reliability. The robustness of the morphometric features was measured first, in order to select the more reproducible features to form the connectomes. The topological properties were then analyzed and compared with previous reports of different modalities. Small-worldness was observed in all the subjects over the whole range of network sparsity (20–40%), and configurations were comparable with previous findings at the sparsity of 23%. The spatial distributions of the hubs were found to be significantly influenced by individual variance, and the hubs obtained by averaging across subjects and sparsities showed correspondence with previous reports. The intraclass coefficient of graphic properties (clustering coefficient = 0.83, characteristic path length = 0.81, betweenness centrality = 0.78) indicates
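The connection-estimation step, Pearson correlation of concatenated morphometric feature vectors followed by sparsity thresholding, can be sketched as follows; the region count, random feature values, and 23% sparsity target are placeholders echoing the setup described above:

```python
import numpy as np

# Hedged sketch: an individual morphological brain network built by
# correlating per-region morphometric feature vectors. The 68 regions and
# 7 features (e.g., thickness, volume, surface area, curvature, ...) are
# placeholders; real data would come from a structural MRI pipeline.

rng = np.random.default_rng(7)
n_regions, n_features = 68, 7
features = rng.random((n_regions, n_features))

# z-score each feature column so no single morphometric measure dominates
z = (features - features.mean(axis=0)) / features.std(axis=0)

# interregional connection = Pearson correlation of the feature vectors
connectome = np.corrcoef(z)
np.fill_diagonal(connectome, 0.0)

# threshold to a target sparsity (keep the strongest 23% of edges)
sparsity = 0.23
triu = connectome[np.triu_indices(n_regions, k=1)]
thresh = np.quantile(np.abs(triu), 1 - sparsity)
adjacency = (np.abs(connectome) >= thresh).astype(int)
print(adjacency.shape, adjacency.sum())
```

Graph measures such as clustering coefficient and characteristic path length would then be computed on `adjacency` across a range of sparsities.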

  20. PHROG: A Multimodal Feature for Place Recognition

    Directory of Open Access Journals (Sweden)

    Fabien Bonardi

    2017-05-01

    Full Text Available Long-term place recognition in outdoor environments remains a challenge due to high appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly with different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing common feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance. We develop a new feature extraction method dedicated to improving repeatability across spectral ranges. We conduct an evaluation of feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes and spectral ranges) with a Bag-of-Words approach. The tests we perform demonstrate that our method brings a significant improvement on the image retrieval issue in a visual place recognition context, particularly when there is a need to associate images from various spectral ranges such as infrared and visible: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR) and Long Wavelength InfraRed (LWIR).
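The Bag-of-Words retrieval step used in such an evaluation can be sketched with a k-means visual vocabulary; the random vectors standing in for PHROG descriptors, the vocabulary size, and the similarity measure are all illustrative assumptions:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

# Hedged sketch: Bag-of-Words image representation for place recognition.
# Random vectors stand in for local feature descriptors (e.g., PHROG);
# the vocabulary size and descriptor dimension are illustrative.

rng = np.random.default_rng(5)
train_desc = rng.random((500, 32))            # descriptors pooled from the map
vocab, _ = kmeans2(train_desc, k=16, seed=5)  # visual vocabulary (k-means)

def bow_histogram(descriptors, vocab):
    words, _ = vq(descriptors, vocab)         # assign each descriptor a word
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()                  # L1-normalized word histogram

query = bow_histogram(rng.random((80, 32)), vocab)
reference = bow_histogram(rng.random((80, 32)), vocab)
similarity = 1 - 0.5 * np.abs(query - reference).sum()  # histogram overlap
print(similarity)
```

Retrieval then amounts to ranking all reference images by this histogram similarity; cross-spectral robustness depends entirely on how repeatable the underlying descriptors are.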

  1. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    Science.gov (United States)

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows the quality of the mass candidate detector's output to be estimated for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature
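Feature maxima propagation, reproducing a peak over its neighboring locations so that per-location feature vectors stay aligned, can be approximated with a grey-scale dilation; the window size and synthetic feature map below are illustrative, not the authors' configuration:

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Hedged sketch: approximate "feature maxima propagation" with a grey-scale
# dilation, spreading each local peak of a feature map over its neighborhood
# so that peaked responses from different features align even when their
# maxima are slightly offset. The 7x7 window is an illustrative choice.

rng = np.random.default_rng(3)
feature_map = rng.random((64, 64))
feature_map[30, 30] = 5.0   # a peaked, mass-like response

propagated = maximum_filter(feature_map, size=7)

# the peak value now covers a 7x7 neighborhood around its original location
print(propagated[27:34, 27:34].min())
```

Applied to each feature map before per-pixel feature vectors are assembled, this makes the classifier tolerant to small misalignments between the maxima of different features.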

  2. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:

    o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.

    o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.

    o How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.

    o How to adapt boosting to regression problems
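The weighted linear combination of base hypotheses that boosting produces can be made concrete with plain AdaBoost over one-dimensional threshold stumps (without the thesis's soft-margin regularization); the synthetic data and round count are illustrative:

```python
import numpy as np

# Hedged sketch: AdaBoost with 1-D threshold stumps as base hypotheses.
# Each round finds the stump with lowest weighted error, assigns it a
# weight alpha, and reweights the examples it misclassifies.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.where(X > 0.1, 1, -1)            # labels from a 1-D threshold

w = np.full(X.size, 1.0 / X.size)       # example weights
stumps, alphas = [], []
for _ in range(10):                     # 10 boosting rounds
    best = None
    for thr in np.linspace(-1, 1, 41):
        for sign in (1, -1):
            pred = sign * np.where(X > thr, 1, -1)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    err, thr, sign = best
    err = max(err, 1e-12)               # guard against zero error
    alpha = 0.5 * np.log((1 - err) / err)
    pred = sign * np.where(X > thr, 1, -1)
    w *= np.exp(-alpha * y * pred)      # upweight misclassified examples
    w /= w.sum()
    stumps.append((thr, sign))
    alphas.append(alpha)

def ensemble(x):
    score = sum(a * s * np.where(x > t, 1, -1)
                for a, (t, s) in zip(alphas, stumps))
    return np.sign(score)

accuracy = np.mean(ensemble(X) == y)
print(accuracy)
```

The margin of an example is its label times the normalized ensemble score; the thesis's soft-margin variants regularize exactly this quantity to tolerate noisy labels.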

  3. Palmprint Based Verification System Using SURF Features

    Science.gov (United States)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype robust biometric system for verification. The system uses features of the human hand extracted with the Speeded Up Robust Features (SURF) operator. The hand image is acquired using a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images, and is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01% and an accuracy of 99.98%, and can be a suitable system for civilian applications and high-security environments.

  4. Effects of single-feature and dual-feature interference on interference control in children with combined type of attention deficit hyperactivity disorder%单一和双重干扰源对混合型注意缺陷多动障碍儿童干扰控制的影响

    Institute of Scientific and Technical Information of China (English)

    杨美玲; 杨双

    2012-01-01

    Objective: To explore the effects of single-feature and dual-feature interference on interference control in children with the combined type of attention deficit hyperactivity disorder (ADHD-C). Methods: Twenty-five children with ADHD-C meeting the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria, and 25 normal control children matched for age, gender and intelligence were tested with a figure-judgment Stroop task, in which they judged the direction of arrows in pairs of images. By manipulating the color and shape features of the pictures, single-feature and dual-feature interference conditions were set up, and correct rates and response times were analyzed to compare the conflict effects of the two groups. Results: Under the single-feature interference condition, the conflict effect of the ADHD group was significantly larger than that of the control group [(0.15 ± 0.03) vs. (0.01 ± 0.03), P = 0.001]. Under the dual-feature interference condition, no significant difference was observed between the two groups [(0.09 ± 0.04) vs. (0.01 ± 0.04), P = 0.093]. Conclusion: The results suggest that children with ADHD-C show a deficit in

  5. Combination of Metabolomic and Proteomic Analysis Revealed Different Features among Lactobacillus delbrueckii Subspecies bulgaricus and lactis Strains While In Vivo Testing in the Model Organism Caenorhabditis elegans Highlighted Probiotic Properties

    Directory of Open Access Journals (Sweden)

    Elena Zanni

    2017-06-01

    Full Text Available Lactobacillus delbrueckii represents a technologically relevant member of lactic acid bacteria, since the two subspecies bulgaricus and lactis are widely associated with fermented dairy products. In the present work, we report the characterization of two commercial strains belonging to L. delbrueckii subspecies bulgaricus, lactis and a novel strain previously isolated from a traditional fermented fresh cheese. A phenomic approach was performed by combining metabolomic and proteomic analysis of the three strains, which were subsequently supplemented as food source to the model organism Caenorhabditis elegans, with the final aim to evaluate their possible probiotic effects. Restriction analysis of 16S ribosomal DNA revealed that the novel foodborne strain belonged to L. delbrueckii subspecies lactis. Proteomic and metabolomic approaches showed differences in folate, aminoacid and sugar metabolic pathways among the three strains. Moreover, evaluation of C. elegans lifespan, larval development, brood size, and bacterial colonization capacity demonstrated that L. delbrueckii subsp. bulgaricus diet exerted beneficial effects on nematodes. On the other hand, both L. delbrueckii subsp. lactis strains affected lifespan and larval development. We have characterized three strains belonging to L. delbrueckii subspecies bulgaricus and lactis highlighting their divergent origin. In particular, the two closely related isolates L. delbrueckii subspecies lactis display different galactose metabolic capabilities. Moreover, the L. delbrueckii subspecies bulgaricus strain demonstrated potential probiotic features. Combination of omic platforms coupled with in vivo screening in the simple model organism C. elegans is a powerful tool to characterize industrially relevant bacterial isolates.

  6. Combination of Metabolomic and Proteomic Analysis Revealed Different Features among Lactobacillus delbrueckii Subspecies bulgaricus and lactis Strains While In Vivo Testing in the Model Organism Caenorhabditis elegans Highlighted Probiotic Properties.

    Science.gov (United States)

    Zanni, Elena; Schifano, Emily; Motta, Sara; Sciubba, Fabio; Palleschi, Claudio; Mauri, Pierluigi; Perozzi, Giuditta; Uccelletti, Daniela; Devirgiliis, Chiara; Miccheli, Alfredo

    2017-01-01

    Lactobacillus delbrueckii represents a technologically relevant member of lactic acid bacteria, since the two subspecies bulgaricus and lactis are widely associated with fermented dairy products. In the present work, we report the characterization of two commercial strains belonging to L. delbrueckii subspecies bulgaricus, lactis and a novel strain previously isolated from a traditional fermented fresh cheese. A phenomic approach was performed by combining metabolomic and proteomic analysis of the three strains, which were subsequently supplemented as food source to the model organism Caenorhabditis elegans, with the final aim to evaluate their possible probiotic effects. Restriction analysis of 16S ribosomal DNA revealed that the novel foodborne strain belonged to L. delbrueckii subspecies lactis. Proteomic and metabolomic approaches showed differences in folate, aminoacid and sugar metabolic pathways among the three strains. Moreover, evaluation of C. elegans lifespan, larval development, brood size, and bacterial colonization capacity demonstrated that L. delbrueckii subsp. bulgaricus diet exerted beneficial effects on nematodes. On the other hand, both L. delbrueckii subsp. lactis strains affected lifespan and larval development. We have characterized three strains belonging to L. delbrueckii subspecies bulgaricus and lactis highlighting their divergent origin. In particular, the two closely related isolates L. delbrueckii subspecies lactis display different galactose metabolic capabilities. Moreover, the L. delbrueckii subspecies bulgaricus strain demonstrated potential probiotic features. Combination of omic platforms coupled with in vivo screening in the simple model organism C. elegans is a powerful tool to characterize industrially relevant bacterial isolates.

  7. Interlinked bistable mechanisms generate robust mitotic transitions.

    Science.gov (United States)

    Hutter, Lukas H; Rata, Scott; Hochegger, Helfrid; Novák, Béla

    2017-10-18

    The transitions between phases of the cell cycle have evolved to be robust and switch-like, which ensures temporal separation of DNA replication, sister chromatid separation, and cell division. Mathematical models describing the biochemical interaction networks of cell cycle regulators attribute these properties to underlying bistable switches, which inherently generate robust, switch-like, and irreversible transitions between states. We have recently presented new mathematical models for two control systems that regulate crucial transitions in the cell cycle: mitotic entry and exit [1], and the mitotic checkpoint [2]. Each of the two control systems is characterized by two interlinked bistable switches. In the case of mitotic checkpoint control, these switches are mutually activating, whereas in the case of the mitotic entry/exit network, the switches are mutually inhibiting. In this Perspective we describe the qualitative features of these regulatory motifs and show that having two interlinked bistable mechanisms further enhances robustness and irreversibility. We speculate that these network motifs also underlie other cell cycle transitions and cellular transitions between distinct biochemical states.
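
    The switch-like behavior described above can be illustrated with a toy one-variable model (an illustrative sketch, not the authors' model): basal production plus cooperative positive feedback and linear decay. For suitable parameters the system is bistable, so trajectories started "off" and "on" settle at two different stable steady states. All parameter values below are assumptions chosen only to exhibit bistability.

```python
import numpy as np

def switch_rate(x, basal=0.02, vmax=1.0, K=0.5, n=4):
    # dx/dt: basal production + cooperative (Hill) positive feedback - decay
    return basal + vmax * x**n / (K**n + x**n) - x

def steady_state(x0, dt=0.05, steps=2000):
    # forward-Euler integration until the trajectory settles
    x = x0
    for _ in range(steps):
        x += dt * switch_rate(x)
    return x

low = steady_state(0.0)   # starting in the "off" state
high = steady_state(1.0)  # starting in the "on" state
```

Because the feedback is cooperative (Hill coefficient 4 here), both steady states coexist for the same parameters, which is the hysteresis that makes such transitions irreversible.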

  8. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, Adaboost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
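
    The effect of truncating an unbounded loss can be seen in a small numpy sketch (a generic cap on the exponential loss, not the authors' exact transformation): under AdaBoost's exponential loss a single extreme mislabel dominates the sample weights, while the capped loss bounds its influence.

```python
import numpy as np

# margins m = y * f(x); a large negative margin is an extreme outlier/mislabel
margins = np.array([2.0, 1.0, 0.5, -8.0])

def exp_loss(m):
    # AdaBoost's loss: unbounded as the margin goes negative
    return np.exp(-m)

def truncated_exp_loss(m, c=2.0):
    # hypothetical robust variant: cap the loss at exp(c)
    return np.minimum(np.exp(-m), np.exp(c))

# boosting reweights samples proportionally to the loss gradient/magnitude
w_exp = exp_loss(margins) / exp_loss(margins).sum()
w_rob = truncated_exp_loss(margins) / truncated_exp_loss(margins).sum()
```

With the plain exponential loss the single outlier absorbs essentially all of the weight; the truncated loss leaves the clean samples with a meaningful share, which is the mechanism behind robust boosting.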

  9. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, including bridges and buildings. Typically, modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes the practical use difficult. This paper describes a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines...

  10. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  11. Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting.

    Science.gov (United States)

    Wöllmer, Martin; Marchi, Erik; Squartini, Stefano; Schuller, Björn

    2011-09-01

    Highly spontaneous, conversational, and potentially emotional and noisy speech is known to be a challenge for today's automatic speech recognition (ASR) systems, which highlights the need for advanced algorithms that improve speech features and models. Histogram equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components. In this article, we propose to combine histogram equalization and multi-condition training for robust keyword detection in noisy speech. To better cope with conversational speaking styles, we show how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory neural network. The proposed techniques are evaluated on the SEMAINE database, a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".
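
    Histogram equalization of a single feature component can be sketched as a quantile mapping onto a reference distribution (here a standard normal; this rank-based variant is a simplification of the method in the article, and the data is synthetic):

```python
import numpy as np
from statistics import NormalDist

def histogram_equalize(x, ref=NormalDist(0.0, 1.0)):
    # map each value to the reference-distribution quantile of its
    # empirical CDF rank, so the output follows the reference distribution
    n = len(x)
    ranks = np.argsort(np.argsort(x))   # rank of each sample
    cdf = (ranks + 0.5) / n             # midpoint-corrected empirical CDF
    return np.array([ref.inv_cdf(p) for p in cdf])

rng = np.random.default_rng(0)
x = np.exp(rng.normal(size=200))        # log-normal: heavily skewed "noisy" feature
z = histogram_equalize(x)               # approximately standard normal afterwards
```

The mapping is monotone, so the ordering of feature values is preserved while the distorted distribution is pulled back toward the clean reference, which is what reduces the clean/noisy mismatch.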

  12. Robust electrocardiogram (ECG) beat classification using discrete wavelet transform

    International Nuclear Information System (INIS)

    Minhas, Fayyaz-ul-Amir Afsar; Arif, Muhammad

    2008-01-01

    This paper presents a robust technique for the classification of six types of heartbeats through an electrocardiogram (ECG). Features extracted from the QRS complex of the ECG using a wavelet transform, along with the instantaneous RR-interval, are used for beat classification. The wavelet transform utilized for feature extraction in this paper can also be employed for QRS delineation, leading to a reduction in overall system complexity, as no separate feature extraction stage would be required in the practical implementation of the system. Only 11 features are used for beat classification, with a classification accuracy of ∼99.5% through a KNN classifier. Another main advantage of this method is its robustness to noise, which is illustrated in this paper through experimental results. Furthermore, principal component analysis (PCA) has been used for feature reduction, which reduces the number of features from 11 to 6 while retaining the high beat classification accuracy. Due to its reduced computational complexity (using six features, the time required is ∼4 ms per beat), simple classifier and noise robustness (at a 10 dB signal-to-noise ratio, accuracy is 95%), this method offers substantial advantages over previous techniques for implementation in a practical ECG analyzer.
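
    The PCA-plus-KNN stage of such a pipeline can be sketched with numpy on synthetic stand-ins for the 11 beat features (the data, class geometry, and k below are illustrative assumptions, not the paper's ECG dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for 11 wavelet/RR-interval features per beat, two beat types
class0 = rng.normal(0.0, 0.3, size=(30, 11))
class1 = rng.normal(2.0, 0.3, size=(30, 11))
X = np.vstack([class0, class1])
y = np.array([0] * 30 + [1] * 30)

# PCA via SVD: project the 11 features onto the top 6 principal components
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X6 = Xc @ Vt[:6].T

def knn_predict(train_X, train_y, query, k=3):
    # majority vote among the k nearest training beats in the reduced space
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# classify a new beat whose raw features sit near the class-1 prototype
q6 = (np.full(11, 2.0) - mean) @ Vt[:6].T
pred = knn_predict(X6, y, q6)
```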

  13. Prediction of Effective Drug Combinations by Chemical Interaction, Protein Interaction and Target Enrichment of KEGG Pathways

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2013-01-01

    Full Text Available Drug combinatorial therapy could be more effective in treating some complex diseases than single agents due to better efficacy and reduced side effects. Although some drug combinations are being used, their underlying molecular mechanisms are still poorly understood. Therefore, it is of great interest to deduce a novel drug combination by their molecular mechanisms in a robust and rigorous way. This paper attempts to predict effective drug combinations by a combined consideration of: (1) chemical interaction between drugs, (2) protein interactions between drugs’ targets, and (3) target enrichment of KEGG pathways. A benchmark dataset was constructed, consisting of 121 confirmed effective combinations and 605 random combinations. Each drug combination was represented by 465 features derived from the aforementioned three properties. Some feature selection techniques, including Minimum Redundancy Maximum Relevance and Incremental Feature Selection, were adopted to extract the key features. A random forest model was built with its performance evaluated by 5-fold cross-validation. As a result, 55 key features providing the best prediction result were selected. These important features may help to gain insights into the mechanisms of drug combinations, and the proposed prediction model could become a useful tool for screening possible drug combinations.
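
    The Minimum Redundancy Maximum Relevance idea can be sketched in numpy as a greedy score (this toy version uses absolute correlation as a proxy for the mutual information in the original criterion, and the data is synthetic):

```python
import numpy as np

def mrmr_rank(X, y, k):
    # greedy mRMR: prefer features highly relevant to y and weakly
    # redundant with the features already selected
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            scores[j] = relevance[j] - redundancy
        selected.append(max(scores, key=scores.get))
    return selected

rng = np.random.default_rng(0)
a, b = rng.normal(size=500), rng.normal(size=500)
y = a + b                                            # label driven by two signals
X = np.column_stack([a,                              # relevant
                     a + 0.01 * rng.normal(size=500),  # near-duplicate of column 0
                     b,                              # relevant and complementary
                     rng.normal(size=500)])          # pure noise
sel = mrmr_rank(X, y, k=2)
```

The redundancy penalty makes the selector skip the near-duplicate feature in favor of the complementary one, which is why such rankings can shrink hundreds of features to a small informative subset.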

  14. Combined Heuristic Attack Strategy on Complex Networks

    Directory of Open Access Journals (Sweden)

    Marek Šimon

    2017-01-01

    Full Text Available Usually, the existence of a complex network is considered an advantageous feature, and efforts are made to increase its robustness against an attack. However, there also exist harmful and/or malicious networks, from social ones like spreading hoaxes, corruption, phishing, extremist ideology, and terrorist support, up to computer networks spreading computer viruses or DDoS attack software, or even biological networks of carriers or transport centers spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case scenario test for robustness of a useful network. A common measure of robustness of networks is their disintegration level after removal of a fraction of nodes. This robustness can be calculated as a ratio of the number of nodes of the greatest remaining network component against the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. Efficiency of classical attack approaches is compared to the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks and on real-world networks. Our attack strategy results in a faster disintegration, which is counterbalanced by its slightly increased computational demands.
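
    The disintegration measure and a simple degree-based sequential attack can be sketched in pure Python (a classical baseline heuristic, not the combined heuristic proposed in the paper; the toy graph is illustrative):

```python
from collections import deque

def largest_component_ratio(adj, removed):
    # fraction of the ORIGINAL node count left in the largest connected
    # component after deleting the nodes in `removed` (BFS over survivors)
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / len(adj)

def degree_attack(adj, k):
    # greedy heuristic: repeatedly delete the highest-degree surviving node
    removed = set()
    for _ in range(k):
        target = max((n for n in adj if n not in removed),
                     key=lambda n: sum(v not in removed for v in adj[n]))
        removed.add(target)
    return removed

# hub-and-spoke toy network: node 0 connects to everything else
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
removed = degree_attack(adj, 1)
ratio = largest_component_ratio(adj, removed)
```

Removing the single hub shatters the toy network into isolated spokes, dropping the giant-component ratio from 1.0 to 1/6, which is exactly the disintegration effect the attack heuristics try to maximize.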

  15. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.
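
    The unbounded influence of an outlier on the classical inputs can be demonstrated in a few lines of numpy. The sample median below is only a simple stand-in for the paper's pseudodistance-based estimators, and the returns are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.001, 0.01, size=(250, 3))   # synthetic daily returns, 3 assets
R[0, 0] = -0.8                               # one gross outlier in asset 0

mu_classic = R.mean(axis=0)                  # dragged far off by the outlier
mu_robust = np.median(R, axis=0)             # barely affected

def min_var_weights(S):
    # global minimum-variance portfolio: w proportional to S^{-1} 1,
    # normalized so the weights sum to one
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

w = min_var_weights(np.cov(R, rowvar=False))
```

A single corrupted observation flips the classical mean estimate of asset 0 from positive to negative, while the robust location estimate stays near the true value; the same contamination effect motivates replacing the covariance input as well.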

  16. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    Science.gov (United States)

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing.

  17. Robust methods for data reduction

    CERN Document Server

    Farcomeni, Alessio

    2015-01-01

    Robust Methods for Data Reduction gives a non-technical overview of robust data reduction techniques, encouraging the use of these important and useful methods in practical applications. The main areas covered include principal components analysis, sparse principal component analysis, canonical correlation analysis, factor analysis, clustering, double clustering, and discriminant analysis. The first part of the book illustrates how dimension reduction techniques synthesize available information by reducing the dimensionality of the data. The second part focuses on cluster and discriminant analysis.

  18. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide range of Phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large and open-access database made available in the Physionet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smartphone-based digital stethoscope from an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior art approaches when applied on the same dataset.

  19. Robust hydraulic position controller by a fuzzy state controller

    International Nuclear Information System (INIS)

    Zhao, T.; Van der Wal, A.J.

    1994-01-01

    In the nuclear industry, one of the most important design considerations for controllers is their robustness. Robustness in this context is defined as the ability of a system to be controlled in a stable way over a wide range of system parameters. Generally, the systems to be controlled are linearized, and stability is subsequently proven for this idealized system. By combining classical control theory and fuzzy set theory, a new kind of state controller is proposed and successfully applied to a hydraulic position servo with excellent robustness against variation of system parameters.

  20. Robust Position Control of Electro-mechanical Systems

    OpenAIRE

    Rong Mei; Mou Chen

    2013-01-01

    In this work, a robust position control scheme is proposed for the electro-mechanical system using a disturbance observer and the backstepping control method. A nonlinear disturbance observer is designed to estimate the unknown external load of the electro-mechanical system. Combining the output of the developed nonlinear disturbance observer with backstepping technology, the robust position control scheme is proposed for the electro-mechanical system. The stabili...

  1. Robust Forecasting of Non-Stationary Time Series

    OpenAIRE

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...

  2. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Full Text Available Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s are used as reference(s to reconstruct original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between reference and target images in order to transfer color information from reference to target pixels. A lot of attractive local feature-based image matching methods have already been developed for the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixel with low computational cost and generating descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector; namely, we will discuss how to learn a descriptive texture feature using scaled sparse texture feature combining with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show our proposed method outperforms the state-of-the-art methods in terms of robustness for color reconstruction for automatic colorization applications.

  3. Prostate cancer detection: Fusion of cytological and textural features

    Directory of Open Access Journals (Sweden)

    Kien Nguyen

    2011-01-01

    Full Text Available A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i to locate cancer regions in a large digitized tissue biopsy, and (ii to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000x7,000 pixels and 1 1 whole-slide test images (each of which has approximately 5,000x23,000 pixels. All images are at 20X magnification.

  4. Prostate cancer detection: Fusion of cytological and textural features.

    Science.gov (United States)

    Nguyen, Kien; Jain, Anil K; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  5. Birth control pills - combination

    Science.gov (United States)

    Combination birth control pills contain both progestin and estrogen. Birth control pills help keep you from ...

  6. Designing Phononic Crystals with Wide and Robust Band Gaps

    Science.gov (United States)

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; Wang, Lifeng

    2018-04-01

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  7. Designing Phononic Crystals with Wide and Robust Band Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yanyu [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jia, Zian [State University of New York at Stony Brook; Yang, Haoxiang [State University of New York at Stony Brook; Wang, Lifeng [State University of New York at Stony Brook

    2018-04-16

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  8. Robust optimization in simulation : Taguchi and Krige combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2012-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by Kriging metamodels.

  9. Contributions to robust methods of creep analysis

    International Nuclear Information System (INIS)

    Penny, B.K.

    1991-01-01

    Robust methods for the predictions of deformations and lifetimes of components operating in the creep range are presented. The ingredients used for this are well-tried numerical techniques combined with the concepts of continuum damage and so-called reference stresses. The methods described are derived in order to obtain the maximum benefit during the early stages of design where broad assessments of the influences of material choice, loadings and geometry need to be made quickly and with economical use of computers. It is also intended that the same methods will be of value during operation if estimates of damage or if exercises in life extension or inspection timing are required. (orig.)

  10. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    Science.gov (United States)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it is inefficient in dealing with high-dimensional features and relies on manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
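
    The two-stage idea, a random feature projection followed by logistic regression, can be sketched on synthetic data (binary rather than multinomial, plain gradient descent instead of LORSAL; the data, dimensions, and learning rate below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic stand-in for pixels of two classes in 10 spectral bands
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)),
               rng.normal(3.0, 1.0, (40, 10))])
y = np.array([0] * 40 + [1] * 40)

# stage 1: project into a random feature space with fixed weight and bias
W = rng.normal(size=(10, 20))
b = rng.normal(size=20)
H = np.hstack([np.tanh(X @ W + b), np.ones((len(X), 1))])  # plus bias column

# stage 2: logistic regression on the projected features (gradient descent)
w = np.zeros(H.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w)))
    w -= 0.5 * H.T @ (p - y) / len(y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(H @ w)))) > 0.5) == (y == 1))
```

Because the projection weights are drawn once and frozen, only the final regressor is trained, which is what makes this "extreme"-learning-style construction fast compared with tuning the projection itself.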

  11. Combined laser and atomic force microscope lithography on aluminum: Mask fabrication for nanoelectromechanical systems

    DEFF Research Database (Denmark)

    Berini, Abadal Gabriel; Boisen, Anja; Davis, Zachary James

    1999-01-01

    A direct-write laser system and an atomic force microscope (AFM) are combined to modify thin layers of aluminum on an oxidized silicon substrate, in order to fabricate conducting and robust etch masks with submicron features. These masks are very well suited for the production of nanoelectromechanical systems (NEMS)... writing, and to perform submicron modifications by AFM oxidation. The mask fabrication for a nanoscale suspended resonator bridge is used to illustrate the advantages of this combined technique for NEMS. (C) 1999 American Institute of Physics. [S0003-6951(99)00221-1]

  12. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained by a traditional backpropagation learning format, which minimises the mean squared error (MSE) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A., & Argyros, I. K., Robust estimation and testing for general nonlinear regression models, Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
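
    The effect of a bounded influence function can be reproduced with a much simpler robust estimator than the tau-estimate: below, Huber-weighted iteratively reweighted least squares (a stand-in for the paper's machinery, on made-up data) fits a line through data containing one gross outlier, while the MSE fit is pulled far off target.

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x
y[9] = 100.0                      # gross outlier (the clean value would be 18)

X = np.column_stack([np.ones_like(x), x])
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary (MSE) fit

def irls_huber(X, y, delta=1.0, iters=50):
    # iteratively reweighted least squares with Huber weights: residuals
    # beyond delta get weight delta/|r|, which bounds their influence
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta,
                     1.0, delta / np.maximum(np.abs(r), 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

beta_h = irls_huber(X, y)
```

The least-squares slope is severely biased by the single outlier, whereas the bounded-influence fit recovers a slope close to the true value of 2, which is exactly the behavior the TAO-robust cost function brings to backpropagation training.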

  13. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, which means it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subjected to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
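
    The Tikhonov-style third approach can be seen in a two-variable example: on a nearly singular system, ordinary least squares amplifies a tiny data perturbation, while the regularized solution stays near the well-behaved answer (the matrix and the regularization weight below are ad hoc illustrations):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])      # nearly singular design matrix
b = np.array([2.0, 2.0002])       # tiny perturbation of a consistent right-hand side

# ordinary least squares: the 1e-4 perturbation swings the solution to [0, 2]
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2
lam = 0.01
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
```

Restricting the solution norm trades a tiny residual increase for a large gain in stability: the regularized solution remains near [1, 1] instead of jumping to [0, 2], which is the sensitivity-control effect the abstract describes.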

  14. Featuring animacy

    Directory of Open Access Journals (Sweden)

    Elizabeth Ritter

    2015-01-01

    Full Text Available Algonquian languages are famous for their animacy-based grammatical properties—an animacy-based noun classification system and a direct/inverse system which gives rise to animacy hierarchy effects in the determination of verb agreement. In this paper I provide new evidence for the proposal that the distinctive properties of these languages are due to the use of participant-based features, rather than spatio-temporal ones, for both nominal and verbal functional categories (Ritter & Wiltschko 2009, 2014). Building on Wiltschko (2012), I develop a formal treatment of the Blackfoot aspectual system that assumes a category Inner Aspect (cf. MacDonald 2008, Travis 1991, 2010). Focusing on lexical aspect in Blackfoot, I demonstrate that the classification of both nouns (Seinsarten) and verbs (Aktionsarten) is based on animacy, rather than boundedness, resulting in a strikingly different aspectual system for both categories.

  15. Muscle Synergy-Driven Robust Motion Control.

    Science.gov (United States)

    Min, Kyuengbo; Iwamoto, Masami; Kakei, Shinji; Kimpara, Hideyuki

    2018-04-01

    Humans are able to robustly maintain desired motion and posture under dynamically changing circumstances, including novel conditions. To accomplish this, the brain needs to optimize the synergistic control between muscles against external dynamic factors. However, previous related studies have usually simplified the control of multiple muscles to two opposing muscles, the minimal set of actuators needed to simulate linear feedback control. As a result, they have been unable to analyze how muscle synergy contributes to motion control robustness in a biological system. To address this issue, we considered a new muscle synergy concept used to optimize the synergy between muscle units against external dynamic conditions, including novel conditions. We propose that two main muscle control policies synergistically control muscle units to maintain the desired motion against external dynamic conditions. Our assumption is based on biological evidence regarding the control of multiple muscles via the corticospinal tract. One of the policies is the group control policy (GCP), which is used to control muscle group units classified based on functional similarities in joint control. This policy is used to effectively resist external dynamic circumstances, such as disturbances. The individual control policy (ICP) assists the GCP in precisely controlling motion by controlling individual muscle units. To validate this hypothesis, we simulated the reinforcement of the synergistic actions of the two control policies during the reinforcement learning of feedback motion control. Using this learning paradigm, the two control policies were synergistically combined to result in robust feedback control under novel transient and sustained disturbances that did not involve learning. Further, by comparing our data to experimental data generated by human subjects under the same conditions as those of the simulation, we showed that the proposed synergy concept may be used to analyze muscle synergy.

  16. Web-based thyroid imaging reporting and data system: Malignancy risk of atypia of undetermined significance or follicular lesion of undetermined significance thyroid nodules calculated by a combination of ultrasonography features and biopsy results.

    Science.gov (United States)

    Choi, Young Jun; Baek, Jung Hwan; Shin, Jung Hee; Shim, Woo Hyun; Kim, Seon-Ok; Lee, Won-Hong; Song, Dong Eun; Kim, Tae Yong; Chung, Ki-Wook; Lee, Jeong Hyun

    2018-05-13

    The purpose of this study was to construct a web-based predictive model using ultrasound characteristics and subcategorized biopsy results for thyroid nodules of atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS) to stratify the risk of malignancy. Data included 672 thyroid nodules from 656 patients in a historical cohort. We analyzed ultrasound images of thyroid nodules and biopsy results according to nuclear atypia and architectural atypia. Multivariate logistic regression analysis was performed to predict whether nodules were diagnosed as malignant or benign. The ultrasound features, including spiculated margin, marked hypoechogenicity, calcifications, biopsy results, and cytologic atypia, showed significant differences between groups. A 13-point risk scoring system was developed; the areas under the curve (AUC) of the receiver operating characteristic (ROC) curves for the development and validation sets were 0.837 and 0.830, respectively (http://www.gap.kr/thyroidnodule_b3.php). We devised a web-based predictive model using the combined information of ultrasound characteristics and biopsy results for AUS/FLUS thyroid nodules to stratify the risk of malignancy. © 2018 Wiley Periodicals, Inc.
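As a quick illustration of how a ROC AUC such as the reported 0.837/0.830 is computed from a point-based risk score, here is a minimal sketch; the scores and labels below are made up, not the study's data.

```python
import numpy as np

def roc_auc(scores, labels):
    # AUC = probability that a randomly chosen malignant case scores
    # higher than a randomly chosen benign case (ties count one half)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical 0-13 point risk scores (label 0 = benign, 1 = malignant)
scores = np.array([2, 3, 5, 6, 7, 9, 10, 12])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1])
print(roc_auc(scores, labels))  # -> 0.9375
```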

  17. Feature Matching for SAR and Optical Images Based on Gaussian-Gamma-shaped Edge Strength Map

    Directory of Open Access Journals (Sweden)

    CHEN Min

    2016-03-01

    Full Text Available A matching method for SAR and optical images, robust to pixel noise and nonlinear grayscale differences, is presented. Firstly, a rough correction to eliminate rotation and scale change between images is performed. Secondly, features robust to the speckle noise of SAR images are detected by an improved version of the original phase-congruency-based method. Then, feature descriptors are constructed on the Gaussian-Gamma-shaped edge strength map according to the histogram of oriented gradient pattern. Finally, descriptor similarity and geometrical relationships are combined to constrain the matching process. The experimental results demonstrate that the proposed method provides a significant improvement in the number of correct matches and in image registration accuracy compared with other traditional methods.
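The final matching stage described above, combining descriptor similarity with a consistency constraint, can be sketched in a few lines. This hypothetical example uses plain nearest-neighbour matching with Lowe's ratio test; a geometric-consistency check on the matched points would follow in a full pipeline.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    # Nearest-neighbour descriptor matching with Lowe's ratio test:
    # keep a match only if its best distance is clearly smaller than
    # the second-best, which suppresses ambiguous correspondences
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    idx = np.arange(len(d1))
    keep = dists[idx, best] < ratio * dists[idx, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# Hypothetical 2-D descriptors: two clear correspondences, one distractor
d1 = np.array([[0.0, 0.0], [5.0, 5.0]])
d2 = np.array([[0.1, 0.0], [5.0, 5.1], [10.0, 10.0]])
print(match_descriptors(d1, d2))  # -> [(0, 0), (1, 1)]
```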

  18. Advances in robust fractional control

    CERN Document Server

    Padula, Fabrizio

    2015-01-01

    This monograph presents design methodologies for (robust) fractional control systems. It shows the reader how to take advantage of the superior flexibility of fractional control systems compared with integer-order systems in achieving more challenging control requirements. There is a high degree of current interest in fractional systems and fractional control arising from both academia and industry and readers from both milieux are catered to in the text. Different design approaches having in common a trade-off between robustness and performance of the control system are considered explicitly. The text generalizes methodologies, techniques and theoretical results that have been successfully applied in classical (integer) control to the fractional case. The first part of Advances in Robust Fractional Control is the more industrially-oriented. It focuses on the design of fractional controllers for integer processes. In particular, it considers fractional-order proportional-integral-derivative controllers, becau...

  19. Robustness of digital artist authentication

    DEFF Research Database (Denmark)

    Jacobsen, Robert; Nielsen, Morten

    In many cases it is possible to determine the authenticity of a painting from digital reproductions of the paintings; this has been demonstrated for a variety of artists and with different approaches. Common to all these methods in digital artist authentication is that the potential of the method...... is in focus, while the robustness has not been considered, i.e. the degree to which the data collection process influences the decision of the method. However, in order for an authentication method to be successful in practice, it needs to be robust to plausible error sources from the data collection....... In this paper we investigate the robustness of the newly proposed authenticity method introduced by the authors based on second generation multiresolution analysis. This is done by modelling a number of realistic factors that can occur in the data collection....

  20. Attractive ellipsoids in robust control

    CERN Document Server

    Poznyak, Alexander; Azhmyakov, Vadim

    2014-01-01

    This monograph introduces a newly developed robust-control design technique for a wide class of continuous-time dynamical systems called the “attractive ellipsoid method.” Along with a coherent introduction to the proposed control design and related topics, the monograph studies nonlinear affine control systems in the presence of uncertainty and presents a constructive and easily implementable control strategy that guarantees certain stability properties. The authors discuss linear-style feedback control synthesis in the context of the above-mentioned systems. The development and physical implementation of high-performance robust-feedback controllers that work in the absence of complete information is addressed, with numerous examples to illustrate how to apply the attractive ellipsoid method to mechanical and electromechanical systems. While theorems are proved systematically, the emphasis is on understanding and applying the theory to real-world situations. Attractive Ellipsoids in Robust Control will a...

  1. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  2. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

    This study considers the problem of enhancing railway timetable robustness without adding slack time, hence increasing the travel time. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers...

  3. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers with misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
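The spectral residual saliency algorithm mentioned above is compact enough to sketch directly: subtract a local average of the log-amplitude spectrum, keep the phase, and invert the FFT. The image and filter size below are made-up test data, not the paper's.

```python
import numpy as np

def spectral_residual_saliency(img, k=3):
    # Spectral residual saliency: the "innovation" in the log-amplitude
    # spectrum, combined with the original phase, highlights salient regions
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # k x k box filter of the log-amplitude spectrum
    pad = np.pad(log_amp, k // 2, mode='edge')
    avg = np.zeros_like(log_amp)
    for dy in range(k):
        for dx in range(k):
            avg += pad[dy:dy + log_amp.shape[0], dx:dx + log_amp.shape[1]]
    avg /= k * k
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

rng = np.random.default_rng(1)
img = rng.random((64, 64)) * 0.1
img[24:40, 24:40] = 1.0            # a bright, "salient" block
sal = spectral_residual_saliency(img)
```

A full tracker would smooth this map and threshold it to obtain the saliency confidence used in sample selection.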

  4. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Lars, Krogstie; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities... to be applicable in organisations assigning a high importance to one or more factors that are known to be impacted by RD, while also experiencing a high level of occurrence of this factor. The RDAM supplements existing maturity models and metrics to provide a comprehensive set of data to support management...

  5. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet, and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...

  6. Bread crumb classification using fractal and multifractal features

    OpenAIRE

    Baravalle, Rodrigo Guillermo; Delrieux, Claudio Augusto; Gómez, Juan Carlos

    2017-01-01

    Adequate image descriptors are fundamental in image classification and object recognition. The main requirements for image features are robustness and low dimensionality, which lead to low classification errors in a variety of situations at a reasonable computational cost. In this context, the identification of materials poses a significant challenge, since typical (geometric and/or differential) feature extraction methods are not robust enough. Texture features based on Fourier or wav...

  7. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application with an example, and suggest its use as an integral part of the space system design process.
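The signal-to-noise ratio mentioned above has standard Taguchi forms that are easy to state in code. In this sketch the measurements are hypothetical replicates for one row of an orthogonal array; the parameter combination (row) with the highest S/N ratio would be preferred.

```python
import numpy as np

def sn_smaller_the_better(y):
    # Taguchi signal-to-noise ratio (in dB) when a smaller response is better
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    # ...and when a larger response is better
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicated measurements for one orthogonal-array row
print(round(sn_smaller_the_better([0.8, 1.0, 1.2]), 2))
print(round(sn_larger_the_better([10.0, 10.0]), 2))  # -> 20.0
```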

  8. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended [Formula: see text]-SVM (E[Formula: see text]-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of E[Formula: see text]-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and E[Formula: see text]-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either E[Formula: see text]-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.
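The nonconvex loss that gives robust SVMs their outlier resistance is typically the ramp loss, which can be written as a difference of two convex hinge-type functions; this DC (difference-of-convex) structure is what DC algorithms exploit. The sketch below shows only that loss, not ER-SVM's additional nonconvex constraint.

```python
import numpy as np

def hinge(z):
    # Convex hinge loss on the margin z = y * f(x)
    return np.maximum(0.0, 1.0 - z)

def ramp(z, s=-1.0):
    # Ramp loss = hinge(z) - max(0, s - z): a difference of two convex
    # functions, bounded by 1 - s, so gross outliers have limited influence
    return hinge(z) - np.maximum(0.0, s - z)

margins = np.array([-5.0, -1.0, 0.0, 1.0, 2.0])
print(ramp(margins))  # -> [2. 2. 1. 0. 0.]
```

Note how the badly misclassified point (margin -5) incurs the same loss as a mildly misclassified one, unlike the unbounded hinge.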

  9. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
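The chemical formula calculation step can be illustrated with a toy version: given a measured monoisotopic mass, enumerate CHNO formulas that match within a tolerance. This exhaustive search is a hypothetical stand-in for the mixed integer linear program described above, not the RAMSI procedure itself.

```python
# Monoisotopic masses of the elements considered in this toy example
MASSES = {'C': 12.0, 'H': 1.007825, 'N': 14.003074, 'O': 15.994915}

def formula_candidates(target, tol=0.001, max_atoms=30):
    # Enumerate CcHhNnOo formulas whose mass matches target within tol;
    # the early breaks prune branches that already exceed the target
    hits = []
    for c in range(max_atoms):
        mc = c * MASSES['C']
        if mc > target + tol:
            break
        for h in range(max_atoms):
            mh = mc + h * MASSES['H']
            if mh > target + tol:
                break
            for n in range(max_atoms):
                mn = mh + n * MASSES['N']
                if mn > target + tol:
                    break
                for o in range(max_atoms):
                    m = mn + o * MASSES['O']
                    if m > target + tol:
                        break
                    if abs(m - target) <= tol:
                        hits.append((c, h, n, o))
    return hits

# Glucose (C6H12O6) has monoisotopic mass ~180.0634
print((6, 12, 0, 6) in formula_candidates(180.0634))
```

The MILP formulation adds what this sketch lacks: linear constraints tying adducts, fragments, and multimers to a common neutral formula, solved in a single optimization step.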

  10. A robust data-driven approach for gene ontology annotation.

    Science.gov (United States)

    Li, Yanpeng; Yu, Hong

    2014-01-01

    Gene ontology (GO) and GO annotation are important resources for biological information management and knowledge discovery, but the speed of manual annotation has become a major bottleneck of database curation. The BioCreative IV GO annotation task aims to evaluate the performance of systems that automatically assign GO terms to genes based on narrative sentences in the biomedical literature. This article presents our work in this task as well as the experimental results obtained after the competition. For the evidence sentence extraction subtask, we built a binary classifier to identify evidence sentences using a reference distance estimator (RDE), a recently proposed semi-supervised learning method that learns new features from around 10 million unlabeled sentences, achieving an F1 of 19.3% in exact match and 32.5% in relaxed match. In the post-submission experiment, we obtained 22.1% and 35.7% F1 performance by incorporating bigram features in RDE learning. In both development and test sets, the RDE-based method achieved over 20% relative improvement in F1 and AUC performance over classical supervised learning methods, e.g. support vector machines and logistic regression. For the GO term prediction subtask, we developed an information retrieval-based method to retrieve the GO term most relevant to each evidence sentence, using a ranking function that combines cosine similarity with the frequency of GO terms in documents, and a filtering method based on high-level GO classes. The best performance of our submitted runs was 7.8% F1 and 22.2% hierarchy F1. We found that the incorporation of frequency information and hierarchy filtering substantially improved the performance. In the post-submission evaluation, we obtained a 10.6% F1 using a simpler setting. Overall, the experimental analysis showed our approaches were robust in both tasks. © The Author(s) 2014. Published by Oxford University Press.
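The ranking function for the GO term prediction subtask, cosine similarity blended with term frequency, can be sketched as follows. The GO names, frequencies, and blend weight `alpha` are all made up for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_go_terms(sentence, go_terms, freq, alpha=0.7):
    # Score = cosine similarity to the GO term name, blended with how
    # often the term appears in the document collection
    s = Counter(sentence.lower().split())
    scored = [(alpha * cosine(s, Counter(name.lower().split()))
               + (1 - alpha) * freq.get(term, 0.0), term)
              for term, name in go_terms.items()]
    return sorted(scored, reverse=True)

go_terms = {'GO:0006915': 'apoptotic process',
            'GO:0008283': 'cell population proliferation'}
freq = {'GO:0006915': 0.3, 'GO:0008283': 0.1}
ranking = rank_go_terms('the protein regulates the apoptotic process',
                        go_terms, freq)
print(ranking[0][1])  # -> GO:0006915
```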

  11. Infrared face recognition based on LBP histogram and KW feature selection

    Science.gov (United States)

    Xie, Zhihua

    2014-07-01

    The conventional local binary pattern (LBP) histogram feature still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT) or principal component analysis (PCA).
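The two building blocks above, an LBP histogram per block and a Kruskal-Wallis test per histogram bin, can be sketched as follows. The "classes" here are synthetic random images, not infrared faces, and the use of `scipy.stats.kruskal` per bin is one plausible reading of the KW selection step.

```python
import numpy as np
from scipy.stats import kruskal

def lbp_codes(img):
    # 8-neighbour local binary pattern: threshold each neighbour against
    # the centre pixel and pack the comparisons into an 8-bit code
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(img):
    return np.bincount(lbp_codes(img).ravel(), minlength=256)

def kw_pvalue(xs, ys):
    # Kruskal-Wallis needs some variation; identical samples get p = 1
    if len(set(xs) | set(ys)) == 1:
        return 1.0
    return kruskal(xs, ys).pvalue

# Keep only the LBP bins whose counts differ significantly across the
# (synthetic) classes -- the KW selection step
rng = np.random.default_rng(0)
class_a = [lbp_histogram(rng.random((16, 16))) for _ in range(5)]
class_b = [lbp_histogram(rng.random((16, 16)) ** 3) for _ in range(5)]
selected = [b for b in range(256)
            if kw_pvalue([h[b] for h in class_a],
                         [h[b] for h in class_b]) < 0.05]
```

The selected bins form the reduced-dimension feature vector that the recognizer would then use.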

  12. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In the proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not forecast short-term wind speed consistently well for the two sites, and that the proposed combination method generates a more reliable and accurate forecast. - Highlights: • The proposed approach enables probabilistic modeling of wind speed series. • The proposed approach adapts to the time-varying characteristics of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
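The core idea, a GPR model that maps the individual engines' forecasts to the observed value, can be sketched with a minimal numpy Gaussian process (posterior mean only, RBF kernel). The two toy "engines" below stand in for the ARIMA/ELM/SVM/LSSVM ensemble; kernel length and noise are arbitrary choices.

```python
import numpy as np

def gpr_fit_predict(X, y, Xs, length=1.0, noise=1e-2):
    # Minimal Gaussian process regression: posterior mean at test inputs Xs
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Each column of X holds one engine's forecasts; y is the observed speed
rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 3, 40))
f1 = truth + 0.5 + 0.05 * rng.standard_normal(40)  # biased, noisy engine
f2 = 0.8 * truth + 0.1                             # scaled engine
X, y = np.column_stack([f1, f2]), truth
combined = gpr_fit_predict(X[:30], y[:30], X[30:])
print(round(float(np.abs(combined - truth[30:]).mean()), 3))
```

Because the combination is learned nonlinearly, the GPR can correct each engine's systematic bias, which a fixed linear weighting cannot do in general; a full implementation would also return the posterior variance for the probabilistic forecast.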

  13. Essays on robust asset pricing

    NARCIS (Netherlands)

    Horváth, Ferenc

    2017-01-01

    The central concept of this doctoral dissertation is robustness. I analyze how model and parameter uncertainty affect financial decisions of investors and fund managers, and what their equilibrium consequences are. Chapter 1 gives an overview of the most important concepts and methodologies used in

  14. Robust visual hashing via ICA

    International Nuclear Information System (INIS)

    Fournel, Thierry; Coltuc, Daniela

    2010-01-01

    Designed to maximize information transmission in the presence of noise, independent component analysis (ICA) can appear in certain circumstances as a statistics-based tool for robust visual hashing. Several ICA-based scenarios can attempt to reach this goal; a first such scenario is considered here.

  15. Robustness of raw quantum tomography

    Science.gov (United States)

    Asorey, M.; Facchi, P.; Florio, G.; Man'ko, V. I.; Marmo, G.; Pascazio, S.; Sudarshan, E. C. G.

    2011-01-01

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  16. Robustness of raw quantum tomography

    Energy Technology Data Exchange (ETDEWEB)

    Asorey, M. [Departamento de Fisica Teorica, Facultad de Ciencias, Universidad de Zaragoza, 50009 Zaragoza (Spain); Facchi, P. [Dipartimento di Matematica, Universita di Bari, I-70125 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Florio, G. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Man'ko, V.I., E-mail: manko@lebedev.r [P.N. Lebedev Physical Institute, Leninskii Prospect 53, Moscow 119991 (Russian Federation); Marmo, G. [Dipartimento di Scienze Fisiche, Universita di Napoli 'Federico II', I-80126 Napoli (Italy); INFN, Sezione di Napoli, I-80126 Napoli (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Pascazio, S. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Sudarshan, E.C.G. [Department of Physics, University of Texas, Austin, TX 78712 (United States)

    2011-01-31

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  17. Aspects of robust linear regression

    NARCIS (Netherlands)

    Davies, P.L.

    1993-01-01

    Section 1 of the paper contains a general discussion of robustness. In Section 2 the influence function of the Hampel-Rousseeuw least median of squares estimator is derived. Linearly invariant weak metrics are constructed in Section 3. It is shown in Section 4 that $S$-estimators satisfy an exact

  18. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  19. Robustness Regions for Dichotomous Decisions.

    Science.gov (United States)

    Vijn, Pieter; Molenaar, Ivo W.

    1981-01-01

    In the case of dichotomous decisions, the total set of all assumptions/specifications for which the decision would have been the same is the robustness region. Inspection of this (data-dependent) region is a form of sensitivity analysis which may lead to improved decision making. (Author/BW)

  20. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, althou...

  1. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
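The unbounded influence of outliers on the classical mean estimator is easy to demonstrate numerically. The trimmed mean below is only a crude stand-in for the pseudodistance-based estimators of the paper, which bound outlier influence in a smoother, affine-equivariant way; the return series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.001, 0.01, size=500)   # "clean" daily returns
returns[:5] = -0.5                            # a few crash-like outliers

def trimmed_mean(x, frac=0.05):
    # Discard the most extreme observations before averaging, so a handful
    # of gross outliers cannot drag the location estimate arbitrarily far
    lo, hi = np.quantile(x, [frac, 1 - frac])
    return x[(x >= lo) & (x <= hi)].mean()

print(abs(returns.mean() - 0.001))         # pulled away by the outliers
print(abs(trimmed_mean(returns) - 0.001))  # stays close to the true mean
```

Feeding the classical mean into a mean-variance optimizer propagates this distortion into the portfolio weights, which is exactly the failure mode the robust estimators address.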

  2. Facial Symmetry in Robust Anthropometrics

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 57, č. 3 (2012), s. 691-698 ISSN 0022-1198 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : forensic science * anthropology * robust image analysis * correlation analysis * multivariate data * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.244, year: 2012

  3. Sparse and Robust Factor Modelling

    NARCIS (Netherlands)

    C. Croux (Christophe); P. Exterkate (Peter)

    2011-01-01

    Factor construction methods are widely used to summarize a large panel of variables by means of a relatively small number of representative factors. We propose a novel factor construction procedure that enjoys the properties of robustness to outliers and of sparsity; that is, having

  4. Robust distributed cognitive relay beamforming

    KAUST Repository

    Pandarakkottilil, Ubaidulla; Aissa, Sonia

    2012-01-01

    design takes into account a parameter of the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can

  5. Approximability of Robust Network Design

    NARCIS (Netherlands)

    Olver, N.K.; Shepherd, F.B.

    2014-01-01

    We consider robust (undirected) network design (RND) problems where the set of feasible demands may be given by an arbitrary convex body. This model, introduced by Ben-Ameur and Kerivin [Ben-Ameur W, Kerivin H (2003) New economical virtual private networks. Comm. ACM 46(6):69-73], generalizes the

  6. Damage detection using piezoelectric transducers and the Lamb wave approach: II. Robust and quantitative decision making

    International Nuclear Information System (INIS)

    Lu, Y; Wang, X; Tang, J; Ding, Y

    2008-01-01

    The propagation of Lamb waves generated by piezoelectric transducers in a one-dimensional structure has been studied comprehensively in part I of this two-paper series. Using the information embedded in the propagating waveforms, we expect to make a decision on whether damage has occurred; however, environmental and operational variances inevitably complicate the problem. To better detect the damage under these variances, we present in this paper a robust and quantitative decision-making methodology involving advanced signal processing and statistical analysis. In order to statistically evaluate the features in Lamb wave propagation in the presence of noise, we collect multiple time series (baseline signals) from the undamaged beam. A combination of the improved adaptive harmonic wavelet transform (AHWT) and principal component analysis (PCA) is performed on the baseline signals to highlight the critical features of Lamb wave propagation in the undamaged structure. The detection of damage is facilitated by comparing the features of the test signal collected from the test structure (damaged or undamaged) with the features of the baseline signals. In this process, we employ Hotelling's T² statistical analysis to first purify the baseline dataset and then to quantify the deviation of the test data vector from the baseline dataset. Through experimental and numerical studies, we systematically investigate the proposed methodology in terms of detectability (the capability of detecting damage), sensitivity (with respect to damage severity and excitation frequency) and robustness against noise. The parametric studies also validate, from the signal processing standpoint, the guidelines for Lamb-wave-based damage detection developed in part I.
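The deviation measure used above can be sketched in a few lines. The following is a minimal illustration of Hotelling's T² (up to scaling conventions) for two-dimensional feature vectors; the paper's actual features come from AHWT/PCA processing, which is not reproduced, and the baseline data here are invented:

```python
import statistics

def hotelling_t2(baseline, x):
    """T^2 distance of test vector x from a set of 2-D baseline feature
    vectors: (x - mean)^T S^{-1} (x - mean), with S the sample covariance."""
    n = len(baseline)
    m = [statistics.fmean(v[i] for v in baseline) for i in (0, 1)]
    # Sample covariance matrix entries (2x2, unbiased).
    s00 = sum((v[0] - m[0]) ** 2 for v in baseline) / (n - 1)
    s11 = sum((v[1] - m[1]) ** 2 for v in baseline) / (n - 1)
    s01 = sum((v[0] - m[0]) * (v[1] - m[1]) for v in baseline) / (n - 1)
    det = s00 * s11 - s01 * s01
    d = [x[0] - m[0], x[1] - m[1]]
    # Explicit 2x2 inverse applied to the deviation vector.
    return (d[0] * (s11 * d[0] - s01 * d[1])
            + d[1] * (-s01 * d[0] + s00 * d[1])) / det

baseline = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.1), (1.1, 2.0), (1.0, 1.8)]
print(hotelling_t2(baseline, (1.05, 1.95)))  # near the baseline cloud: small
print(hotelling_t2(baseline, (3.0, 0.5)))    # far from the baseline: large
```

A test signal whose T² exceeds a threshold calibrated on the purified baseline set is flagged as coming from a damaged structure.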

  7. Robust Optimal Design of Quantum Electronic Devices

    Directory of Open Access Journals (Sweden)

    Ociel Morales

    2018-01-01

    Full Text Available We consider the optimal design of a sequence of quantum barriers, in order to manufacture an electronic device at the nanoscale such that the dependence of its transmission coefficient on the bias voltage is linear. The technique presented here is easily adaptable to other response characteristics. There are two distinguishing features of our approach. First, the transmission coefficient is determined using a semiclassical approximation, so we can explicitly compute the gradient of the objective function. Second, in contrast with earlier treatments, manufacturing uncertainties are incorporated into the model through random variables; the optimal design problem is formulated in a probabilistic setting and then solved using a stochastic collocation method. As a measure of robustness, a weighted sum of the expectation and the variance of a least-squares performance metric is considered. Several simulations illustrate the proposed technique, which achieves an accuracy improvement of over 69% with respect to brute-force, Monte Carlo-based methods.
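The robustness measure described, a weighted sum of the expectation and variance of a least-squares metric, can be sketched with plain Monte Carlo sampling in place of the paper's stochastic collocation; the performance function and all parameters below are invented for illustration:

```python
import random
import statistics

def performance(d, xi):
    # Hypothetical least-squares mismatch between the perturbed device
    # response (d + xi) * v and the linear target v, over a few bias voltages.
    return sum(((d + xi) * v - v) ** 2 for v in (0.2, 0.4, 0.6, 0.8, 1.0))

def robust_objective(d, weight=1.0, n=10_000, sigma=0.05, seed=0):
    """Monte Carlo estimate of E[J] + weight * Var[J] under Gaussian
    manufacturing perturbations xi (a stand-in for stochastic collocation)."""
    rng = random.Random(seed)
    samples = [performance(d, rng.gauss(0.0, sigma)) for _ in range(n)]
    return statistics.fmean(samples) + weight * statistics.pvariance(samples)

# The nominal design d = 1 matches the linear target exactly, so its robust
# objective is small; a biased design pays in both mean and variance.
print(robust_objective(1.0), robust_objective(1.3))
```

Minimizing this combined objective over the design variable yields designs whose performance degrades gracefully under fabrication noise, not just at the nominal parameters.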

  8. Correlation Between Squamous Cell Carcinoma Antigen Level and the Clinicopathological Features of Early-Stage Cervical Squamous Cell Carcinoma and the Predictive Value of Squamous Cell Carcinoma Antigen Combined With Computed Tomography Scan for Lymph Node Metastasis.

    Science.gov (United States)

    Xu, Dianbo; Wang, Danbo; Wang, Shuo; Tian, Ye; Long, Zaiqiu; Ren, Xuemei

    2017-11-01

    The aim of this study was to analyze the relationship between serum squamous cell carcinoma antigen (SCC-Ag) and the clinicopathological features of cervical squamous cell carcinoma. The value of SCC-Ag and computed tomography (CT) for predicting lymph node metastasis (LNM) was evaluated. A total of 197 patients with International Federation of Gynecology and Obstetrics stages IB to IIA cervical squamous cell carcinoma who underwent radical surgery were enrolled in this study. The SCC-Ag was measured, and CT scans were used for the preoperative assessment of lymph node status. Increased preoperative SCC-Ag levels were associated with International Federation of Gynecology and Obstetrics stage (P = 0.001), tumor diameter of greater than 4 cm (P 4 cm (P = 0.001, OR = 4.019), and greater than one half stromal infiltration (P = 0.002, OR = 3.680) as independent factors affecting SCC-Ag greater than or equal to 2.35 ng/mL. In the analysis of LNM, SCC-Ag greater than or equal to 2.35 ng/mL (P < 0.001, OR = 4.825) was an independent factor for LNM. The area under the receiver operator characteristic curve (AUC) of SCC-Ag was 0.763 for all patients, and 0.805 and 0.530 for IB1 + IIA1 and IB2 + IIA2 patients, respectively; 2.35 ng/mL was the optimum cutoff for predicting LNM. The combination of CT and SCC-Ag showed a sensitivity and specificity of 82.9% and 66% in parallel tests, and 29.8% and 93.3% in serial tests, respectively. The increase of SCC-Ag level in the preoperative phase means that there may be a pathological risk factor for postoperative outcomes. The SCC-Ag (≥2.35 ng/mL) may be a useful marker for predicting LNM of cervical cancer, especially in stages IB1 and IIA1, and the combination of SCC-Ag and CT may help identify patients with LNM to provide them with the most appropriate therapeutic approach.
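The parallel/serial combination of two diagnostic tests mentioned above follows standard rules, shown here under an independence assumption for illustration; the study's figures come from patient data, and the stand-alone accuracies below are invented, not taken from the paper:

```python
def parallel(se1, sp1, se2, sp2):
    """Positive if EITHER test is positive: sensitivity rises at the cost of
    specificity (assumes the two tests are independent)."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    """Positive only if BOTH tests are positive: specificity rises at the
    cost of sensitivity."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

# Invented stand-alone (sensitivity, specificity) pairs for the two modalities.
se_ct, sp_ct, se_scc, sp_scc = 0.55, 0.85, 0.62, 0.78
print(parallel(se_ct, sp_ct, se_scc, sp_scc))  # higher sensitivity, lower specificity
print(serial(se_ct, sp_ct, se_scc, sp_scc))    # lower sensitivity, higher specificity
```

This is the trade-off the abstract reports: the parallel combination of CT and SCC-Ag favors sensitivity, while the serial combination favors specificity.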

  9. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined 18F-FDG PET/MR imaging

    International Nuclear Information System (INIS)

    Hyafil, Fabien; Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias; Sepp, Dominik; Hoehn, Sabine; Poppert, Holger; Bayer-Karpinska, Anna; Boeckh-Behrens, Tobias; Hacker, Marcus; Nekolla, Stephan G.; Rominger, Axel; Dichgans, Martin; Schwaiger, Markus

    2016-01-01

    The aim of this study was to investigate, in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques, the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05), in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral to the stroke, suggesting a causal

  11. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    Full Text Available An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called "robust-BD", for the class of "general linear models". Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, lending support to the utility of the proposed robust-BD Wald-type test.

  12. Robustness of public choice models of voting behavior

    Directory of Open Access Journals (Sweden)

    Mihai UNGUREANU

    2013-05-01

    Full Text Available Modern economics modeling practice involves highly unrealistic assumptions. Since testing such models is not always an easy enterprise, researchers face the problem of determining whether a result depends on the unrealistic details of the model. A solution to this problem is robustness analysis. In its classical form, robustness analysis is a non-empirical method of confirmation: it raises our trust in a given result by deriving that result from several different models. In this paper I argue that robustness analysis can also be thought of as a method for diagnosing post-empirical failure. This form of robustness analysis involves assigning blame for an empirical failure to a certain part of the model. Starting from this notion of robustness, I analyze a case of empirical failure from public choice theory, the economic approach to politics. Using the fundamental methodological principles of neoclassical economics, the first model of voting behavior implied that almost no one would vote. This was clearly an empirical failure. Public choice scholars faced the choice of either restricting the domain of their discipline or giving up some of their neoclassical methodological features. The second option was chosen, and several different models of voting behavior were built. I treat these models as a case for performing robustness analysis and determine which assumption of the original model is responsible for the empirical failure.

  13. Universal features of multiplicity distributions

    International Nuclear Information System (INIS)

    Balantekin, A.B.; Washington Univ., Seattle, WA

    1994-01-01

    Universal features of multiplicity distributions are studied and combinants, certain linear combinations of ratios of probabilities, are introduced. It is argued that they can be a useful tool in analyzing multiplicity distributions of hadrons emitted in high-energy collisions and the large-scale structure of galaxy distributions.

  14. Water resources planning under climate change: Assessing the robustness of real options for the Blue Nile

    Science.gov (United States)

    Jeuland, Marc; Whittington, Dale

    2014-03-01

    This article presents a methodology for planning new water resources infrastructure investments and operating strategies in a world of climate change uncertainty. It combines a real options (e.g., options to defer, expand, contract, abandon, switch use, or otherwise alter a capital investment) approach with principles drawn from robust decision-making (RDM). RDM comprises a class of methods that are used to identify investment strategies that perform relatively well, compared to the alternatives, across a wide range of plausible future scenarios. Our proposed framework relies on a simulation model that includes linkages between climate change and system hydrology, combined with sensitivity analyses that explore how economic outcomes of investments in new dams vary with forecasts of changing runoff and other uncertainties. To demonstrate the framework, we consider the case of new multipurpose dams along the Blue Nile in Ethiopia. We model flexibility in design and operating decisions—the selection, sizing, and sequencing of new dams, and reservoir operating rules. Results show that there is no single investment plan that performs best across a range of plausible future runoff conditions. The decision-analytic framework is then used to identify dam configurations that are both robust to poor outcomes and sufficiently flexible to capture high upside benefits if favorable future climate and hydrological conditions should arise. The approach could be extended to explore design and operating features of development and adaptation projects other than dams.
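One common RDM-style selection rule consistent with the approach described, choosing a configuration that is "robust to poor outcomes" across plausible futures, is minimax regret; the configurations, scenarios, and payoffs below are invented for illustration:

```python
# Hypothetical net benefits (arbitrary units) of three dam configurations
# under three runoff scenarios; all numbers are invented.
payoffs = {
    "small_dam":  {"dry": 40, "normal": 55, "wet": 60},
    "large_dam":  {"dry": 10, "normal": 70, "wet": 95},
    "staged_dam": {"dry": 35, "normal": 65, "wet": 80},
}

def minimax_regret(payoffs):
    """Return the option whose worst-case regret (shortfall relative to the
    best option in each scenario) is smallest, plus the full regret table."""
    scenarios = list(next(iter(payoffs.values())))
    best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
    worst = {opt: max(best[s] - p[s] for s in scenarios)
             for opt, p in payoffs.items()}
    return min(worst, key=worst.get), worst

choice, regrets = minimax_regret(payoffs)
print(choice, regrets)
```

In this toy table no single design is best in every scenario, but the staged (flexible) configuration has the lowest worst-case regret, echoing the article's finding that flexibility hedges poor outcomes while keeping upside.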

  15. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  16. Robust surface roughness indices and morphological interpretation

    Science.gov (United States)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences, Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited for the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD by means of a pixel-centered perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), permits the creation of custom roughness indices capable of outlining different aspects of surface roughness (Grohmann et al., 2011; Smith, 2015). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented. At the same time, challenges and future developments related to the proposed indices will be outlined. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale Analysis of Topographic Surface Roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1220-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers & Geosciences, 22 (1), pp. 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery
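The robustness of a MAD-type index to spikes can be shown in one dimension: compare the median of absolute increments at a given lag with a classical variogram-style mean of squared increments. This is a simplified sketch of the idea, not the authors' kernel-based 2-D implementation, and the elevation profiles are invented:

```python
from statistics import median

def mad_roughness(values, lag=1):
    """Robust roughness at a given lag: the median of absolute differences
    between points `lag` apart (a 1-D sketch of the MAD variogram idea)."""
    diffs = [abs(values[i + lag] - values[i]) for i in range(len(values) - lag)]
    return median(diffs)

def classic_roughness(values, lag=1):
    # Variogram-style estimator: mean of squared increments (outlier-sensitive).
    diffs = [(values[i + lag] - values[i]) ** 2 for i in range(len(values) - lag)]
    return sum(diffs) / len(diffs)

smooth = [0.0, 0.1, 0.2, 0.1, 0.0, 0.1, 0.2, 0.1, 0.0]
spiked = smooth[:4] + [5.0] + smooth[5:]   # one elevation spike (e.g. noise)

# The MAD index barely reacts to the single spike; the squared-difference
# estimator is strongly inflated by it.
print(mad_roughness(smooth), mad_roughness(spiked))
print(classic_roughness(smooth), classic_roughness(spiked))
```

This stability under isolated artifacts is why MAD-based indices suit noisy high-resolution digital terrain models.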

  17. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet, users can easily share video files. This makes the unauthorized use of digital media possible, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can make these systems more effective by embedding secret data directly into the video stream. This introduces minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can include copyright data, access control information, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.

  18. Robust Decentralized Formation Flight Control

    Directory of Open Access Journals (Sweden)

    Zhao Weihua

    2011-01-01

    Full Text Available Motivated by the idea of multiplexed model predictive control (MMPC), this paper introduces a new framework for unmanned aerial vehicle (UAV) formation flight and coordination. Formulated using the MMPC approach, the whole centralized formation flight system is considered as a linear periodic system with the control inputs of each UAV subsystem as its periodic inputs. Divided into decentralized subsystems, the whole formation flight system is guaranteed to be stable if proper terminal costs and terminal constraints are added to each decentralized MPC formulation of the UAV subsystem. The decentralized robust MPC formulation for each UAV subsystem with bounded input disturbances and model uncertainties is also presented. Furthermore, an obstacle avoidance control scheme for obstacles of any shape and size, including those not known a priori, is integrated under the unified MPC framework. The results from simulations demonstrate that the proposed framework can successfully achieve robust collision-free formation flights.

  19. Uyghur face recognition method combining 2DDCT with POEM

    Science.gov (United States)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

    In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination changes and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into an 8×8 block matrix, and the blocked images were transformed into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the non-sensitive medium-frequency and non-high-frequency parts, reducing the feature dimensions needed for the Uyghur face images and hence the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM features; fourthly, the POEM histograms were concatenated as the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
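The 2DDCT step can be illustrated with a naive DCT-II on an 8×8 block, the transform conventionally meant by 2DDCT; this sketch omits the paper's subsequent coefficient selection and POEM stages, and the input block is invented:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block; O(n^4), fine for 8x8 and meant
    only to show where the frequency-domain coefficients come from."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant 8x8 block concentrates all of its energy in the DC coefficient,
# which is why low-frequency coefficients carry most of the information and
# the rest can be discarded during compression.
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
print(coeffs[0][0])   # DC term: n * value = 8 * 10 = 80
print(coeffs[3][5])   # an AC term: essentially zero
```

Keeping only a low-frequency subset of these coefficients is what reduces the feature dimension before the POEM descriptor is computed.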

  20. Inefficient but robust public leadership.

    OpenAIRE

    Matsumura, Toshihiro; Ogawa, Akira

    2014-01-01

    We investigate endogenous timing in a mixed duopoly in a differentiated product market. We find that private leadership is better than public leadership from a social welfare perspective if the private firm is domestic, regardless of the degree of product differentiation. Nevertheless, the public leadership equilibrium is risk-dominant, and it is thus robust if the degree of product differentiation is high. We also find that regardless of the degree of product differentiation, the public lead...

  1. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  2. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  3. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters may be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by operating on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
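Minimax similarity has a convenient characterization: the minimax distance between two points (over all paths, minimize the largest single hop) equals the largest edge on the path connecting them in a minimum spanning tree. A small sketch of that computation, using Prim's algorithm plus a tree walk on an invented point set:

```python
import math

def minimax_distance(points, a, b):
    """Minimax distance between points a and b, computed as the largest edge
    on the a-b path in a minimum spanning tree (MST) of the complete graph."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    # Prim's algorithm, storing the MST as a weighted adjacency list.
    in_tree, adj = {0}, {i: [] for i in range(n)}
    best = {j: d(0, j) for j in range(1, n)}
    parent = {j: 0 for j in range(1, n)}
    while len(in_tree) < n:
        j = min((k for k in best if k not in in_tree), key=best.get)
        in_tree.add(j)
        adj[j].append((parent[j], best[j]))
        adj[parent[j]].append((j, best[j]))
        for k in best:
            if k not in in_tree and d(j, k) < best[k]:
                best[k], parent[k] = d(j, k), j
    # Walk the tree from a to b, tracking the largest edge seen.
    def walk(u, prev, mx):
        if u == b:
            return mx
        for v, w in adj[u]:
            if v != prev:
                r = walk(v, u, max(mx, w))
                if r is not None:
                    return r
        return None
    return walk(a, None, 0.0)

# Two parallel chains of points: hops within a chain are 1 apart, the chains
# themselves are 5 apart, so the minimax distance stays small along a chain.
pts = [(float(i), 0.0) for i in range(5)] + [(float(i), 5.0) for i in range(5)]
print(minimax_distance(pts, 0, 4))   # ends of the same chain: 1.0
print(minimax_distance(pts, 0, 9))   # across chains: 5.0
```

The example also shows the fragility the letter addresses: a single outlier inserted between the chains would lower the cross-chain minimax distance and merge the clusters, which is what the density-based coarsening phase guards against.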

  4. Robust stochastic fuzzy possibilistic programming for environmental decision making under uncertainty

    International Nuclear Information System (INIS)

    Zhang, Xiaodong; Huang, Guo H.; Nie, Xianghui

    2009-01-01

    Nonpoint source (NPS) water pollution is one of the most serious environmental issues, especially within agricultural systems. This study proposes a robust chance-constrained fuzzy possibilistic programming (RCFPP) model for water quality management within an agricultural system, where solutions for farming area, manure/fertilizer application amount, and livestock husbandry size under different scenarios are obtained and interpreted. By improving upon the existing fuzzy possibilistic programming, fuzzy robust programming and chance-constrained programming approaches, the RCFPP can effectively reflect complex system features under uncertainty, where the implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original fuzzy constraints, the RCFPP enhances the robustness of the optimization processes and the resulting solutions. The results of the case study indicate that useful information can be obtained through the proposed RCFPP model, providing feasible decision schemes for different agricultural activities under different scenarios (combinations of different p-necessity and p_i levels). A p-necessity level represents the certainty or necessity degree of the imprecise objective function, while a p_i level denotes the probability at which constraint i will be violated. A desire for high agricultural income would decrease the certainty degree of the event that the objective is maximized, and potentially violate water management standards; willingness to accept low agricultural income runs the risk of potential system failure. The decision variables under combined p-necessity and p_i levels were useful for decision makers to justify and/or adjust the decision schemes for the agricultural activities through incorporation of their implicit knowledge. The results also suggest that

  5. On the Robustness of Poverty Predictors

    DEFF Research Database (Denmark)

    Arndt, Channing; Nhate, Virgulino; Silva, Patricia Castro Da

    Monitoring of poverty requires timely household budget data. However, such data are not available as frequently as needed for policy purposes. Recently, statistical methods have emerged to predict poverty over time by combining detailed household consumption and expenditure data with more frequent data collected from other surveys. In this paper we compare poverty predictions for Mozambique using different source data to test the robustness of the predicted poverty statistics. A critical element in this exercise of predicting poverty over time is the stability of the parameters that determine household consumption. We find that the assumption of stable consumption determinants does not hold for Mozambique during the time period examined. We also examine what drives the resulting predicted poverty statistics. The paper then considers the policy implications of these findings for Mozambique...

  6. Surface-Supported Robust 2D Lanthanide-Carboxylate Coordination Networks.

    Science.gov (United States)

    Urgel, José I; Cirera, Borja; Wang, Yang; Auwärter, Willi; Otero, Roberto; Gallego, José M; Alcamí, Manuel; Klyatskaya, Svetlana; Ruben, Mario; Martín, Fernando; Miranda, Rodolfo; Ecija, David; Barth, Johannes V

    2015-12-16

    Lanthanide-based metal-organic compounds and architectures are promising systems for sensing, heterogeneous catalysis, photoluminescence, and magnetism. Herein, the fabrication of interfacial 2D lanthanide-carboxylate networks is introduced. This study combines low- and variable-temperature scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) experiments, and density functional theory (DFT) calculations addressing their design and electronic properties. The bonding of ditopic linear linkers to Gd centers on a Cu(111) surface gives rise to extended nanoporous grids, comprising mononuclear nodes featuring eightfold lateral coordination. XPS and DFT elucidate the nature of the bond, indicating ionic characteristics, which is also manifest in appreciable thermal stability. This study introduces a new generation of robust low-dimensional metallosupramolecular systems incorporating the functionalities of the f-block elements. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Robust frameless stereotactic localization in extra-cranial radiotherapy

    International Nuclear Information System (INIS)

    Riboldi, Marco; Baroni, Guido; Spadea, Maria Francesca; Bassanini, Fabio; Tagaste, Barbara; Garibaldi, Cristina; Orecchia, Roberto; Pedotti, Antonio

    2006-01-01

    In the field of extra-cranial radiotherapy, several inaccuracies can make the application of frameless stereotactic localization techniques error-prone. When optical tracking systems based on surface fiducials are used, inter- and intra-fractional uncertainties in marker three-dimensional (3D) detection may lead to inexact tumor position estimation, resulting in erroneous patient setup. This is due to the fact that misdetection of external fiducials results in deformation effects that are poorly handled in a rigid-body approach. In this work, the performance of two frameless stereotactic localization algorithms for 3D tumor position reconstruction in extra-cranial radiotherapy has been specifically tested. Two strategies, unweighted versus weighted, for stereotactic tumor localization were examined by exploiting data coming from 46 patients treated for extra-cranial lesions. Measured isocenter displacements and rotations were combined to define isocentric procedures, featuring 6 degrees of freedom, for correcting patient alignment (isocentric positioning correction). The sensitivity of the algorithms to uncertainties in the 3D localization of fiducials was investigated by means of 184 numerical simulations. The performance of the implemented isocentric positioning correction was compared to conventional point-based registration. The isocentric positioning correction algorithm was tested on a clinical dataset of inter-fractional and intra-fractional setup errors, which was collected by means of an optical tracker on the same group of patients. The weighted strategy exhibited a lower sensitivity to fiducial localization errors in simulated misalignments than the unweighted strategy. Isocenter 3D displacements provided by the weighted strategy were consistently smaller than those featured by the unweighted strategy. The peak decreases in median and quartile values of isocenter 3D displacements were 1.4 and 2.7 mm, respectively. Concerning clinical data, the

  8. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q(sub 2)N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrodinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.

  9. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as it is closely related to Working group 3 (Robustness of systems) of the COST E55 project. Finally, an example of a robustness...... evaluation of a widespan timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45m. Reliability analysis of the main members and the system is conducted and based on this a robustness analysis is performed....

  10. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images

    Science.gov (United States)

    Gong, Maoguo; Yang, Hailun; Zhang, Puzhao

    2017-07-01

    Ternary change detection aims to detect changes and group the changes into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. Firstly, a sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. Then the learned features are clustered into three classes, which are taken as the pseudo labels for training a CNN model as change feature classifier. The reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Having training samples and the corresponding pseudo labels, the CNN model can be trained by using back propagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike the traditional methods, the proposed framework integrates the merits of sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
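
The clustering-into-three-classes step described above can be sketched with a minimal k-means in NumPy. This is an illustrative reconstruction, not the authors' implementation: the farthest-point initialisation and the synthetic "feature" blobs are assumptions standing in for learned autoencoder features.

```python
import numpy as np

def kmeans_pseudo_labels(features, k=3, iters=50):
    """Cluster feature vectors into k classes; the cluster ids can serve
    as pseudo labels (e.g. unchanged / positive / negative change)."""
    # farthest-point initialisation: deterministic and well spread
    centers = [features[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign every sample to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers, keeping the old one if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# three well-separated blobs stand in for learned difference features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(20, 4)) for c in (-5.0, 0.0, 5.0)])
labels = kmeans_pseudo_labels(X, k=3)
```

The resulting cluster ids would then be used as pseudo labels to train the CNN classifier.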

  11. Soliton robustness in optical fibers

    International Nuclear Information System (INIS)

    Menyuk, C.R.

    1993-01-01

    Simulations and experiments indicate that solitons in optical fibers are robust in the presence of Hamiltonian deformations such as higher-order dispersion and birefringence but are destroyed in the presence of non-Hamiltonian deformations such as attenuation and the Raman effect. Two hypotheses are introduced that generalize these observations and give a recipe for when deformations will be Hamiltonian. Concepts from nonlinear dynamics are used to make these two hypotheses plausible. Soliton stabilization with frequency filtering is also briefly discussed from this point of view

  12. Robust and Sparse Factor Modelling

    DEFF Research Database (Denmark)

    Croux, Christophe; Exterkate, Peter

    Factor construction methods are widely used to summarize a large panel of variables by means of a relatively small number of representative factors. We propose a novel factor construction procedure that enjoys the properties of robustness to outliers and of sparsity; that is, having relatively few...... nonzero factor loadings. Compared to the traditional factor construction method, we find that this procedure leads to a favorable forecasting performance in the presence of outliers and to better interpretable factors. We investigate the performance of the method in a Monte Carlo experiment...

  13. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; čizmar, D.

    2010-01-01

    The present paper outlines results from working group 3 (WG3) in the EU COST Action E55 – ‘Modelling of the performance of timber structures’. The objectives of the project are related to the three main research activities: the identification and modelling of relevant load and environmental...... exposure scenarios, the improvement of knowledge concerning the behaviour of timber structural elements and the development of a generic framework for the assessment of the life-cycle vulnerability and robustness of timber structures....

  14. Sustainable Resilient, Robust & Resplendent Enterprises

    DEFF Research Database (Denmark)

    Edgeman, Rick

    to their impact. Resplendent enterprises are introduced with resplendence referring not to some sort of public or private façade, but instead refers to organizations marked by dual brilliance and nobility of strategy, governance and comportment that yields superior and sustainable triple bottom line performance....... Herein resilience, robustness, and resplendence (R3) are integrated with sustainable enterprise excellence (Edgeman and Eskildsen, 2013) or SEE and social-ecological innovation (Eskildsen and Edgeman, 2012) to aid progress of a firm toward producing continuously relevant performance that proceed from...

  15. Adaptive robust control of the EBR-II reactor

    International Nuclear Information System (INIS)

    Power, M.A.; Edwards, R.M.

    1996-01-01

    Simulation results are presented for an adaptive H ∞ controller, a fixed H ∞ controller, and a classical controller. The controllers are applied to a simulation of the Experimental Breeder Reactor II primary system. The controllers are tested for the best robustness and performance by step-changing the demanded reactor power and by varying the combined uncertainty in initial reactor power and control rod worth. The adaptive H ∞ controller shows the fastest settling time, fastest rise time and smallest peak overshoot when compared to the fixed H ∞ and classical controllers. This makes for a superior and more robust controller

  16. Efficient and robust gradient enhanced Kriging emulators.

    Energy Technology Data Exchange (ETDEWEB)

    Dalbey, Keith R.

    2013-08-01

    "Naive" or straightforward Kriging implementations can often perform poorly in practice. The relevant features of the robustly accurate and efficient Kriging and Gradient Enhanced Kriging (GEK) implementations in the DAKOTA software package are detailed herein. The principal contribution is a novel, effective, and efficient approach to handle ill-conditioning of GEK's "correlation" matrix, RÑ, based on a pivoted Cholesky factorization of Kriging's (not GEK's) correlation matrix, R, which is a small sub-matrix within GEK's RÑ matrix. The approach discards sample points/equations that contribute the least "new" information to RÑ. Since these points contain the least new information, they are the ones which when discarded are both the easiest to predict and provide maximum improvement of RÑ's conditioning. Prior to this work, handling ill-conditioned correlation matrices was a major, perhaps the principal, unsolved challenge necessary for robust and efficient GEK emulators. Numerical results demonstrate that GEK predictions can be significantly more accurate when GEK is allowed to discard points by the presented method. Numerical results also indicate that GEK can be used to break the curse of dimensionality by exploiting inexpensive derivatives (such as those provided by automatic differentiation or adjoint techniques), smoothness in the response being modeled, and adaptive sampling. Development of a suitable adaptive sampling algorithm was beyond the scope of this work; instead adaptive sampling was approximated by omitting the cost of samples discarded by the presented pivoted Cholesky approach.
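
The pivoted-Cholesky ranking idea can be sketched as follows. This is an illustrative NumPy reconstruction, not the DAKOTA implementation; the Gaussian-kernel example points are an assumption. Points are brought forward greedily by residual variance, so a near-duplicate point ends up last and is the natural candidate to discard.

```python
import numpy as np

def pivoted_cholesky_order(R):
    """Greedily pivoted Cholesky of a symmetric positive-definite
    correlation matrix R. Returns the pivot order (most informative
    points first) and each point's residual variance when chosen."""
    n = R.shape[0]
    piv = np.arange(n)
    L = np.zeros_like(R, dtype=float)
    d = np.diag(R).astype(float).copy()
    gains = np.zeros(n)
    for k in range(n):
        # pivot: bring forward the point with the largest residual variance
        j = k + int(np.argmax(d[piv[k:]]))
        piv[[k, j]] = piv[[j, k]]
        p = piv[k]
        gains[k] = d[p]
        L[p, k] = np.sqrt(d[p])
        for i in piv[k + 1:]:
            L[i, k] = (R[i, p] - L[i, :k] @ L[p, :k]) / L[p, k]
            d[i] -= L[i, k] ** 2
    return piv, gains

# Gaussian-kernel correlation of four 1-D sample points; the last point
# nearly duplicates x = 2.0 and should be ranked least informative
x = np.array([0.0, 1.0, 2.0, 2.001])
R = np.exp(-(x[:, None] - x[None, :]) ** 2)
order, gains = pivoted_cholesky_order(R)
```

The "gains" are non-increasing, and the near-duplicate point contributes almost no new information, mirroring the discard criterion described in the abstract.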

  17. Robust Instrumentation [Water treatment for power plant]; Robust Instrumentering

    Energy Technology Data Exchange (ETDEWEB)

    Wik, Anders [Vattenfall Utveckling AB, Stockholm (Sweden)

    2003-08-01

    Cementa Slite Power Station is a heat recovery steam generator (HRSG) with moderate steam data; 3.0 MPa and 420 deg C. The heat is recovered from Cementa, a cement industry, without any usage of auxiliary fuel. The power station commenced operation in 2001. The layout of the plant is unusual, there are none similar in Sweden and very few world-wide, so the operational experiences are limited. In connection with the commissioning of the power plant an R and D project was identified with the objective to minimise the manpower needed for chemistry management of the plant. The lean chemistry management is based on robust instrumentation and a chemical-free water treatment plant. The concept of robust instrumentation consists of the following components: choice of on-line instrumentation with a minimum of O and M and a chemical-free water treatment. The parameters are specific conductivity, cation conductivity, oxygen and pH. In addition to that, two fairly new on-line instruments were included; corrosion monitors and differential pH calculated from specific and cation conductivity. The chemical-free water treatment plant consists of softening, reverse osmosis and electro-deionisation. The operational experience shows that the cycle chemistry is not within the guidelines due to major problems with the operation of the power plant. These problems have made it impossible to reach steady state and have thereby made it not viable to fully verify and validate the concept of robust instrumentation. From readings on the panel of the on-line analysers some conclusions may be drawn, e.g. the differential pH measurements have fulfilled the expectations. The other on-line analysers have been working satisfactorily apart from contamination with turbine oil, which has been noticed at least twice. The corrosion monitors seem to be working but the lack of trend curves from the mainframe computer system makes it hard to draw any clear conclusions. The chemical-free water treatment has met all

  18. Perspective: Evolution and detection of genetic robustness

    NARCIS (Netherlands)

    Visser, de J.A.G.M.; Hermisson, J.; Wagner, G.P.; Ancel Meyers, L.; Bagheri-Chaichian, H.; Blanchard, J.L.; Chao, L.; Cheverud, J.M.; Elena, S.F.; Fontana, W.; Gibson, G.; Hansen, T.F.; Krakauer, D.; Lewontin, R.C.; Ofria, C.; Rice, S.H.; Dassow, von G.; Wagner, A.; Whitlock, M.C.

    2003-01-01

    Robustness is the invariance of phenotypes in the face of perturbation. The robustness of phenotypes appears at various levels of biological organization, including gene expression, protein folding, metabolic flux, physiological homeostasis, development, and even organismal fitness. The mechanisms

  19. Robust lyapunov controller for uncertain systems

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Elmetennani, Shahrazed

    2017-01-01

    Various examples of systems and methods are provided for Lyapunov control for uncertain systems. In one example, a system includes a process plant and a robust Lyapunov controller configured to control an input of the process plant. The robust

  20. Robust distributed cognitive relay beamforming

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-05-01

    In this paper, we present a distributed relay beamformer design for a cognitive radio network in which a cognitive (or secondary) transmit node communicates with a secondary receive node assisted by a set of cognitive non-regenerative relays. The secondary nodes share the spectrum with a licensed primary user (PU) node, and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. The proposed robust cognitive relay beamformer design seeks to minimize the total relay transmit power while ensuring that the transceiver signal-to-interference-plus-noise ratio and PU interference constraints are satisfied. The proposed design takes into account a parameter of the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can be reformulated as a tractable convex optimization problem that can be solved efficiently. Numerical results are provided and illustrate the performance of the proposed designs for different network operating conditions and parameters. © 2012 IEEE.

  1. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Background: The identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results: In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion: We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
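
A feature ranking derived from an estimation of distribution algorithm can be sketched with a univariate-marginal EDA (UMDA). This is a hedged toy sketch, not the paper's method: the scoring function, population sizes, and smoothing constants are all assumptions for illustration.

```python
import numpy as np

def umda_feature_ranking(score, n_features, pop=60, elite=15, gens=30, seed=0):
    """Univariate EDA: evolve binary feature masks, re-estimate per-feature
    inclusion probabilities from the elite, and return those probabilities
    as a feature ranking."""
    rng = np.random.default_rng(seed)
    p = np.full(n_features, 0.5)
    for _ in range(gens):
        masks = (rng.random((pop, n_features)) < p).astype(int)
        scores = np.array([score(m) for m in masks])
        best = masks[np.argsort(scores)[-elite:]]   # highest-scoring masks
        p = 0.5 * p + 0.5 * best.mean(axis=0)       # smoothed marginal update
        p = p.clip(0.05, 0.95)                      # keep some diversity
    return p

# toy objective: features 0 and 3 are informative, the rest add noise
def score(mask):
    return 2.0 * mask[0] + 2.0 * mask[3] - 0.5 * mask[1:3].sum() - 0.5 * mask[4:].sum()

rank = umda_feature_ranking(score, n_features=6)
# features 0 and 3 should end with the highest inclusion probabilities
```

In the paper's setting the score would come from a classifier evaluated on a feature subset; the final marginals give the ranking used to iteratively discard features.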

  2. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Xiaoming Xi

    2013-07-01

    Retinal identification based on the retinal vasculature provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retinal identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, some shortcomings, like the difficulty of feature extraction and mismatching, exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  3. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
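
The relevance-then-redundancy flow of streaming feature selection can be sketched in a few lines. This is a deliberately simplified correlation-based stand-in, not the paper's OSFS algorithm (which uses conditional independence tests); the thresholds and synthetic stream are assumptions.

```python
import numpy as np

def stream_select(y, feature_stream, rel_thresh=0.3, red_thresh=0.95):
    """Toy online streaming feature selection: as each feature arrives,
    keep it if it is relevant to the target y and not (nearly) redundant
    with an already-selected feature."""
    selected = []
    for idx, f in feature_stream:
        rel = abs(np.corrcoef(f, y)[0, 1])
        if rel < rel_thresh:
            continue                       # weakly relevant: discard
        if any(abs(np.corrcoef(f, g)[0, 1]) > red_thresh for _, g in selected):
            continue                       # redundant with a kept feature
        selected.append((idx, f))
    return [idx for idx, _ in selected]

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = x1 + 0.5 * x2 + 0.1 * rng.normal(size=200)
stream = [(0, x1),
          (1, x1 + 1e-3 * rng.normal(size=200)),  # near-duplicate of x1
          (2, rng.normal(size=200)),              # irrelevant noise
          (3, x2)]
kept = stream_select(y, stream)
```

Only the first copy of the informative feature and the second independent informative feature survive; the near-duplicate and the noise feature are dropped on arrival, without ever holding the full feature set.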

  4. Robust adaptive synchronization of general dynamical networks ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 86, Issue 6. A robust adaptive synchronization scheme for these general complex networks with multiple delays and uncertainties is established by employing the robust adaptive control principle and the Lyapunov stability theory. We choose ...

  5. Robust portfolio selection under norm uncertainty

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2016-06-01

    In this paper, we consider the robust portfolio selection problem with data uncertainty described by the $(p,w)$-norm in the objective function. We show that the robust formulation of this problem is equivalent to a linear optimization problem. Moreover, we present some numerical results concerning our robust portfolio selection problem.

  6. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse

  7. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Contents: Introduction and Synopsis; Introduction; Synopsis; Preliminaries; Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems; Robust Estimation of Location and Regression; Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems; Asymptotic Representations for L-Estimators

  8. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    Science.gov (United States)

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose value for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problem. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.
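
The LBP baseline mentioned above is easy to make concrete. The sketch below is the classical 8-neighbour LBP descriptor in NumPy, shown only as background for the comparison; it is not the authors' code, and the fixed neighbour ordering is an assumption.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets a
    byte whose bits mark which neighbours are >= the centre pixel."""
    # eight neighbour offsets in a fixed clockwise order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offs):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: the texture descriptor fed to a
    classifier."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

img = np.zeros((5, 5))          # a flat patch: every neighbour ties the centre
hist = lbp_histogram(img)
```

For a flat patch every neighbour comparison succeeds, so all codes are 255 and the histogram has all its mass in that bin.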

  9. Robust Model Predictive Control of a Wind Turbine

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory...... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally...... and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition...

  10. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm in which the reduction factors are allowed to vary with time is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
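
The modified algorithm with time-varying reduction factors can be sketched as follows. This is an illustrative normalised Widrow-Hoff (alpha-LMS) loop, not the paper's exact algorithm; the gain schedule, patterns, and noise bound are assumptions chosen for the demonstration.

```python
import numpy as np

def lms_train(patterns, targets, noise, gains):
    """Widrow-Hoff (alpha-LMS) updates with time-varying reduction
    factors; decreasing gains let the weights settle despite bounded
    measurement noise."""
    w = np.zeros(patterns.shape[1])
    for t, mu in enumerate(gains):
        x = patterns[t % len(patterns)]
        d = targets[t % len(targets)] + noise[t]   # noisy desired response
        w += mu * (d - w @ x) * x / (x @ x)        # normalised LMS step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                  # cyclically presented patterns
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
T = 4000
noise = 0.05 * (2.0 * rng.random(T) - 1.0)   # bounded measurement noise
gains = 0.5 / (1.0 + 0.01 * np.arange(T))    # decreasing reduction factors
w_hat = lms_train(X, y, noise, gains)
```

With a fixed gain the weights would keep rattling at the noise level; with the decreasing reduction factors the final weight error stays small, consistent with the bounded-noise convergence result described in the abstract.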

  11. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    Science.gov (United States)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth hold great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix based on its definition to acquire the multichannel SAR processing filter. The performance of processing under a nonuniform scattering coefficient is improved by this novel method, and it is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
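
The Capon-spectrum-then-reconstruction step generalises a known adaptive beamforming recipe, which can be sketched for a plain uniform linear array. This is a hedged illustration, not the SAR-specific method: the array geometry, angle grid, sector, and loading constant are all assumptions.

```python
import numpy as np

def steering(n, theta):
    """Steering vector of an n-element half-wavelength-spaced ULA."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def reconstruct_covariance(R, n, sector):
    """Integrate the Capon spatial spectrum over an angular sector to
    rebuild an interference-plus-noise covariance matrix."""
    Rinv = np.linalg.inv(R)
    Rr = np.zeros((n, n), dtype=complex)
    for th in sector:
        a = steering(n, th)
        p = 1.0 / np.real(a.conj() @ Rinv @ a)   # Capon power estimate
        Rr += p * np.outer(a, a.conj())
    return Rr + 1e-6 * np.eye(n)                 # light diagonal loading

def mvdr_weights(R, a):
    """Minimum-variance distortionless-response weights for steering a."""
    w = np.linalg.solve(R, a)
    return w / (a.conj() @ w)

n = 8
a0 = steering(n, 0.0)                    # desired look direction
ai = steering(n, np.deg2rad(30.0))       # interference direction
R = 10 * np.outer(a0, a0.conj()) + 100 * np.outer(ai, ai.conj()) + np.eye(n)
# reconstruct only over directions away from the look direction
sector = [np.deg2rad(t) for t in range(-90, 91, 2) if abs(t) > 10]
w = mvdr_weights(reconstruct_covariance(R, n, sector), a0)
```

The resulting filter keeps unit gain on the look direction while strongly attenuating the interference direction, which is the behaviour the reconstructed-covariance filter exploits to suppress azimuth ambiguities.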

  12. Many-objective robust decision making for water allocation under climate change

    NARCIS (Netherlands)

    Yan, Dan; Ludwig, Fulco; Huang, He Qing; Werners, Saskia E.

    2017-01-01

    Water allocation is facing profound challenges due to climate change uncertainties. To identify adaptive water allocation strategies that are robust to climate change uncertainties, a model framework combining many-objective robust decision making and biophysical modeling is developed for large

  13. GFC-Robust Risk Management Under the Basel Accord Using Extreme Value Methodologies

    NARCIS (Netherlands)

    P.A. Santos (Paulo Araújo); J.A. Jiménez-Martín (Juan-Ángel); M.J. McAleer (Michael); T. Pérez-Amaral (Teodosio)

    2011-01-01

    textabstractIn McAleer et al. (2010b), a robust risk management strategy to the Global Financial Crisis (GFC) was proposed under the Basel II Accord by selecting a Value-at-Risk (VaR) forecast that combines the forecasts of different VaR models. The robust forecast was based on the median of the

  14. Prediction of Navigation Satellite Clock Bias Considering Clock's Stochastic Variation Behavior with Robust Least Square Collocation

    Directory of Open Access Journals (Sweden)

    WANG Yupu

    2016-06-01

    In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed, which takes the physical features, cyclic variation and stochastic variation behavior of the space-borne atomic clock into consideration by using a robust least square collocation (LSC) method. The proposed model first uses a quadratic polynomial model with periodic terms to fit and abstract the trend term and cyclic terms of SCB. Then, for the residual stochastic variation part and possible gross errors hidden in the SCB data, the model employs a robust LSC method to process them. The covariance function of the LSC is determined by selecting an empirical function and combining SCB prediction tests. Prediction tests using the final precise IGS SCB products show that the proposed model achieves better prediction performance. Specifically, prediction accuracy improves by 0.457 ns and 0.948 ns, respectively, and the corresponding prediction stability improves by 0.445 ns and 1.233 ns, respectively, compared with the results of the quadratic polynomial model and the grey model. In addition, the results show that the proposed covariance function corresponding to the new model is reasonable.
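
The first stage, fitting a quadratic polynomial plus periodic terms, is an ordinary least-squares problem. A minimal sketch (synthetic data and period values are assumptions, and the robust-LSC stage on the residuals is not shown):

```python
import numpy as np

def fit_clock_trend(t, cb, periods):
    """Least-squares fit of a quadratic polynomial plus periodic terms,
    the deterministic part of a satellite-clock-bias series."""
    cols = [np.ones_like(t), t, t ** 2]
    for P in periods:
        cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, cb, rcond=None)
    return coef, A @ coef

# synthetic clock-bias series: quadratic drift plus one periodic term
t = np.linspace(0.0, 2.0, 200)
true = 3.0 + 0.5 * t + 0.02 * t ** 2 + 0.1 * np.sin(2 * np.pi * t / 0.5)
coef, trend = fit_clock_trend(t, true, periods=[0.5])
```

Extrapolating `A @ coef` beyond the fit window gives the trend part of the prediction; the robust LSC stage would then model the residual stochastic part while down-weighting gross errors.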

  15. Robust Real-Time Music Transcription with a Compositional Hierarchical Model.

    Science.gov (United States)

    Pesek, Matevž; Leonardis, Aleš; Marolt, Matija

    2017-01-01

    The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers based on statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance for the multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

  16. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

This paper investigates the problem of cylinder fitting in laser-scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not consider the presence of outliers, and are not statistically robust. Yet mobile laser scanning in particular often yields incomplete data; street poles, for example, are only scanned from the road. Moreover, outliers are common: they may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components obtained by RPCA allow estimating cylinder directions more accurately, and an existing efficient circle-fitting algorithm based on robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different radii. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, in contrast, fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter-at-breast-height estimation for trees, and building and bridge information modelling.
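A non-robust baseline of the two-stage idea can be sketched as follows: the axis is taken as the dominant principal direction (plain PCA, valid when the cylinder is much longer than it is wide), and the radius comes from an algebraic (Kasa) circle fit in the plane orthogonal to the axis. The paper replaces both stages with robust variants; the function name and test data here are hypothetical:

```python
import numpy as np

def fit_cylinder_pca(pts):
    """Axis from plain PCA (dominant direction, valid when height >> radius),
    radius from an algebraic (Kasa) circle fit in the orthogonal plane."""
    X = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    axis = Vt[0]
    e = np.array([1.0, 0.0, 0.0])
    if abs(axis @ e) > 0.9:                 # pick a vector not parallel to axis
        e = np.array([0.0, 1.0, 0.0])
    u1 = np.cross(axis, e)
    u1 /= np.linalg.norm(u1)
    u2 = np.cross(axis, u1)
    P = X @ np.column_stack([u1, u2])       # project onto the orthogonal plane
    # Kasa fit: solve 2*cx*x + 2*cy*y + d = x^2 + y^2 in least squares
    A = np.column_stack([2.0 * P[:, 0], 2.0 * P[:, 1], np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    return axis, float(np.sqrt(d + cx ** 2 + cy ** 2))

# synthetic vertical cylinder: radius 1 m, height 10 m
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
z = rng.uniform(-5.0, 5.0, 2000)
pts = np.column_stack([np.cos(theta), np.sin(theta), z])
axis, radius = fit_cylinder_pca(pts)
```

On clean, complete data this baseline recovers the axis and unit radius; the paper's point is precisely that it degrades badly on incomplete or outlier-contaminated data, where the RPCA-based version does not.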

  17. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more robust color description, color calibration is performed. The framework was used in three recognition tasks: object instance recognition, object category recognition, and object spatial relationship recognition. For the object instance recognition task, we present a system that utilizes color and scale...

  18. Container Materials, Fabrication And Robustness

    International Nuclear Information System (INIS)

    Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.

    2009-01-01

    The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.

  19. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  20. YamiPred: A novel evolutionary method for predicting pre-miRNAs and selecting relevant features

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Theofilatos, Konstantinos; Likothanassis, Spiros; Mavroudi, Seferina

    2015-01-01

MicroRNAs (miRNAs) are small non-coding RNAs, which play a significant role in gene regulation. Predicting miRNA genes is a challenging bioinformatics problem, and existing experimental and computational methods fail to deal with it effectively. We developed YamiPred, an embedded classification method that combines the efficiency and robustness of Support Vector Machines (SVM) with Genetic Algorithms (GA) for feature selection and parameter optimization. YamiPred was tested on a new and realistic human dataset and was compared with state-of-the-art computational intelligence approaches and the prevalent SVM-based tools for miRNA prediction. Experimental results indicate that YamiPred outperforms existing approaches in terms of accuracy and of the geometric mean of sensitivity and specificity. The embedded feature selection component selects a compact feature subset that contributes to the performance optimization. Further experimentation with this minimal feature subset has achieved very high classification performance and revealed the minimum number of samples required for developing a robust predictor. YamiPred also confirmed the important role of commonly used features such as entropy and enthalpy, and uncovered the significance of newly introduced features, such as %A-U aggregate nucleotide frequency and positional entropy. The best model, trained on human data, has successfully predicted pre-miRNAs in other organisms, including viruses.
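The GA feature selection component can be illustrated with a minimal elitist genetic algorithm over binary feature masks. Here a toy fitness function stands in for the SVM cross-validation score YamiPred would use, and every name and constant is illustrative, not the tool's:

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES = 12
INFORMATIVE = {1, 4, 7}          # ground-truth useful features (toy setting)

def fitness(mask):
    """Toy stand-in for a classifier's cross-validation score: reward
    informative features, penalize superfluous ones."""
    sel = set(int(i) for i in np.flatnonzero(mask))
    return len(sel & INFORMATIVE) - 0.3 * len(sel - INFORMATIVE)

def ga_select(pop_size=40, generations=60, p_mut=0.05):
    """Elitist GA over binary feature masks: keep the best half, refill
    with uniform crossover of random elite pairs plus bit-flip mutation."""
    pop = rng.integers(0, 2, (pop_size, N_FEATURES))
    n_child = pop_size - pop_size // 2
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        i = rng.integers(0, len(elite), n_child)
        j = rng.integers(0, len(elite), n_child)
        cross = np.where(rng.random((n_child, N_FEATURES)) < 0.5,
                         elite[i], elite[j])
        children = np.where(rng.random(cross.shape) < p_mut, 1 - cross, cross)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

best = ga_select()
selected = sorted(int(i) for i in np.flatnonzero(best))
```

Because the elite survives unchanged each generation, the best mask found is never lost, and the search converges on a compact subset dominated by the informative features.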

  2. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

-mode sensors for improving the flexibility and robustness of the system. From the experimental results of three field tests of the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to the feature tracking problem, the iterative least squares solution of the optical flow equation has been the most popular approach in the field. This dissertation attempts to leverage these former efforts to enhance feature tracking methods by introducing a view-geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry-based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view-geometric constraints. The proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and how the other features are moving. As an alternative to this approach, a new closed-form solution to tracking that combines image appearance with view geometry is also introduced. We particularly use invariants in projective coordinates and conjecture that the traditional appearance-based solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry computed from a set of tracked features. At the end of each tracking loop, the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when features are occluded or undergo appearance changes due to projective deformation of the template.
The proposed collaborative tracking method is also tested in the visual navigation

  3. Identifying significant environmental features using feature recognition.

    Science.gov (United States)

    2015-10-01

    The Department of Environmental Analysis at the Kentucky Transportation Cabinet has expressed an interest in feature-recognition capability because it may help analysts identify environmentally sensitive features in the landscape, : including those r...

  4. Robust energy harvesting from walking vibrations by means of nonlinear cantilever beams

    Science.gov (United States)

    Kluger, Jocelyn M.; Sapsis, Themistoklis P.; Slocum, Alexander H.

    2015-04-01

    In the present work we examine how mechanical nonlinearity can be appropriately utilized to achieve strong robustness of performance in an energy harvesting setting. More specifically, for energy harvesting applications, a great challenge is the uncertain character of the excitation. The combination of this uncertainty with the narrow range of good performance for linear oscillators creates the need for more robust designs that adapt to a wider range of excitation signals. A typical application of this kind is energy harvesting from walking vibrations. Depending on the particular characteristics of the person that walks as well as on the pace of walking, the excitation signal obtains completely different forms. In the present work we study a nonlinear spring mechanism that is composed of a cantilever wrapping around a curved surface as it deflects. While for the free cantilever, the force acting on the free tip depends linearly on the tip displacement, the utilization of a contact surface with the appropriate distribution of curvature leads to essentially nonlinear dependence between the tip displacement and the acting force. The studied nonlinear mechanism has favorable mechanical properties such as low frictional losses, minimal moving parts, and a rugged design that can withstand excessive loads. Through numerical simulations we illustrate that by utilizing this essentially nonlinear element in a 2 degrees-of-freedom (DOF) system, we obtain strongly nonlinear energy transfers between the modes of the system. We illustrate that this nonlinear behavior is associated with strong robustness over three radically different excitation signals that correspond to different walking paces. 
To validate the strong robustness properties of the 2DOF nonlinear system, we perform a direct parameter optimization for 1DOF and 2DOF linear systems as well as for a class of 1DOF and 2DOF systems with nonlinear springs similar to that of the cubic spring that are physically realized

  5. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of spatial timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  6. Uncertainty, robustness, and the value of information in managing a population of northern bobwhites

    Science.gov (United States)

    Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael

    2014-01-01

    The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about
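The expected value of perfect information reduces to a small computation once outcomes are tabulated per action and hypothesis: it is the difference between the expected outcome of acting optimally under each hypothesis and the best expected outcome achievable now. The sketch below uses hypothetical numbers, not the paper's elicited values:

```python
import numpy as np

# Rows: candidate actions; columns: hypotheses about what limits bobwhites.
# Entries are expected management outcomes; all numbers here are hypothetical.
outcomes = np.array([
    [6.0, 2.0, 3.0],   # improve water management
    [4.0, 5.0, 2.0],   # change hunting practices
    [3.0, 3.0, 4.0],   # reduce human disturbance
])
p = np.array([0.5, 0.3, 0.2])                          # prior weights on hypotheses

best_under_uncertainty = float((outcomes @ p).max())   # best expected action now
best_with_certainty = float(outcomes.max(axis=0) @ p)  # act optimally per hypothesis
evpi = best_with_certainty - best_under_uncertainty    # value of resolving all uncertainty
```

A small EVPI relative to the outcome scale is exactly the paper's situation: a robust strategy chosen now performs nearly as well as the strategy one would pick with all uncertainty resolved.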

  7. Analysis of robustness of urban bus network

    Science.gov (United States)

    Tao, Ren; Yi-Fan, Wang; Miao-Miao, Liu; Yan-Jie, Xu

    2016-02-01

In this paper, the invulnerability and cascade failures of the urban bus network are discussed. Firstly, three static models (bus stop network, bus transfer network, and bus line network) are used to analyse the structure and invulnerability of the urban bus network in order to understand its features comprehensively. Secondly, a new way to study the invulnerability of the urban bus network is proposed: two layered networks, the bus stop-line network and the bus line-transfer network, are modelled, and the interactions between the different models are analysed. Finally, by modelling a new layered network which reflects dynamic passenger flows, the cascade failures are discussed, and a new load redistribution method is proposed to study the robustness of dynamic traffic. The bus network of Shenyang, one of the biggest cities in China, is taken as a simulation example. In addition, based on the numerical simulation results, some suggestions are given to improve the urban bus network and to provide emergency strategies for traffic congestion. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473073, 61374178, 61104074, and 61203329), the Fundamental Research Funds for the Central Universities (Grant Nos. N130417006, L1517004), and the Program for Liaoning Excellent Talents in University (Grant No. LJQ2014028).
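A standard invulnerability measure in such studies is the size of the largest connected component after stops are removed, which needs no graph library. The toy network below is illustrative, not the Shenyang data:

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component of an undirected graph
    given as an adjacency dict, ignoring removed stops."""
    seen, best = set(), 0
    for s in adj:
        if s in seen or s in removed:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:                         # breadth-first search
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen and v not in removed:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# toy bus-stop network: two triangles of stops bridged by transfer stop 3
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4],
    4: [3, 5, 6], 5: [4, 6], 6: [4, 5],
}
intact = largest_component(adj)             # whole network connected
after_attack = largest_component(adj, {3})  # remove the transfer hub
```

Removing the single transfer stop splits the network in half, the kind of targeted-attack fragility that invulnerability analysis is designed to expose.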

  8. Improving robustness and computational efficiency using modern C++

    International Nuclear Information System (INIS)

    Paterno, M; Kowalkowski, J; Green, C

    2014-01-01

For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  9. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

An extraction method for extreme points of scale space is studied in this paper, in order to obtain a robust and fast binary multiscale and rotation-invariant local feature descriptor. Classic local feature description algorithms often select neighborhood information of feature points that are extrema of the image scale space, obtained by constructing an image pyramid with a certain signal transform. But building the image pyramid always consumes a large amount of computing and storage resources, which hinders practical application development. This paper presents a dual multiscale FAST algorithm that extracts scale-extremum feature points quickly without building an image pyramid. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing local feature descriptors.

  10. Conceptual Combination During Sentence Comprehension

    Science.gov (United States)

    Swinney, David; Love, Tracy; Walenski, Matthew; Smith, Edward E.

    2008-01-01

    This experiment examined the time course of integration of modifier-noun (conceptual) combinations during auditory sentence comprehension using cross-modal lexical priming. The study revealed that during ongoing comprehension, there is initial activation of features of the noun prior to activation of (emergent) features of the entire conceptual combination. These results support compositionality in conceptual combination; that is, they indicate that features of the individual words constituting a conceptual combination are activated prior to combination of the words into a new concept. PMID:17576278

  11. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own programs; an abundance of examples illustrating the most important steps in robust and optimal design; and ...

  12. Adaptive robust Kalman filtering for precise point positioning

    International Nuclear Information System (INIS)

    Guo, Fei; Zhang, Xiaohong

    2014-01-01

The optimality of a precise point positioning (PPP) solution using a Kalman filter is closely connected to the quality of the a priori information about the process noise and the updated measurement noise, which are sometimes difficult to obtain. Also, the estimation environment in dynamic or kinematic applications is not always fixed but is subject to change. To overcome these problems, an adaptive robust Kalman filtering algorithm is applied for PPP processing; its main feature is the introduction of an equivalent covariance matrix to resist unexpected outliers and an adaptive factor to balance the contributions of observational information and of predicted information from the system dynamic model. The basic models of PPP, including the observation model, dynamic model and stochastic model, are provided first. Then an adaptive robust Kalman filter is developed for PPP. Compared with the conventional robust estimator, only the observation with the largest standardized residual is operated on by the IGG III function in each iteration, to avoid reducing the contribution of normal observations or even filter divergence. Finally, tests carried out in both static and kinematic modes have confirmed that the adaptive robust Kalman filter outperforms the classic Kalman filter by tuning either the equivalent covariance matrix or the adaptive factor, or both of them. This becomes evident when analyzing the positioning errors in flight tests at the turns, due to target maneuvering and unknown process/measurement noises. (paper)
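The IGG III weighting referred to above can be sketched directly: residuals below a first threshold keep full weight, moderate ones are down-weighted, and gross ones are rejected. The thresholds k0 and k1 here are typical textbook choices, not necessarily the paper's:

```python
import numpy as np

def igg3_weight(v, k0=1.5, k1=4.5):
    """IGG III weight for standardized residuals v: keep small residuals
    (|v| <= k0), down-weight moderate ones (k0 < |v| <= k1), reject gross
    ones (|v| > k1). Thresholds are common textbook choices."""
    a = np.abs(np.asarray(v, dtype=float))
    w = np.ones_like(a)
    mid = (a > k0) & (a <= k1)
    w[mid] = (k0 / a[mid]) * ((k1 - a[mid]) / (k1 - k0)) ** 2
    w[a > k1] = 0.0
    return w

# In a robust Kalman update, the observation with the largest standardized
# residual would have its variance inflated by 1/weight before the gain is
# computed, shrinking the influence of a suspected outlier.
```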

  13. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and to consider neighborhood context information. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retaining DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at area level and at object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-size LiDAR data.

  14. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, which are the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a large number of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also exhibits strong robustness to surface color and complex textures.

  15. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and an application to the Pima data are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
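The weighting scheme can be illustrated as follows: each instance is weighted by the fitted probability of its observed label raised to the power γ, so poorly fitting (likely mislabeled) points lose influence. This is a simplified gradient-ascent sketch that omits the normalization term of the full minimum γ-divergence estimator; all names and data are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gamma_logistic(X, y, gamma=0.5, lr=0.5, n_iter=2000):
    """Gradient ascent on a gamma-weighted logistic score: each instance is
    weighted by the fitted probability of its observed label to the power
    gamma, so suspected mislabels are automatically down-weighted."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        lik = np.where(y == 1.0, p, 1.0 - p)   # fitted prob. of observed label
        w = lik ** gamma
        beta += lr * (X.T @ (w * (y - p))) / len(y)
    return beta

# synthetic data with a positive true slope and 10% flipped labels
rng = np.random.default_rng(1)
x = rng.uniform(-3.0, 3.0, 300)
y = (rng.random(300) < sigmoid(2.0 * x)).astype(float)
flip = rng.choice(300, 30, replace=False)
y[flip] = 1.0 - y[flip]                        # simulate mislabeling
X = np.column_stack([np.ones_like(x), x])
beta = gamma_logistic(X, y)
```

Despite the flipped labels, the recovered slope stays clearly positive, because confidently contradicted points receive small weights.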

  16. Robust visual tracking via multiscale deep sparse networks

    Science.gov (United States)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features, and it has had significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without an offline pretraining process, and it exploits robust and powerful features effectively through online training on a limited amount of labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type, preserving the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  17. Features of MCNP6

    International Nuclear Information System (INIS)

    Goorley, T.; James, M.; Booth, T.; Brown, F.; Bull, J.; Cox, L.J.; Durkee, J.; Elson, J.; Fensin, M.; Forster, R.A.; Hendricks, J.; Hughes, H.G.; Johns, R.; Kiedrowski, B.; Martz, R.; Mashnik, S.; McKinney, G.; Pelowitz, D.; Prael, R.; Sweezy, J.

    2016-01-01

    Highlights: • MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of these two computer codes. • MCNP6 is the result of six years of effort by the MCNP5 and MCNPX code development teams. • These groups of people, residing in Los Alamos National Laboratory’s X Computational Physics Division, Monte Carlo Codes Group (XCP-3) and Nuclear Engineering and Nonproliferation Division, Radiation Transport Modeling Team (NEN-5) respectively, have combined their code development efforts to produce the next evolution of MCNP. • While maintenance and major bug fixes will continue for MCNP5 1.60 and MCNPX 2.7.0 for upcoming years, new code development capabilities only will be developed and released in MCNP6. • In fact, the initial release of MCNP6 contains numerous new features not previously found in either code. • These new features are summarized in this document. • Packaged with MCNP6 is also the new production release of the ENDF/B-VII.1 nuclear data files usable by MCNP. • The high quality of the overall merged code, usefulness of these new features, along with the desire in the user community to start using the merged code, have led us to make the first MCNP6 production release: MCNP6 version 1. • High confidence in the MCNP6 code is based on its performance with the verification and validation test suites, comparisons to its predecessor codes, our automated nightly software debugger tests, the underlying high quality nuclear and atomic databases, and significant testing by many beta testers. - Abstract: MCNP6 can be described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of these two computer codes. MCNP6 is the result of six years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in Los Alamos National Laboratory’s X Computational Physics Division, Monte Carlo Codes Group (XCP-3) and Nuclear Engineering and

  18. Robustness-tracking control based on sliding mode and H∞ theory for linear servo system

    Institute of Scientific and Technical Information of China (English)

    TIAN Yan-feng; GUO Qing-ding

    2005-01-01

    A robustness-tracking control scheme combining H∞ robust control and sliding mode control is proposed for a direct-drive AC permanent-magnet linear motor servo system, to resolve the conflict between tracking performance and robustness in the linear servo system. The sliding mode tracking controller is designed to ensure the system tracks the command quickly, while the H∞ robustness controller suppresses disturbances within the closed loop (including the load and the end-effect force of the linear motor, etc.) and effectively minimizes the chattering of sliding mode control, which degrades the steady-state performance of the system. Simulation results show that this control scheme enhances the command-tracking ability and the robustness of the linear servo system; in addition, it is strongly robust to parameter variations and resistance disturbances.
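The abstract's scheme pairs an H∞ robustness controller with a sliding mode tracking controller. As an illustration of the sliding-mode half only, here is a minimal, hypothetical sketch: a first-order plant with a bounded matched disturbance, and the classic switching law u = -k·sign(e), which drives the tracking error to zero in finite time whenever the gain k exceeds the disturbance bound. The plant, gains, and disturbance are illustrative assumptions, not the paper's actual servo model.

```python
import numpy as np

def smc_control(e, k=2.0):
    # Switching control law: drives the error e to zero in finite
    # time provided k exceeds the bound on the disturbance.
    return -k * np.sign(e)

dt, x, ref = 0.001, 0.0, 1.0
for i in range(3000):
    d = 0.5 * np.sin(0.01 * i)   # bounded matched disturbance, |d| <= 0.5 < k
    u = smc_control(x - ref)
    x += (u + d) * dt            # first-order plant: x_dot = u + d

final_error = abs(x - ref)
```

Note the trade-off the abstract addresses: the sign() switching causes chattering within a band of roughly k·dt around the reference, which is exactly what the H∞ portion of the proposed scheme is meant to mitigate.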

  19. Robust holographic storage system design.

    Science.gov (United States)

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems useful for applications in spacecraft, space satellites, and space robots, all of which are exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capacity. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can engender severe problems that prevent reading of all contents of the holographic memory, namely a turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes results of an experimental demonstration. © 2011 Optical Society of America

  20. Robust Sensing of Approaching Vehicles Relying on Acoustic Cues

    Directory of Open Access Journals (Sweden)

    Mitsunori Mizumachi

    2014-05-01

    Full Text Available The latest developments in automobile design allow vehicles to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be used simultaneously in active safety systems to overcome the blind spots of individual sensors. This paper proposes a novel sensing technique for detecting and tracking an approaching vehicle using an acoustic cue. First, a robust spatial feature must be extracted from noisy acoustical observations; here, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with this highly non-linear problem. The feasibility of the proposed method has been confirmed with real acoustical observations obtained by microphones mounted outside a cruising vehicle.
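The sequential state estimation step above can be sketched with a bootstrap particle filter. This is a minimal illustration under assumed simplifications, not the paper's implementation: the state is a single bearing angle (degrees), the prediction model is a random walk with noise std q, and the likelihood is Gaussian with std r; the numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n=500, q=2.0, r=5.0):
    """Bootstrap particle filter tracking a slowly varying bearing.

    Each particle is one bearing hypothesis. Predict by adding
    random-walk noise (std q), weight by a Gaussian likelihood of
    the noisy observation (std r), then resample by weight.
    """
    particles = rng.uniform(0.0, 180.0, n)   # initial hypotheses
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, q, n)             # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)  # weight
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]   # resample
        estimates.append(particles.mean())             # posterior mean
    return np.array(estimates)

# Noisy bearings of a vehicle sweeping from 30 to 150 degrees
true_bearing = np.linspace(30.0, 150.0, 100)
z = true_bearing + rng.normal(0.0, 5.0, 100)
est = particle_filter(z)
```

The resampling step is what lets the filter cope with the non-linearity the abstract mentions: no Gaussian posterior is assumed, so multimodal bearing hypotheses (e.g. from reflections) can coexist until the data disambiguates them.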