WorldWideScience

Sample records for wavelet-based segmentation method

  1. On exploiting wavelet bases in statistical region-based segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    Statistical region-based segmentation methods such as the Active Appearance Models establish dense correspondences by modelling variation of shape and pixel intensities in low-resolution 2D images. Unfortunately, for high-resolution 2D and 3D images, this approach is rendered infeasible due to ex...... 9-7 wavelet on cardiac MRIs and human faces show that the segmentation accuracy is minimally degraded at compression ratios of 1:10 and 1:20, respectively....

  2. WAVELET BASED SEGMENTATION USING OPTIMAL STATISTICAL FEATURES ON BREAST IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sindhuja

    2014-05-01

    Elastography is an emerging imaging modality that analyzes the stiffness of tissue for detecting and classifying breast tumors. Computer-aided detection speeds up the diagnostic process of breast cancer, improving the survival rate. A multiresolution approach using the discrete wavelet transform is employed on real-time images, using the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) sub-bands of the Daubechies family. Features are extracted and selected, and the image is finally segmented by the K-means clustering algorithm. The proposed work can be extended to classification of the tumors.
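
    A rough sketch of the pipeline this record describes, under assumptions (the wavelet choice, cluster count and synthetic image are illustrative, and the feature-selection step is omitted): a single-level 2D DWT yields the LL/LH/HL/HH sub-bands, whose per-pixel magnitudes are clustered with K-means.

    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def wavelet_kmeans_segmentation(image, wavelet="db4", n_clusters=2):
        # Single-level 2D DWT: approximation (LL) and detail (LH, HL, HH) sub-bands.
        LL, (LH, HL, HH) = pywt.dwt2(image, wavelet)
        # Per-pixel feature vector: magnitudes of the four (half-resolution) sub-bands.
        features = np.stack([np.abs(b) for b in (LL, LH, HL, HH)], axis=-1)
        flat = features.reshape(-1, features.shape[-1])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(flat)
        return labels.reshape(LL.shape)

    # Toy usage on a synthetic image with a brighter right half.
    rng = np.random.default_rng(0)
    img = rng.normal(size=(128, 128))
    img[:, 64:] += 3.0
    print(wavelet_kmeans_segmentation(img).shape)  # half-resolution label map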

  3. A wavelet-based method for multispectral face recognition

    Science.gov (United States)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.
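
    A minimal, hypothetical illustration of the bit-coding step: the index of the strongest of eight orientation responses is encoded in three bits, and codes are compared with the Hamming distance. Random responses stand in for the Gabor filtering, and the order-statistics packing into the face pattern byte is omitted.

    import numpy as np

    def orientation_bit_code(responses):
        # responses: (H, W, 8) orientation responses at one frequency band.
        idx = np.argmax(responses, axis=-1)  # strongest orientation, 0..7
        return ((idx[..., None] >> np.arange(3)) & 1).astype(np.uint8)  # 3-bit code

    def hamming_distance(code_a, code_b):
        return np.count_nonzero(code_a != code_b) / code_a.size

    rng = np.random.default_rng(1)
    a = orientation_bit_code(rng.normal(size=(16, 16, 8)))
    b = orientation_bit_code(rng.normal(size=(16, 16, 8)))
    print(hamming_distance(a, a), hamming_distance(a, b))  # 0.0 vs roughly 0.5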

  4. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method based on the maximum relevance-maximum significance criterion is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  5. A Wavelet-Based Optimization Method for Biofuel Production

    Directory of Open Access Journals (Sweden)

    Maurizio Carlini

    2018-02-01

    On a global scale many countries are still heavily dependent on crude oil to produce energy and fuel for transport, with a resulting increase in atmospheric pollution. A possible solution to this problem is to find eco-sustainable energy sources. A potential choice could be the use of biodiesel as fuel. The work presented aims to characterise the transesterification reaction of waste peanut frying oil using colour analysis and wavelet analysis. The biodiesel production, with the complete absence of mucilages, was evaluated through a suitable set of energy wavelet coefficients and scalograms. The physical characteristics of the biodiesel are influenced by mucilages. In particular the viscosity, which is a fundamental parameter for the correct use of the biodiesel, might be compromised. The presence of contaminants in the samples can often be missed by visual analysis. The low- and high-frequency wavelet analysis, by investigating the energy change of the wavelet coefficients, provided a valid characterisation of the quality of the samples, related to the absence of mucilages, which is consistent with the experimental results. The method proposed in this work represents a preliminary analysis, prior to the subsequent chemical-physical analysis, that can be developed during the production phases of the biodiesel in order to optimise the process and avoid impurities in suspension in the final product.

  6. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    Directory of Open Access Journals (Sweden)

    Suyi Li

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. Then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects’ PPG signals under sitting, raising-hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peak identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.
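
    A rough sketch of the two wavelet steps, under stated assumptions: baseline drift is removed by zeroing the coarsest approximation of a multilevel decomposition, and peaks are then picked on the corrected signal. The paper's quadratic spline wavelet modulus-maximum detector is replaced by scipy's generic peak finder, and the wavelet, level and sampling rate below are illustrative.

    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def remove_baseline(signal, wavelet="db4", level=8):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])  # drop the approximation = slow drift
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    fs = 100.0                                # assumed sampling rate, Hz
    t = np.arange(0, 30, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 0.05 * t)  # pulse + drift
    corrected = remove_baseline(ppg)
    peaks, _ = find_peaks(corrected, distance=int(0.4 * fs))
    print(len(peaks), "peaks detected")       # about 1.2 peaks per second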

  7. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2017-05-01

    This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of the Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak position is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested via simulated and measured spectra from an energy-dispersive X-ray spectrometer, and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from the background information, and also effectively distinguish overlapping peaks in the EDXRF spectrum.
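
    An illustrative sketch, not the paper's exact estimator: the Mexican hat CWT of a simulated spectrum with two overlapping Gaussian peaks. Because the Mexican hat wavelet annihilates constant and linear trends, the CWT responds to the peaks while ignoring a smooth background; all peak positions and widths below are made up.

    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    channels = np.arange(1024)
    gauss = lambda mu, sigma, amp: amp * np.exp(-0.5 * ((channels - mu) / sigma) ** 2)
    spectrum = gauss(400, 15, 100) + gauss(460, 15, 60)   # overlapping peaks
    spectrum += 20 + 0.01 * channels                      # smooth background

    coefs, _ = pywt.cwt(spectrum, scales=np.arange(4, 40), wavelet="mexh")
    row = coefs[8]                                        # CWT row at a mid scale
    maxima, _ = find_peaks(row, height=0.3 * row.max())
    print(maxima)                                         # near channels 400 and 460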

  8. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.

    Science.gov (United States)

    Daqrouq, K; Dobaie, A

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) obtained from the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator in recognizing such a clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, achieving an accuracy of 92.60%. The Receiver Operating Characteristic curve, a common tool for evaluating diagnostic accuracy, was illustrated, indicating that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB.

  9. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions

    Directory of Open Access Journals (Sweden)

    K. Daqrouq

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) obtained from the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method proved to be a potentially effective discriminator in recognizing such a clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, achieving an accuracy of 92.60%. The Receiver Operating Characteristic curve, a common tool for evaluating diagnostic accuracy, was illustrated, indicating that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB.
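
    A hedged sketch of the PE feature used in records 8 and 9: the percentage energy of each terminal node of a level-4 wavelet packet decomposition. Averaging such vectors over signal frames would give the AFE/WAFE variant; the confirmation functions (PRD, LDSR, CC) are not reproduced, and the wavelet and test signal are arbitrary.

    import numpy as np
    import pywt

    def percentage_energy(signal, wavelet="db4", level=4):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="natural")])
        return 100.0 * energies / energies.sum()

    rng = np.random.default_rng(2)
    ecg_like = np.sin(2 * np.pi * np.arange(2048) / 360.0) + 0.1 * rng.normal(size=2048)
    pe = percentage_energy(ecg_like)
    print(pe.shape, round(float(pe.sum()), 1))  # 16 terminal subsignals, sums to 100.0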

  10. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    Science.gov (United States)

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods.
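
    A much-simplified sketch of the separation idea, with several assumptions: a non-negative wavelet "intensity image" (CWT magnitudes over time and scale) is factorized with NMF, and the component with the spikiest activations is taken as the ECG candidate. The authors' robust initialization and the subsequent signal reconstruction are not reproduced.

    import numpy as np
    import pywt
    from sklearn.decomposition import NMF

    fs = 1000.0
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(3)
    emg = 0.3 * rng.normal(size=t.size)       # broadband EMG stand-in
    ecg = np.zeros_like(t)
    ecg[::1000] = 2.0                         # crude 1 Hz "QRS" spikes
    coefs, _ = pywt.cwt(emg + ecg, scales=np.arange(1, 64), wavelet="mexh")

    V = np.abs(coefs)                         # non-negative time-scale image
    model = NMF(n_components=2, init="nndsvd", max_iter=400)
    W = model.fit_transform(V)                # scale profiles
    H = model.components_                     # time activations
    # The component with heavy-tailed (spiky) activations is the ECG candidate.
    kurt = ((H - H.mean(1, keepdims=True)) ** 4).mean(1) / H.var(1) ** 2
    print("ECG-like component:", int(np.argmax(kurt)))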

  11. On Decomposing Object Appearance using PCA and Wavelet bases with Applications to Image Segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    Generative models capable of synthesising complete object images have over the past few years proven their worth when interpreting images. Due to the recent development of computational machinery it has become feasible to model the variation of image intensities and landmark positions over...... the complete object surface using principal component analysis. This typically involves matrices with a few thousand and up to 100,000+ rows. This paper demonstrates applications of such models to colour images of human faces and cardiac magnetic resonance images. Further, we devise methods...

  12. A wavelet-based spatially adaptive method for mammographic contrast enhancement.

    Science.gov (United States)

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2003-03-21

    A method aimed at minimizing image noise while optimizing contrast of image features is presented. The method is generic and it is based on local modification of multiscale gradient magnitude values provided by the redundant dyadic wavelet transform. Denoising is accomplished by a spatially adaptive thresholding strategy, taking into account local signal and noise standard deviation. Noise standard deviation is estimated from the background of the mammogram. Contrast enhancement is accomplished by applying a local linear mapping operator on denoised wavelet magnitude values. The operator normalizes local gradient magnitude maxima to the global maximum of the first scale magnitude subimage. Coefficient mapping is controlled by a local gain limit parameter. The processed image is derived by reconstruction from the modified wavelet coefficients. The method is demonstrated with a simulated image with added Gaussian noise, and an initial quantitative performance evaluation using 22 images from the DDSM database was performed. Enhancement was applied globally to each mammogram, using the same local gain limit value. Quantitative contrast and noise metrics were used to evaluate the quality of processed image regions containing verified lesions. Results suggest that the method offers significantly improved performance over conventional and previously reported global wavelet contrast enhancement methods. The average contrast improvement, noise amplification and contrast-to-noise ratio improvement indices were measured as 9.04, 4.86 and 3.04, respectively. In addition, in a pilot preference study, the proposed method demonstrated the highest ranking among the methods compared. The method was implemented in C++ and integrated into a medical image visualization tool.

  13. A wavelet-based spatially adaptive method for mammographic contrast enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G [Department of Medical Physics, School of Medicine, University of Patras, Patras 26500 (Greece)

    2003-03-21

    A method aimed at minimizing image noise while optimizing contrast of image features is presented. The method is generic and it is based on local modification of multiscale gradient magnitude values provided by the redundant dyadic wavelet transform. Denoising is accomplished by a spatially adaptive thresholding strategy, taking into account local signal and noise standard deviation. Noise standard deviation is estimated from the background of the mammogram. Contrast enhancement is accomplished by applying a local linear mapping operator on denoised wavelet magnitude values. The operator normalizes local gradient magnitude maxima to the global maximum of the first scale magnitude subimage. Coefficient mapping is controlled by a local gain limit parameter. The processed image is derived by reconstruction from the modified wavelet coefficients. The method is demonstrated with a simulated image with added Gaussian noise, and an initial quantitative performance evaluation using 22 images from the DDSM database was performed. Enhancement was applied globally to each mammogram, using the same local gain limit value. Quantitative contrast and noise metrics were used to evaluate the quality of processed image regions containing verified lesions. Results suggest that the method offers significantly improved performance over conventional and previously reported global wavelet contrast enhancement methods. The average contrast improvement, noise amplification and contrast-to-noise ratio improvement indices were measured as 9.04, 4.86 and 3.04, respectively. In addition, in a pilot preference study, the proposed method demonstrated the highest ranking among the methods compared. The method was implemented in C++ and integrated into a medical image visualization tool.
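
    A compact sketch of the two steps shared by records 12 and 13, with simplifications: the stationary wavelet transform stands in for the paper's redundant dyadic transform, a global robust noise estimate replaces the spatially adaptive one, and a single uniform gain replaces the local linear mapping with its gain limit.

    import numpy as np
    import pywt

    def enhance(image, wavelet="db2", level=2, gain=3.0):
        coeffs = pywt.swt2(image, wavelet, level=level)  # redundant transform
        out = []
        for cA, (cH, cV, cD) in coeffs:
            bands = []
            for d in (cH, cV, cD):
                sigma = np.median(np.abs(d)) / 0.6745            # robust noise estimate
                d = pywt.threshold(d, 3.0 * sigma, mode="soft")  # denoise details
                bands.append(gain * d)                           # amplify what remains
            out.append((cA, tuple(bands)))
        return pywt.iswt2(out, wavelet)

    rng = np.random.default_rng(4)
    img = rng.normal(0.0, 5.0, size=(256, 256)) + 100.0
    img[96:160, 96:160] += 8.0                 # low-contrast "lesion"
    print(enhance(img).shape)                  # (256, 256)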

  14. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    Directory of Open Access Journals (Sweden)

    Marcos Martin-Fernandez

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method based on these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak’s, Donoho-Johnstone’s, Awate-Whitaker’s, and nonlocal means filters, in different 2D and 3D images.
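
    A didactic sketch of the EM idea, under assumptions: each wavelet coefficient is drawn either from a zero-mean Gaussian (noise) or a Laplacian (detail); EM estimates the mixture, and coefficients are shrunk by the posterior probability of being detail. The paper's full Rayleigh/Laplacian treatment of 2D and 3D MR data is not reproduced.

    import numpy as np

    def em_shrink(x, iters=50):
        # Initialize mixing weight, Gaussian sigma and Laplacian scale crudely.
        pi, sigma, b = 0.5, x.std() / 2.0, np.mean(np.abs(x))
        for _ in range(iters):
            g = (1 - pi) * np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            l = pi * np.exp(-np.abs(x) / b) / (2.0 * b)
            r = l / (l + g + 1e-300)                     # E-step: P(detail | x)
            pi = r.mean()                                # M-step updates
            b = np.sum(r * np.abs(x)) / (r.sum() + 1e-300)
            sigma = np.sqrt(np.sum((1 - r) * x ** 2) / ((1 - r).sum() + 1e-300))
        return r * x                                     # posterior-weighted shrinkage

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(0, 1.0, 4500), rng.laplace(0, 4.0, 500)])
    shrunk = em_shrink(x)
    print(round(float(np.mean(np.abs(shrunk[:4500]))), 3),   # noise mostly suppressed
          round(float(np.mean(np.abs(shrunk[4500:]))), 3))   # details mostly kept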

  15. Gaussian wavelet based dynamic filtering (GWDF) method for medical ultrasound systems.

    Science.gov (United States)

    Wang, Peidong; Shen, Yi; Wang, Qiang

    2007-05-01

    In this paper, a novel dynamic filtering method using Gaussian wavelet filters is proposed to remove noise from ultrasound echo signals. In the proposed method, a mother wavelet is first selected with its central frequency (CF) and frequency bandwidth (FB) equal to those of the transmitted signal. The actual frequency of the received signal at a given depth is estimated through the autocorrelation technique. Then the mother wavelet is dilated using the ratio between the transmitted central frequency and the actual frequency as the scale factor. The generated daughter wavelet is finally used as the dynamic filter at this depth. The frequency-demodulated Gaussian wavelet is chosen in this paper because its power spectrum is well matched with that of the transmitted ultrasound signal. The proposed method is evaluated by simulations using the Field II program. Experiments are also conducted on a standard ultrasound phantom using a 192-element transducer with a center frequency of 5 MHz. The phantom contains five point targets, five circular high scattering regions with diameters of 2, 3, 4, 5, 6 mm respectively, and five cysts with diameters of 6, 5, 4, 3, 2 mm respectively. Both simulation and experimental results show that optimal signal-to-noise ratio (SNR) can be obtained and useful information can be extracted along the depth direction irrespective of the diagnostic objects.

  16. A wavelet based numerical simulation technique for the two-phase flow using the phase field method

    CERN Document Server

    Alam, Jahrul M

    2016-01-01

    In multiphase flow phenomena, bubbles and droplets are advected, deformed, break up into smaller ones, and coalesce with each other. A primary challenge of classical computational fluid dynamics (CFD) methods for such flows is to effectively describe a transition zone between phases across which physical properties vary steeply but continuously. Based on the van der Waals theory, the Allen-Cahn phase field method describes the face-to-face existence of two fluids with a free-energy functional of mass density or molar concentration, without imposing topological constraints on the interface as a phase boundary. In this article, a CFD simulation methodology is described by solving the Allen-Cahn-Navier-Stokes equations using a wavelet collocation method. The second-order temporal accuracy is verified by simulating a moving sharp interface. The average terminal velocity of a rising gas bubble in a liquid computed by the present method agrees with that measured in a laboratory experiment. The calculation of the ...

  17. Element analysis: a wavelet-based method for analysing time-localized events in noisy time series

    Science.gov (United States)

    Lilly, Jonathan M.

    2017-04-01

    A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.
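
    A loose sketch of the transform-maxima step only: events are located as local maxima of the wavelet transform magnitude on the time/scale plane. A Morlet wavelet stands in for the generalized Morse wavelets, which PyWavelets does not provide, and the significance test and region-of-influence criterion are omitted.

    import numpy as np
    import pywt
    from scipy.ndimage import maximum_filter

    rng = np.random.default_rng(6)
    t = np.arange(4096)
    signal = 0.3 * rng.normal(size=t.size)               # background noise
    for centre in (1000, 3000):                          # two oscillatory "events"
        signal += 2.0 * np.exp(-0.5 * ((t - centre) / 40.0) ** 2) * np.cos(0.2 * (t - centre))

    coefs, _ = pywt.cwt(signal, scales=np.arange(8, 64), wavelet="morl")
    mag = np.abs(coefs)
    peaks = (mag == maximum_filter(mag, size=25)) & (mag > 0.5 * mag.max())
    scale_idx, time_idx = np.nonzero(peaks)
    print(time_idx)                                      # clustered near 1000 and 3000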

  18. Phase-coherence classification: A new wavelet-based method to separate local field potentials into local (in)coherent and volume-conducted components.

    Science.gov (United States)

    von Papen, M; Dafsari, H; Florin, E; Gerick, F; Timmermann, L; Saur, J

    2017-11-01

    Local field potentials (LFP) reflect the integrated electrophysiological activity of large neuron populations and may thus reflect the dynamics of spatially and functionally different networks. We introduce the wavelet-based phase-coherence classification (PCC), which separates LFP into volume-conducted, local incoherent and local coherent components. It allows power spectral densities to be computed for each component associated with local or remote electrophysiological activity. We use synthetic time series to estimate optimal parameters for the application to LFP from within the subthalamic nucleus of eight Parkinson patients. With PCC we identify multiple local tremor clusters and quantify the relative power of local and volume-conducted components. We analyze the electrophysiological response to an apomorphine injection during rest and hold. Here we show a significant medication-induced decrease of incoherent activity in the low beta band and an increase of coherent activity in the high beta band. On medication, significant movement-induced changes occur in the high beta band of the local coherent signal: it increases during isometric hold tasks and decreases during phasic wrist movement. The power spectra of local PCC components are compared to bipolar recordings. In contrast to bipolar recordings, PCC can distinguish local incoherent and coherent signals. We further compare our results with classification based on the imaginary part of coherency and the weighted phase lag index. The low and high beta bands are more susceptible to medication- and movement-related changes reflected by incoherent and local coherent activity, respectively. PCC components may thus reflect functionally different networks.

  19. Introduction to wavelet-based compression of medical images.

    Science.gov (United States)

    Schomer, D F; Elekes, A A; Hazle, J D; Huffman, J C; Thompson, S K; Chui, C K; Murphy, W A

    1998-01-01

    Medical image compression can significantly enhance the performance of picture archiving and communication systems and may be considered an enabling technology for telemedicine. The wavelet transform is a powerful mathematical tool with many unique qualities that are useful for image compression and processing applications. Although wavelet concepts can be traced back to 1910, the mathematics of wavelets have only recently been formalized. By exploiting spatial and spectral information redundancy in images, wavelet-based methods offer significantly better results for compressing medical images than do compression algorithms based on Fourier methods, such as the discrete cosine transform used by the Joint Photographic Experts Group. Furthermore, wavelet-based compression does not suffer from blocking artifacts, and the restored image quality is generally superior at higher compression rates.
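
    A minimal demonstration of the principle described above, with arbitrary choices of wavelet and retention rate: most wavelet coefficients of a smooth image are near zero, so keeping only the largest few percent reconstructs the image with little error. Entropy coding, the other half of a real codec, is omitted; 'bior4.4' is PyWavelets' 9/7-type biorthogonal wavelet.

    import numpy as np
    import pywt

    def compress(image, wavelet="bior4.4", level=4, keep=0.05):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep the largest 5%
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    rng = np.random.default_rng(7)
    x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    img = 100.0 * np.sin(6 * x) * np.cos(4 * y) + rng.normal(0, 1.0, (256, 256))
    rec = compress(img)
    print("RMSE:", round(float(np.sqrt(np.mean((rec - img) ** 2))), 2))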

  20. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  1. Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.

    Science.gov (United States)

    Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre

    2017-09-13

    We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty.

  2. Enhanced ATM Security using Biometric Authentication and Wavelet Based AES

    Directory of Open Access Journals (Sweden)

    Sreedharan Ajish

    2016-01-01

    Traditional ATM terminal customer recognition systems rely only on bank cards and passwords; such identity verification methods are not perfect and their functions are too limited. Biometrics-based authentication offers several advantages over other authentication methods, and there has been a significant surge in the use of biometrics for user authentication in recent years. This paper presents a highly secured ATM banking system using biometric authentication and a wavelet-based Advanced Encryption Standard (AES) algorithm. Two levels of security are provided in this proposed design. Firstly, we consider the security level at the client side by providing a biometric authentication scheme along with a 4-digit password. Biometric authentication is achieved by considering the fingerprint image of the client. Secondly, we ensure a secured communication link between the client machine and the bank server using an optimized, energy-efficient and wavelet-based AES processor. The fingerprint image is the data for the encryption process and the 4-digit password is the symmetric key for the encryption process. The performance of the ATM machine depends on ultra-high-speed encryption, very low power consumption, and algorithmic integrity. To obtain low-power, ultra-high-speed encryption at the ATM machine, an optimized and wavelet-based AES algorithm is proposed. In this system biometric and cryptography techniques are used together for personal identity authentication to improve the security level. The designs of the wavelet-based AES processor and the energy-efficient AES processor are simulated in Quartus-II software. Simulation results ensure proper functionality. A comparison with other research works shows its superiority.

  3. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method which is based on superpixels. To achieve fast segmentation, the method establishes a Graphcut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has...... excellent performance in terms of segmentation accuracy and computation efficiency compared with other pixel-based segmentation algorithms....

  4. Wavelet-based associative memory

    Science.gov (United States)

    Jones, Katharine J.

    2004-04-01

    Faces provide important characteristics for a person's identification. In security checks, face recognition remains the method in continuous use despite other approaches (e.g. fingerprints, voice recognition, pupil contraction, DNA scanners). With an associative memory, the output data is recalled directly using the input data. This can be achieved with a Nonlinear Holographic Associative Memory (NHAM). This approach can also distinguish between strongly correlated images and images that are partially or totally enclosed by others. Adaptive wavelet lifting has been used for Content-Based Image Retrieval. In this paper, adaptive wavelet lifting will be applied to face recognition to achieve an associative memory.

  5. A wavelet-based Projector Augmented-Wave (PAW) method: reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    CERN Document Server

    Rangel, Tonatiuh; Genovese, Luigi; Torrent, Marc

    2016-01-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation in prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged and special boundary condition systems with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  6. Wavelet Based Analytical Expressions to Steady State Biofilm Model Arising in Biochemical Engineering.

    Science.gov (United States)

    Padma, S; Hariharan, G

    2016-06-01

    In this paper, we have developed an efficient wavelet-based approximation method for the steady-state biofilm model arising in enzyme kinetics. A Chebyshev wavelet based approximation method is successfully introduced in solving the nonlinear steady-state biofilm reaction model. To the best of our knowledge, no rigorous wavelet-based solution has previously been reported for the proposed model. Analytical solutions for the substrate concentration have been derived for all values of the parameters δ and SL. The power of the method is confirmed. Some numerical examples are presented to demonstrate the validity and applicability of the wavelet method. Moreover, the use of Chebyshev wavelets is found to be simple, efficient, flexible and convenient, with small computation costs, and computationally attractive.

  7. The study of evolution and depression of the alpha-rhythm in the human brain EEG by means of wavelet-based methods

    Science.gov (United States)

    Runnova, A. E.; Zhuravlev, M. O.; Khramova, M. V.; Pysarchik, A. N.

    2017-04-01

    We study the appearance, development and depression of the alpha-rhythm in human EEG data during a psychophysiological experiment in which cognitive activity is stimulated by the perception of an ambiguous object. The new method, based on the continuous wavelet transform, allows the energy contribution of various components, including the alpha rhythm, to be estimated within the general dynamics of the electrical activity of the projections of various areas of the brain. The decision-making process when observing ambiguous images is characterized by specific oscillatory alpha-rhythm patterns in the multi-channel EEG data. We have shown the repeatability of the detected principles of alpha-rhythm evolution in data from a group of 12 healthy male volunteers.

  8. Using Wavelet Bases to Separate Scales in Quantum Field Theory

    Science.gov (United States)

    Michlin, Tracie L.

    This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles. These theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block diagonalize truncated field theoretic Hamiltonians by scale. This eliminates the fine-scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine-resolution limits.

  9. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Liao, T. W. [Louisiana State University]; Ting, C.F. [Louisiana State University]; Qu, Jun [ORNL]; Blau, Peter Julian [ORNL]

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.

  10. Multiphase Image Segmentation Using the Deformable Simplicial Complex Method

    DEFF Research Database (Denmark)

    Dahl, Vedrana Andersen; Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2014-01-01

    in image segmentation based on deformable models. We show the benefits of using the deformable simplicial complex method for image segmentation by segmenting an image into a known number of segments characterized by distinct mean pixel intensities....

  11. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    We present a generic algorithm to address various temporal segmentation topics of audiovisual contents such as speaker diarization, shot, or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the values of only a few parameters to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated, state-of-the-art methods.

  12. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using proposed criteria that provide the best nonlinear approximations and take quantization errors into consideration. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
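
    A hedged sketch of one ingredient above: the semisoft (firm) rule interpolates between hard and soft thresholding using two thresholds t1 < t2. The fusion step itself, choosing the rule and thresholds per sub-band from the approximation and quantization criteria, is not reproduced here.

    import numpy as np

    def semisoft(w, t1, t2):
        out = np.where(np.abs(w) <= t1, 0.0, w)     # kill small coefficients
        mid = (np.abs(w) > t1) & (np.abs(w) <= t2)  # shrink mid-range ones linearly
        out = np.where(mid, np.sign(w) * t2 * (np.abs(w) - t1) / (t2 - t1), out)
        return out                                  # |w| > t2 is kept unchanged

    w = np.linspace(-4, 4, 9)
    print(semisoft(w, 1.0, 3.0))  # [-4.  -3.  -1.5  0.   0.   0.   1.5  3.   4. ]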

  13. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware regions-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  14. A Class of Wavelet-Based Rayleigh-Euler Beam Element for Analyzing Rotating Shafts

    Directory of Open Access Journals (Sweden)

    Jiawei Xiang

    2011-01-01

    A class of wavelet-based Rayleigh-Euler rotating beam elements using B-spline wavelets on the interval (BSWI) is developed to analyze rotor-bearing systems. The effects of translational and rotary inertia, torsion moment, axial displacement, cross-coupled stiffness and damping coefficients of bearings, hysteretic and viscous internal damping, gyroscopic moments and bending deformation of the system are included in the computational model. In order to obtain a generalized formulation of the wavelet-based element, each boundary node is allocated six degrees of freedom (DOFs), three translations and three rotations, whereas each inner node has only three translations. Typical numerical examples are presented to show the accuracy and efficiency of the presented method.

  15. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  16. Application of wavelet-based multiple linear regression model to rainfall forecasting in Australia

    Science.gov (United States)

    He, X.; Guan, H.; Zhang, X.; Simmons, C.

    2013-12-01

    In this study, a wavelet-based multiple linear regression model is applied to forecast monthly rainfall in Australia by using monthly historical rainfall data and climate indices as inputs. The wavelet-based model is constructed by incorporating the multi-resolution analysis (MRA) with the discrete wavelet transform and multiple linear regression (MLR) model. The standardized monthly rainfall anomaly and large-scale climate index time series are decomposed using MRA into a certain number of component subseries at different temporal scales. The hierarchical lag relationship between the rainfall anomaly and each potential predictor is identified by cross correlation analysis with a lag time of at least one month at different temporal scales. The components of predictor variables with known lag times are then screened with a stepwise linear regression algorithm to be selectively included into the final forecast model. The MRA-based rainfall forecasting method is examined with 255 stations over Australia, and compared to the traditional multiple linear regression model based on the original time series. The models are trained with data from the 1959-1995 period and then tested in the 1996-2008 period for each station. The performance is compared with observed rainfall values, and evaluated by common statistics of relative absolute error and correlation coefficient. The results show that the wavelet-based regression model provides considerably more accurate monthly rainfall forecasts for all of the selected stations over Australia than the traditional regression model.
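
    An illustrative sketch of the MRA-plus-MLR pipeline, under assumptions: a synthetic monthly series is decomposed into additive wavelet components, one-month-lagged copies of each component serve as predictors, and ordinary least squares stands in for the stepwise screening and climate-index inputs used in the paper.

    import numpy as np
    import pywt
    from sklearn.linear_model import LinearRegression

    def mra_components(x, wavelet="db4", level=3):
        # Reconstruct each detail/approximation band separately; components sum to x.
        coeffs = pywt.wavedec(x, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            c = [np.zeros_like(a) for a in coeffs]
            c[i] = coeffs[i]
            comps.append(pywt.waverec(c, wavelet)[: len(x)])
        return np.array(comps)

    rng = np.random.default_rng(8)
    months = np.arange(600)                               # 50 years of months
    rain = 10.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3.0, months.size)

    comps = mra_components(rain)
    X = comps[:, :-1].T                                   # components lagged one month
    y = rain[1:]
    model = LinearRegression().fit(X[:480], y[:480])      # train/test split by time
    print("test R^2:", round(model.score(X[480:], y[480:]), 2))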

  17. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at a seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  18. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs is their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expression: anger, disgust, fear, happiness, sadness, neutral and surprise. For the feature extraction task, three discrete wavelet transforms were used to decompose images, namely the Haar wavelet, the Daubechies (4) wavelet and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.

  19. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    Science.gov (United States)

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms.

  20. Serial identification of EEG patterns using adaptive wavelet-based analysis

    Science.gov (United States)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for the sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables the most informative features of the recognized structures to be extracted. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  1. Accuracy of a wavelet-based fall detection approach using an accelerometer and a barometric pressure sensor.

    Science.gov (United States)

    Ejupi, Andreas; Galang, Chantel; Aziz, Omar; Park, Edward J; Robinovitch, Stephen

    2017-07-01

    Falls are a major source of morbidity in older adults, and 50% of older adults who fall cannot rise independently after falling. Wearable sensor-based fall detection devices may assist in preventing long lies after falls. The goal of this study was to determine the accuracy of a novel wavelet-based approach to automatically detect falls based on accelerometer and barometric pressure sensor data. Participants (n=15) mimicked a range of falls, near falls, and activities of daily living (ADLs) while wearing accelerometer and barometric pressure sensors on the lower back, chest, wrists and thighs. The wavelet transform using pattern adapted wavelets was applied to detect falls from the sensor data. In total, 525 trials (194 falls, 105 near-falls and 226 ADLs) were included in our analysis. When we applied the wavelet-based method on only accelerometer data, classification accuracies ranged from 82% to 96%, with the chest sensor providing the highest accuracy. Accuracy improved by 3.4% on average (p=0.041; SD=3.0%) when we also included the barometric pressure sensor data. The highest classification accuracies (of 98%) were achieved when we combined wavelet-based features and traditional statistical features in a multiphase fall detection model using machine learning. We show that the wavelet-based approach accurately distinguishes falls from near-falls and ADLs, and that it can be applied on wearable sensor data generated from various body locations. Additionally, we show that the accuracy of a wavelet-based fall detection system can be further improved by combining accelerometer and barometric pressure sensor data, and by incorporating wavelet and statistical features in a machine learning classification algorithm.
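
    A schematic sketch of the final combined classifier, with synthetic stand-ins for everything: per-window wavelet energies and simple statistics from an accelerometer channel, plus a pressure-drop proxy from a barometric channel, feed a standard machine-learning classifier. The pattern-adapted wavelets and real sensor placements of the study are not reproduced.

    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, baro, wavelet="db4", level=4):
        feats = [np.sum(c ** 2) for c in pywt.wavedec(acc, wavelet, level=level)]
        feats += [acc.std(), np.ptp(acc), baro[-1] - baro[0]]  # altitude-drop proxy
        return np.array(feats)

    rng = np.random.default_rng(9)
    X, y = [], []
    for label in (0, 1):                                  # 0 = ADL, 1 = fall
        for _ in range(100):
            acc = rng.normal(0, 1.0 + 3.0 * label, 256)   # falls: larger impacts
            baro = np.linspace(0, -0.1 - 0.4 * label, 256) + rng.normal(0, 0.02, 256)
            X.append(window_features(acc, baro))
            y.append(label)
    X, y = np.array(X), np.array(y)

    clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))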

  2. Wavelet-based detection of transcriptional activity on a novel Staphylococcus aureus tiling microarray

    Directory of Open Access Journals (Sweden)

    Segura Víctor

    2012-09-01

    Background: High-density oligonucleotide microarrays are an appropriate technology for genomic analysis, and are particularly useful in the generation of transcriptional maps, ChIP-on-chip studies and re-sequencing of the genome. Transcriptome analysis of tiling microarray data facilitates the discovery of novel transcripts and the assessment of differential expression in diverse experimental conditions. Although new technologies such as next-generation sequencing have appeared, microarrays might still be useful for the study of small genomes or for the analysis of genomic regions with custom microarrays due to their lower price and good accuracy in expression quantification. Results: Here, we propose a novel wavelet-based method, named ZCL (zero-crossing lines), for the combined denoising and segmentation of tiling signals. The denoising is performed with the classical SUREshrink method and the detection of transcriptionally active regions is based on the computation of the Continuous Wavelet Transform (CWT). In particular, the detection of the transitions is implemented as the thresholding of the zero-crossing lines. The algorithm described has been applied to the public Saccharomyces cerevisiae dataset and it has been compared with two well-known algorithms: pseudo-median sliding window (PMSW) and the structural change model (SCM). As a proof-of-principle, we applied the ZCL algorithm to the analysis of the custom tiling microarray hybridization results of a S. aureus mutant deficient in the sigma B transcription factor. The challenge was to identify those transcripts whose expression decreases in the absence of sigma B. Conclusions: The proposed method achieves the best performance in terms of positive predictive value (PPV), while its sensitivity is similar to the other algorithms used for the comparison. The computation time needed to process the transcriptional signals is low compared with model-based methods and in the same range as those

  3. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Vol. 8, No. 5 (2015), pp. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords: benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  4. Analysing the Methods of Dzongkha Word Segmentation

    OpenAIRE

    Dhungyel Parshu Ram; Grundspeņķis Jānis

    2017-01-01

    In both the Chinese and Dzongkha languages, the greatest challenge is to identify the word boundaries because there are no word delimiters as there are in English and other Western languages. Therefore, preprocessing and word segmentation is the first step in Dzongkha language processing, such as translation, spell-checking, and information retrieval. Research on Chinese word segmentation was conducted a long time ago and is therefore relatively mature, but Dzongkha word segmentation has been less...

  5. Analysing the Methods of Dzongkha Word Segmentation

    Directory of Open Access Journals (Sweden)

    Dhungyel Parshu Ram

    2017-05-01

    In both the Chinese and Dzongkha languages, the greatest challenge is to identify the word boundaries because there are no word delimiters as there are in English and other Western languages. Therefore, preprocessing and word segmentation is the first step in Dzongkha language processing, such as translation, spell-checking, and information retrieval. Research on Chinese word segmentation was conducted a long time ago and is therefore relatively mature, but Dzongkha word segmentation has been less studied by researchers. In the paper, we have investigated this major problem in Dzongkha language processing using a probabilistic approach for selecting valid segments, with probabilities computed on the basis of the corpus.

  6. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  7. Fusion Segmentation Method Based on Fuzzy Theory for Color Images

    Science.gov (United States)

    Zhao, J.; Huang, G.; Zhang, J.

    2017-09-01

    The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method achieves better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for the information extraction of optical remote sensing images and polarization SAR images.
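
    To make the soft-decision idea concrete, here is a minimal sketch in which a sigmoid membership function replaces the hard threshold on each color channel and the per-channel memberships are fused by averaging before defuzzification. The sigmoid form, the softness parameter, and the mean fusion rule are illustrative assumptions, not the paper's fuzzy reasoning scheme:

        import numpy as np

        def membership(channel, threshold, softness=10.0):
            # Degree (0..1) to which each pixel belongs to the object class.
            return 1.0 / (1.0 + np.exp(-(channel.astype(float) - threshold) / softness))

        def fuzzy_fuse_segment(rgb, thresholds):
            # One membership map per color channel, fused by averaging,
            # then defuzzified at 0.5 to obtain a binary segmentation.
            mu = np.stack([membership(rgb[..., k], t)
                           for k, t in enumerate(thresholds)], axis=-1)
            return mu.mean(axis=-1) > 0.5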

  8. Perturbed segmented domain collocation Tau-method for the ...

    African Journals Online (AJOL)

    This paper concerns the numerical solution of second order boundary value problems using a Perturbed segmented domain collocation-Tau method. The entire interval for which the problem is defined is partitioned into two segments and the solution technique is demonstrated on each of the segments. The Chebyshev ...

  9. Wavelet-based sparse functional linear model with applications to EEGs seizure detection and epilepsy diagnosis.

    Science.gov (United States)

    Xie, Shengkun; Krishnan, Sridhar

    2013-02-01

    In epilepsy diagnosis or epileptic seizure detection, much effort has been focused on finding an effective combination of feature extraction and classification methods. In this paper, we develop a wavelet-based sparse functional linear model for the representation of EEG signals. The aim of this modeling approach is to capture discriminative random components of EEG signals using wavelet variances. To achieve this goal, a forward search algorithm is proposed for the determination of an appropriate wavelet decomposition level. Two EEG databases from the University of Bonn and the University of Freiburg are used to illustrate the applicability of the proposed method to both epilepsy diagnosis and epileptic seizure detection problems. For the data considered, we show that the wavelet-based sparse functional linear model with a simple classifier such as the 1-NN classification method leads to higher classification accuracy than that obtained using other, more complicated methods such as the support vector machine. This approach produces 100% classification accuracy for various classification tasks using the EEG database from the University of Bonn, and outperforms many other state-of-the-art techniques. The proposed classification scheme leads to 99% overall classification accuracy for the EEG data from the University of Freiburg.
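
    A minimal sketch of the feature idea: wavelet variances per decomposition level form the EEG feature vector, classified with 1-NN via scikit-learn. The wavelet, level count, and the training arrays X_train/y_train are hypothetical:

        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        def wavelet_variances(signal, wavelet="db4", level=5):
            # One variance per subband: [approx, detail_L, ..., detail_1].
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.var(c) for c in coeffs])

        # X_train: list of EEG segments, y_train: class labels (hypothetical data).
        # feats = np.vstack([wavelet_variances(s) for s in X_train])
        # clf = KNeighborsClassifier(n_neighbors=1).fit(feats, y_train)
        # pred = clf.predict(wavelet_variances(new_segment).reshape(1, -1))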

  10. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical imaging. It has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five categories are discussed with their principal ideas, advantages and disadvantages in segmenting different medical images.

  11. Segmentation of MRI Volume Data Based on Clustering Method

    Directory of Open Access Journals (Sweden)

    Ji Dongsheng

    2016-01-01

    Full Text Available Here we analyze the difficulties of segmenting left ventricle MR images without tag lines, and propose an algorithm for automatic segmentation of the left ventricle (LV) internal and external profiles. Herein, we propose an Incomplete K-means and Category Optimization (IKCO) method. Initially, using the Hough transform to automatically locate the initial contour of the LV, the algorithm uses a simple approach to complete data subsampling and initial center determination. Next, according to the clustering rules, the proposed algorithm finishes MR image segmentation. Finally, the algorithm uses a category optimization method to improve the segmentation results. Experiments show that the algorithm provides good segmentation results.

  12. Evaluation of segmentation methods on head and neck CT: Auto-segmentation challenge 2015.

    Science.gov (United States)

    Raudaschl, Patrik F; Zaffino, Paolo; Sharp, Gregory C; Spadea, Maria Francesca; Chen, Antong; Dawant, Benoit M; Albrecht, Thomas; Gass, Tobias; Langguth, Christoph; Lüthi, Marcel; Jung, Florian; Knapp, Oliver; Wesarg, Stefan; Mannion-Haworth, Richard; Bowes, Mike; Ashman, Annaliese; Guillard, Gwenael; Brett, Alan; Vincent, Graham; Orbes-Arteaga, Mauricio; Cárdenas-Peña, David; Castellanos-Dominguez, German; Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris; Hannaford, Blake; Schubert, Rainer; Fritscher, Karl D

    2017-05-01

    Automated delineation of structures and organs is a key step in medical imaging. However, due to the large number and diversity of structures and the large variety of segmentation algorithms, a consensus is lacking as to which automated segmentation method works best for certain applications. Segmentation challenges are a good approach for unbiased evaluation and comparison of segmentation algorithms. In this work, we describe and present the results of the Head and Neck Auto-Segmentation Challenge 2015, a satellite event at the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2015 conference. Six teams participated in a challenge to segment nine structures in the head and neck region of CT images: brainstem, mandible, chiasm, bilateral optic nerves, bilateral parotid glands, and bilateral submandibular glands. This paper presents the quantitative results of this challenge using multiple established error metrics and a well-defined ranking system. The strengths and weaknesses of the different auto-segmentation approaches are analyzed and discussed. The Head and Neck Auto-Segmentation Challenge 2015 was a good opportunity to assess the current state-of-the-art in segmentation of organs at risk for radiotherapy treatment. Participating teams had the possibility to compare their approaches to other methods under unbiased and standardized circumstances. The results demonstrate a clear tendency toward more general purpose and fewer structure-specific segmentation algorithms. © 2017 American Association of Physicists in Medicine.

  13. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    Science.gov (United States)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
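
    One simple way to operationalize CWT-based pattern detection of this kind is to threshold the coefficient energy over a band of scales; the sketch below does exactly that, with the scale band, mother wavelet, and threshold factor chosen arbitrarily rather than adaptively selected as in the paper:

        import numpy as np
        import pywt

        def cwt_band_energy(eeg, scales=np.arange(8, 32), wavelet="morl"):
            # Coefficient energy summed over a band of scales, per time sample.
            coefs, _ = pywt.cwt(eeg, scales, wavelet)
            return (np.abs(coefs) ** 2).sum(axis=0)

        def flag_events(eeg, k=3.0):
            # Mark samples whose band energy exceeds mean + k standard deviations.
            e = cwt_band_energy(eeg)
            return e > e.mean() + k * e.std()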

  14. MUSCLE MRI SEGMENTATION USING RANDOM WALKER METHOD

    Directory of Open Access Journals (Sweden)

    A. V. Shukelovich

    2013-01-01

    Full Text Available A technique for constructing marker sets for muscle MRI segmentation using the random walker approach is introduced. The possibility of reducing the amount of the clinician's manual labor and of optimizing the random walker algorithm is studied.

  15. Metric conjoint segmentation methods : A Monte Carlo comparison

    NARCIS (Netherlands)

    Vriens, M; Wedel, M; Wilms, T

    The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared

  16. Wavelet-based moment invariants for pattern recognition

    Science.gov (United States)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  17. Wavelet-based multifractal analysis of laser biopsy imagery

    CERN Document Server

    Jagtap, Jaidip; Panigrahi, Prasanta K; Pradhan, Asima

    2011-01-01

    In this work, we report a wavelet-based multifractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in the transmission mode when illuminated by laser light (He-Ne 632.8 nm laser). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of the organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during di...

  18. A Wavelet-Based Approach to Fall Detection

    Directory of Open Access Journals (Sweden)

    Luca Palmerini

    2015-05-01

    Full Text Available Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others) considering different fall phases in order to improve the performance of fall detection algorithms.
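
    A minimal sketch of the prototype comparison, assuming equally long and roughly aligned acceleration windows: the correlation between the wavelet coefficients of a recorded window and those of the average fall pattern serves as the similarity feature. The wavelet and decomposition level are illustrative choices:

        import numpy as np
        import pywt

        def wavelet_similarity(window, prototype, wavelet="db4", level=3):
            # Flatten all subband coefficients into one vector per signal.
            a = np.concatenate(pywt.wavedec(window, wavelet, level=level))
            b = np.concatenate(pywt.wavedec(prototype, wavelet, level=level))
            n = min(len(a), len(b))
            # Pearson correlation as the fall-similarity feature.
            return np.corrcoef(a[:n], b[:n])[0, 1]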

  19. EXPLORING WEAK AND OVERLAPPED RETURNS OF A LIDAR WAVEFORM WITH A WAVELET-BASED ECHO DETECTOR

    Directory of Open Access Journals (Sweden)

    C. K. Wang

    2012-08-01

    Full Text Available Full waveform data recording the reflected laser signal from ground objects have been provided by some commercial airborne LIDAR systems in the last few years. Waveform data enable users to explore more information and characteristics of the earth surface than the conventional LIDAR point cloud. An important application is to extract extra point clouds from waveform data in addition to the point cloud generated by the online process of echo detection. Some difficult-to-detect points, which may be important to topographic mapping, can be rediscovered from waveform data. The motivation of this study is to explore weak and overlapped returns of a waveform. This paper presents a wavelet-based echo detection algorithm, which is compared with the zero-crossing detection method for evaluation. Simulated waveforms degraded with different levels of noise are generated to test the limitations of the detector. The experimental results show that the wavelet-based detector outperformed the zero-crossing detector in both difficult-to-detect cases. The detector is also applied to a real waveform dataset. In addition to the total number of echoes provided by the instrument, the detector found 18% more echoes. The proposed detector is significant in finding weak and overlapped returns from waveforms.

  20. Incorporation of wavelet-based denoising in iterative deconvolution for partial volume correction in whole-body PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Boussion, N.; Cheze Le Rest, C.; Hatt, M.; Visvikis, D. [INSERM, U650, Laboratoire de Traitement de l' Information Medicale (LaTIM) CHU MORVAN, Brest (France)

    2009-07-15

    Partial volume effects (PVEs) are consequences of the limited resolution of emission tomography. The aim of the present study was to compare two new voxel-wise PVE correction algorithms based on deconvolution and wavelet-based denoising. Deconvolution was performed using the Lucy-Richardson and the Van-Cittert algorithms. Both of these methods were tested using simulated and real FDG PET images. Wavelet-based denoising was incorporated into the process in order to eliminate the noise observed in classical deconvolution methods. Both deconvolution approaches led to significant intensity recovery, but the Van-Cittert algorithm provided images of inferior qualitative appearance. Furthermore, this method added massive levels of noise, even with the associated use of wavelet-denoising. On the other hand, the Lucy-Richardson algorithm combined with the same denoising process gave the best compromise between intensity recovery, noise attenuation and qualitative aspect of the images. The appropriate combination of deconvolution and wavelet-based denoising is an efficient method for reducing PVEs in emission tomography. (orig.)
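
    The combination described above can be sketched as alternating Lucy-Richardson iterations (here via scikit-image) with a wavelet-domain soft-threshold denoising pass. The PSF, the outer/inner iteration split, and the universal threshold are illustrative assumptions, not the authors' tuned pipeline:

        import numpy as np
        import pywt
        from skimage.restoration import richardson_lucy

        def denoise2d(img, wavelet="db2", level=2):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            # Noise estimate from the finest diagonal detail band.
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(img.size))
            new = [coeffs[0]] + [tuple(pywt.threshold(d, t, mode="soft") for d in lvl)
                                 for lvl in coeffs[1:]]
            return pywt.waverec2(new, wavelet)

        def pve_correct(img, psf, outer=5, inner=10):
            est = img.astype(float)
            for _ in range(outer):
                est = richardson_lucy(est, psf, num_iter=inner, clip=False)
                # Suppress the noise amplified by deconvolution; crop any padding.
                est = denoise2d(est)[:img.shape[0], :img.shape[1]]
            return est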

  1. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    Energy Technology Data Exchange (ETDEWEB)

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (μCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (μCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and μCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

  2. A comparison of accurate automatic hippocampal segmentation methods.

    Science.gov (United States)

    Zandifar, Azar; Fonov, Vladimir; Coupé, Pierrick; Pruessner, Jens; Collins, D Louis

    2017-07-15

    The hippocampus is one of the first brain structures affected by Alzheimer's disease (AD). While many automatic methods for hippocampal segmentation exist, few studies have compared them on the same data. In this study, we compare four fully automated hippocampal segmentation methods in terms of their conformity with manual segmentation and their ability to be used as an AD biomarker in clinical settings. We also apply error correction to the four automatic segmentation methods, and complete a comprehensive validation to investigate differences between the methods. The effect size and classification performance is measured for AD versus normal control (NC) groups and for stable mild cognitive impairment (sMCI) versus progressive mild cognitive impairment (pMCI) groups. Our study shows that the nonlinear patch-based segmentation method with error correction is the most accurate automatic segmentation method and yields the most conformity with manual segmentation (κ=0.894). The largest effect size between AD versus NC and sMCI versus pMCI is produced by FreeSurfer with error correction. We further show that, using only hippocampal volume, age, and sex as features, the area under the receiver operating characteristic curve reaches up to 0.8813 for AD versus NC and 0.6451 for sMCI versus pMCI. However, the automatic segmentation methods are not significantly different in their performance. Copyright © 2017. Published by Elsevier Inc.

  3. Wavelet-based EEG processing for computer-aided seizure detection and epilepsy diagnosis

    Directory of Open Access Journals (Sweden)

    Krishnaveni

    2017-01-01

    Full Text Available Many neurological disorders are very difficult to detect. One such neurological disorder which we are going to discuss in this paper is epilepsy. Epilepsy means a sudden change in the behavior of a human being for a short period of time. This is caused by seizures in the brain. Much research is ongoing to detect epilepsy by analyzing the EEG. One such method of epilepsy detection is proposed in this paper. This technique employs the Discrete Wavelet Transform (DWT) method for pre-processing, Approximate Entropy (ApEn) to extract features and an Artificial Neural Network (ANN) for classification. This paper presents a detailed survey of various methods that are being used for epilepsy detection and also proposes a wavelet-based epilepsy detection method.

  4. Segmentation of myocardium from cardiac MR images using a novel dynamic programming based segmentation method.

    Science.gov (United States)

    Qian, Xiaohua; Lin, Yuan; Zhao, Yue; Wang, Jing; Liu, Jing; Zhuang, Xiahai

    2015-03-01

    Myocardium segmentation in cardiac magnetic resonance (MR) images plays a vital role in clinical diagnosis of the cardiovascular diseases. Because of the low contrast and large variation in intensity and shapes, myocardium segmentation has been a challenging task. A dynamic programming (DP) based segmentation method, incorporating the likelihood and shape information of the myocardium, is developed for segmenting myocardium in cardiac MR images. The endocardium, i.e., the left ventricle blood cavity, is segmented for initialization, and then the optimal epicardium contour is determined using the polar-transformed image and DP scheme. In the DP segmentation scheme, three techniques are proposed to improve the segmentation performance: (1) the likelihood image of the myocardium is constructed to define the external cost in the DP, thus the cost function incorporates prior probability estimation; (2) the adaptive search range is introduced to determine the polar-transformed image, thereby excluding irrelevant tissues; (3) the connectivity constrained DP algorithm is developed to obtain an optimal closed contour. Four metrics, including the Dice metric (Dice), root mean squared error (RMSE), reliability, and correlation coefficient, are used to assess the segmentation accuracy. The authors evaluated the performance of the proposed method on a private dataset and the MICCAI 2009 challenge dataset. The authors also explored the effects of the three new techniques of the DP scheme in the proposed method. For the qualitative evaluation, the segmentation results of the proposed method were clinically acceptable. For the quantitative evaluation, the mean (Dice) for the endocardium and epicardium was 0.892 and 0.927, respectively; the mean RMSE was 2.30 mm for the endocardium and 2.39 mm for the epicardium. In addition, the three new techniques in the proposed DP scheme, i.e., the likelihood image of the myocardium, the adaptive search range, and the connectivity constrained

  5. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg--Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  6. Wavelet-Based MPNLMS Adaptive Algorithm for Network Echo Cancellation

    Directory of Open Access Journals (Sweden)

    Hongyang Deng

    2007-03-01

    Full Text Available The μ-law proportionate normalized least mean square (MPNLMS) algorithm has been proposed recently to solve the slow convergence problem of the proportionate normalized least mean square (PNLMS) algorithm after its initial fast converging period. For colored input, however, it may become slow when the eigenvalue spread of the input signal's autocorrelation matrix is large. In this paper, we use the wavelet transform to whiten the input signal. Due to the good time-frequency localization property of the wavelet transform, a sparse impulse response in the time domain is also sparse in the wavelet domain. By applying the MPNLMS technique in the wavelet domain, fast convergence for colored input is observed. Furthermore, we show that some nonsparse impulse responses may become sparse in the wavelet domain. This motivates the usage of the wavelet-based MPNLMS algorithm. Advantages of this approach are documented.
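
    A stripped-down sketch of adaptation in the wavelet domain: the tap-input vector is orthogonally transformed with a DWT before an NLMS-style update, which is the whitening step that motivates the wavelet-based MPNLMS. The μ-law proportionate gain of MPNLMS itself is omitted for brevity, so this is plain wavelet-domain NLMS rather than the full algorithm:

        import numpy as np
        import pywt

        def dwt_vector(x, wavelet="haar"):
            # Full orthogonal decomposition; for a power-of-two length the
            # concatenated coefficients have the same length as the input.
            return np.concatenate(pywt.wavedec(x, wavelet))

        def wavelet_nlms(x, d, taps=64, mu=0.5, eps=1e-6):
            w = np.zeros(taps)                      # weights in the wavelet domain
            y = np.zeros(len(d))
            for n in range(taps, len(d)):
                u = dwt_vector(x[n - taps:n])       # whitened regressor
                y[n] = w @ u
                e = d[n] - y[n]
                w += mu * e * u / (u @ u + eps)     # normalized update
            return y, w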

  8. From cardinal spline wavelet bases to highly coherent dictionaries

    Energy Technology Data Exchange (ETDEWEB)

    Andrle, Miroslav; Rebollo-Neira, Laura [Aston University, Birmingham B4 7ET (United Kingdom)

    2008-05-02

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)

  9. Research on the Sparse Representation for Gearbox Compound Fault Features Using Wavelet Bases

    Directory of Open Access Journals (Sweden)

    Chunyan Luo

    2015-01-01

    Full Text Available Research on gearbox fault diagnosis has been gaining increasing attention in recent years, especially single fault diagnosis. In engineering practice, there is often more than one fault in a gearbox, which manifests as a compound fault. Hence, gearbox compound fault diagnosis is equally important. Both bearing and gear faults in the gearbox tend to result in different kinds of transient impulse responses in the captured signal, and thus it is necessary to propose a suitable approach for compound fault diagnosis. Sparse representation is one of the effective methods for feature extraction from strong background noise. Therefore, sparse representation under wavelet bases for compound fault feature extraction is developed in this paper. With the proposed method, the different transient features of both bearings and gears can be separated and extracted. Both the simulated study and the practical application to a gearbox with a compound fault verify the effectiveness of the proposed method.

  10. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  11. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis.

    Science.gov (United States)

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development.

  12. Improving level set method for fast auroral oval segmentation.

    Science.gov (United States)

    Yang, Xi; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2014-07-01

    Auroral oval segmentation from ultraviolet imager images is of significance in the field of space physics. Compared with various existing image segmentation methods, the level set is a promising auroral oval segmentation method with satisfactory precision. However, traditional level set methods are time consuming, which is not suitable for the processing of large aurora image databases. For this purpose, an improved level set method is proposed for fast auroral oval segmentation. The proposed algorithm combines four strategies to solve the four problems leading to the high time complexity. The first two strategies, including our shape knowledge-based initial evolving curve and neighbor embedded level set formulation, can not only accelerate the segmentation process but also improve the segmentation accuracy. The latter two strategies, including the universal lattice Boltzmann method and the sparse field method, can further reduce the time cost with an unlimited time step and narrow band computation. Experimental results illustrate that the proposed algorithm achieves satisfactory performance for auroral oval segmentation within a very short processing time.

  13. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is among the most important and widely used applications nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, first, into speech and nonspeech segments by using bagged support vector machines; the nonspeech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data are used for training the classifier; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.

  14. Wavelet based methods for improved wind profiler signal processing

    Directory of Open Access Journals (Sweden)

    V. Lehmann

    2001-08-01

    Full Text Available In this paper, we apply wavelet thresholding for automatically removing ground and intermittent clutter (airplane echoes) from wind profiler radar data. Using the concept of discrete multi-resolution analysis and non-parametric estimation theory, we develop wavelet domain thresholding rules, which allow us to identify the coefficients relevant for clutter and to suppress them in order to obtain filtered reconstructions. Key words: Meteorology and atmospheric dynamics (instruments and techniques) – Radio science (remote sensing; signal processing)
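
    A minimal sketch of the idea under simplified rules: zero the approximation coefficients (slowly varying ground clutter) and clip unusually large detail coefficients (intermittent transients) before reconstruction. The wavelet, level, and clipping factor are assumptions standing in for the paper's statistically derived thresholding rules:

        import numpy as np
        import pywt

        def declutter(ts, wavelet="sym8", level=5, k=4.0):
            coeffs = pywt.wavedec(ts, wavelet, level=level)
            coeffs[0] = np.zeros_like(coeffs[0])          # drop ground clutter (near-DC)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            # Clip coefficients far above the noise floor (intermittent clutter).
            coeffs[1:] = [np.clip(c, -k * sigma, k * sigma) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(ts)]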

  15. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    Science.gov (United States)

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121

  16. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Torres, C.; Streppa, L. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d' Italie, Université de Lyon, 69007 Lyon (France); Arneodo, A.; Argoul, F. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d' Italie, Université de Lyon, 69007 Lyon (France); CNRS, UMR5798, Laboratoire Ondes et Matière d' Aquitaine, Université de Bordeaux, 351 Cours de la Libération, 33405 Talence (France); Argoul, P. [Université Paris-Est, Ecole des Ponts ParisTech, SDOA, MAST, IFSTTAR, 14-20 Bd Newton, Cité Descartes, 77420 Champs sur Marne (France)

    2016-01-18

    Compared to active microrheology where a known force or modulation is periodically imposed to a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  17. Bayesian wavelet-based analysis of functional magnetic resonance time series.

    Science.gov (United States)

    Costafreda, Sergi G; Barker, Gareth J; Brammer, Michael J

    2009-05-01

    Wavelet methods for image regularization offer a data-driven alternative to Gaussian smoothing in functional magnetic resonance (fMRI) analysis. Their impact has been limited by the difficulties in integrating regularization in the wavelet domain and inference in the image domain, precluding the probabilistic decision on which areas are activated by a task. Here we present an integrated framework for Bayesian estimation and regularization in wavelet space that allows the usual voxelwise hypothesis testing. This framework is flexible, being an adaptation to fMRI time series of a more general wavelet-based functional mixed-effect model. Through testing on a combination of simulated and real fMRI data, we show evidence of improved signal recovery, without compromising test accuracy in image space.

  18. An iterative method for airway segmentation using multiscale leakage detection

    Science.gov (United States)

    Nadeem, Syed Ahmed; Jin, Dakai; Hoffman, Eric A.; Saha, Punam K.

    2017-02-01

    There are growing applications of quantitative computed tomography for the assessment of pulmonary diseases by characterizing the lung parenchyma as well as the bronchial tree. Many large multi-center studies incorporating lung imaging as a study component are interested in phenotypes relating airway branching patterns, wall thickness, and other morphological measures. To our knowledge, there are no fully automated airway tree segmentation methods free of the need for user review. Even when there are failures in only a small fraction of segmentation results, the airway tree masks must be manually reviewed for all results, which is laborious considering that several thousand image data sets are evaluated in large studies. In this paper, we present a novel CT-based airway tree segmentation algorithm using iterative multi-scale leakage detection, freezing, and active seed detection. The method is fully automated, requiring no manual inputs or post-segmentation editing. It uses simple intensity-based connectivity and a new leakage detection algorithm to iteratively grow an airway tree starting from an initial seed inside the trachea. It begins with a conservative threshold and then iteratively shifts toward more generous values. The method was applied on chest CT scans of ten non-smoking subjects at total lung capacity and ten at functional residual capacity. Airway segmentation results were compared to an expert's manually edited segmentations. Branch-level accuracy of the new segmentation method was examined along five standardized segmental airway paths (RB1, RB4, RB10, LB1, LB10) and two generations beyond these branches. The method successfully detected all branches up to two generations beyond these segmental bronchi with no visual leakages.

  19. A supervoxel-based segmentation method for prostate MR images

    Science.gov (United States)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a "supervoxel"-based method for prostate segmentation. The prostate segmentation problem is considered as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct a smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to get a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9%+/-3.2%. The segmentation method can be used not only for the prostate but also for other organs.

  20. Color image segmentation by integrating texture measure into JSEG method

    Science.gov (United States)

    Sheng, Qinghong; Zhang, Jianqing; Xiao, Hui

    2007-11-01

    We present a new color image segmentation method that combines texture measures and the JSEG (J measure based JSEGmentation) algorithm. In particular, two major contributions are set forth in this paper. (1) The two measures defined in JSEG and the Laws texture energy are discussed respectively, and we find that the over-segmentation problem of JSEG can be attributed partly to the absence of color discontinuity in the J measure. (2) A new measure is proposed by integrating the Laws texture energy measures into the J measure, on which our segmentation method is based. The new segmentation method, taking account of both textural homogeneity and color discontinuity in local regions, can be used to detect proper edges at the boundaries of shadows and highlights. Performance improvement due to the proposed modification was demonstrated on a variety of real color images.

  1. A Finite Segment Method for Skewed Box Girder Analysis

    Directory of Open Access Journals (Sweden)

    Xingwei Xue

    2018-01-01

    Full Text Available A finite segment method is presented to analyze the mechanical behavior of skewed box girders. By modeling the top and bottom plates of the segments with a skew plate beam element under an inclined coordinate system and the webs with a normal plate beam element, a spatial elastic displacement model for the skewed box girder is constructed, which can satisfy the compatibility condition at the corners of the cross section for box girders. The formulation of the finite segment is developed based on the variational principle. The major advantage of the proposed approach, in comparison with the finite element method, is that it can simplify a three-dimensional structure into a one-dimensional structure for structural analysis, which results in significant savings in computational time. Finally, the accuracy and efficiency of the proposed finite segment method are verified by a model test.

  2. A Searching Method of Candidate Segmentation Point in SPRINT Classification

    Directory of Open Access Journals (Sweden)

    Zhihao Wang

    2016-01-01

    Full Text Available The SPRINT algorithm is a classical algorithm for building a decision tree, which is a widely used method of data classification. However, the SPRINT algorithm has a high computational cost in the calculation of attribute segmentation. In this paper, an improved SPRINT algorithm is proposed, which searches for better candidate segmentation points for discrete and continuous attributes. The experimental results demonstrate that the proposed algorithm can reduce the computational cost and improve the efficiency of the algorithm by improving the segmentation of continuous and discrete attributes.
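
    For the continuous-attribute case, SPRINT-style candidate search amounts to sorting once and scanning the midpoints between consecutive values; a sketch of this scan, scoring each candidate split by Gini gain, follows (the paper's improved search strategy itself is not reproduced here):

        import numpy as np

        def gini(counts):
            p = counts / counts.sum()
            return 1.0 - (p ** 2).sum()

        def best_split(values, labels):
            order = np.argsort(values)
            v, y = values[order], labels[order]
            classes = np.unique(y)
            left = np.zeros(len(classes))
            right = np.array([(y == c).sum() for c in classes], dtype=float)
            best_gain, best_point, total = -1.0, None, gini(right)
            for i in range(len(v) - 1):
                # Move one record from the right partition to the left.
                k = np.where(classes == y[i])[0][0]
                left[k] += 1
                right[k] -= 1
                if v[i] == v[i + 1]:
                    continue                      # no valid threshold between ties
                nl, nr = i + 1, len(v) - i - 1
                gain = total - (nl * gini(left) + nr * gini(right)) / len(v)
                if gain > best_gain:
                    best_gain, best_point = gain, (v[i] + v[i + 1]) / 2.0
            return best_point, best_gain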

  3. An efficient neural network based method for medical image segmentation.

    Science.gov (United States)

    Torbati, Nima; Ayatollahi, Ahmad; Kermani, Ali

    2014-01-01

    The aim of this research is to propose a new neural network based method for medical image segmentation. Firstly, a modified self-organizing map (SOM) network, named moving average SOM (MA-SOM), is utilized to segment medical images. After the initial segmentation stage, a merging process is designed to connect the objects of a joint cluster together. A two-dimensional (2D) discrete wavelet transform (DWT) is used to build the input feature space of the network. The experimental results show that MA-SOM is robust to noise and it determines the input image pattern properly. The segmentation results of breast ultrasound images (BUS) demonstrate that there is a significant correlation between the tumor region selected by a physician and the tumor region segmented by our proposed method. In addition, the proposed method segments X-ray computerized tomography (CT) and magnetic resonance (MR) head images much better than the incremental supervised neural network (ISNN) and SOM-based methods. © 2013 Published by Elsevier Ltd.

  4. Neural cell image segmentation method based on support vector machine

    Science.gov (United States)

    Niu, Shiwei; Ren, Kan

    2015-10-01

    In the analysis of neural cell images gained by optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact caused by the low contrast ratio between objects and background, interference from adherent and clustered cells, etc. Firstly, Morphological Filtering and the OTSU Method are applied to preprocess images for extracting the neural cells roughly. Secondly, the Stellate Vector, Circularity and Histogram of Oriented Gradient (HOG) features are computed to train the SVM model. Finally, the incremental learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the SVM classifier are added to the library as positive samples for training the SVM model. Experimental results show that the proposed algorithm can achieve much better segmentation results than the classic segmentation algorithms.
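
    A minimal sketch of the classify-then-segment step, assuming hypothetical training patches: HOG features (one of the three feature types named above) feed an SVM that labels patches as cell versus background. The Stellate Vector and circularity features and the incremental learning step are omitted:

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        def patch_features(patch):
            # HOG descriptor of a grayscale image patch.
            return hog(patch, orientations=8, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))

        # patches, labels: hypothetical training patches and 0/1 cell labels.
        # X = np.vstack([patch_features(p) for p in patches])
        # clf = SVC(kernel="rbf").fit(X, labels)
        # is_cell = clf.predict(patch_features(new_patch).reshape(1, -1))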

  5. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and using parallel memory access. A proof-of-concept implementation with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed with the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  7. Feature-oriented multiple description wavelet-based image coding.

    Science.gov (United States)

    Liu, Yilong; Oraintara, Soontorn

    2007-01-01

    We address the problem of resilient image coding over error-prone networks where packet losses occur. Recent literature highlights the multiple description coding (MDC) as a promising approach to solve this problem. In this paper, we introduce a novel wavelet-based multiple description image coder, referred to as the feature-oriented MDC (FO-MDC). The proposed multiple description (MD) coder exploits the statistics of the wavelet coefficients and identifies the subsets of samples that are sensitive to packet loss. A joint optimization between tree-pruning and quantizer selection in the rate-distortion sense is used in order to allocate more bits to these sensitive coefficients. When compared with the state-of-the-art MD scalar quantization coder, the proposed FO-MDC yields a more efficient central-side distortion tradeoff control mechanism. Furthermore, it proves to be more robust for image transmission even with high packet loss ratios, which makes it suitable for protecting multimedia streams over packet-erasure channels.

  8. Wavelet-based characterization of gait signal for neurological abnormalities.

    Science.gov (United States)

    Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S

    2015-02-01

    Studies conducted by the World Health Organization (WHO) indicate that over one billion suffer from neurological disorders worldwide, and lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control for facilitating their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 Amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Image segmentation with a finite element method

    DEFF Research Database (Denmark)

    Bourdin, Blaise

    1999-01-01

    regularization results, make it possible to devise a finite element resolution method. First, the Mumford-Shah functional is introduced and some existing results are quoted. Then, a discrete formulation for the Mumford-Shah problem is proposed and its $\Gamma$-convergence is proved. Finally, some...

  10. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated framework is proposed for efficient prediction of rotorcraft and airframe noise. A novel wavelet-based multiresolution technique and high-order...

  11. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  12. Method for implementation of recursive hierarchical segmentation on parallel computers

    Science.gov (United States)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.

  13. The Iris biometric feature segmentation using finite element method

    Directory of Open Access Journals (Sweden)

    David Ibitayo LANLEGE

    2015-05-01

    Full Text Available This manuscript presents a method for the segmentation of iris images based on a deformable contour (active contour) paradigm. The deformable contour is a novel approach in image segmentation. One type of active contour is the snake, a parametric curve defined within the domain of the image. Snake properties are specified through a function called the energy functional, which is expressed in terms of partial differential equations. The partial differential equation is the controlling engine of the active contour; in this project, a Finite Element Method (standard Galerkin method) implementation of the deformable model is presented.

  14. Interactive segmentation method with graph cut and SVMs

    Science.gov (United States)

    Zhang, Xing; Tian, Jie; Xiang, Dehui; Wu, Yongfang

    2010-03-01

    Medical image segmentation is a prerequisite for visualization and diagnosis. State-of-the-art techniques of image segmentation concentrate on interactive methods, which are more robust than automatic techniques and more efficient than manual delineation. In this paper, we present an interactive segmentation method for medical images that relates graph cut to Support Vector Machines (SVMs). The proposed method is a hybrid method that combines three aspects. First, the user selects seed points to paint object and background using a "brush", and then the labeled pixel/voxel data, including the intensity value and gradient of the sampled points, are used as the training set for the SVM training process. Second, the trained SVM model is employed to predict the probability that each unlabeled pixel/voxel belongs to each class. Third, unlike the traditional Gaussian Mixture Model (GMM) definition of region properties in the graph cut method, the negative log-likelihood of the probability obtained for each pixel/voxel from the SVM model is used to define t-links in the graph cut method, and the classical max-flow/min-cut algorithm is applied to minimize the energy function. Finally, the proposed method is applied to 2D and 3D medical image segmentation. The experimental results demonstrate the availability and effectiveness of the proposed method.
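
    The third aspect, turning SVM probabilities into t-link weights, can be sketched as follows; the feature arrays and seed samples are hypothetical, and the graph construction with n-links and max-flow is left to an external library:

        import numpy as np
        from sklearn.svm import SVC

        def tlink_weights(features, fg_samples, bg_samples):
            # Train a probabilistic SVM on user-painted object/background seeds.
            X = np.vstack([fg_samples, bg_samples])
            y = np.concatenate([np.ones(len(fg_samples)), np.zeros(len(bg_samples))])
            clf = SVC(probability=True).fit(X, y)
            p = clf.predict_proba(features)       # columns follow clf.classes_: [0, 1]
            eps = 1e-9
            w_obj = -np.log(p[:, 1] + eps)        # -log P(object): one t-link capacity
            w_bg = -np.log(p[:, 0] + eps)         # -log P(background): the other t-link
            return w_obj, w_bg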

  15. Image Segmentation Method Using Thresholds Automatically Determined from Picture Contents

    Directory of Open Access Journals (Sweden)

    Yuan Been Chen

    2009-01-01

    Full Text Available Image segmentation has become an indispensable task in many image and video applications. This work develops an image segmentation method based on the modified edge-following scheme where different thresholds are automatically determined according to areas with varied contents in a picture, thus yielding suitable segmentation results in different areas. First, the iterative threshold selection technique is modified to calculate the initial-point threshold of the whole image or a particular block. Second, the quad-tree decomposition that starts from the whole image employs gray-level gradient characteristics of the currently-processed block to decide further decomposition or not. After the quad-tree decomposition, the initial-point threshold in each decomposed block is adopted to determine initial points. Additionally, the contour threshold is determined based on the histogram of gradients in each decomposed block. Particularly, contour thresholds could eliminate inappropriate contours to increase the accuracy of the search and minimize the required searching time. Finally, the edge-following method is modified and then conducted based on initial points and contour thresholds to find contours precisely and rapidly. By using the Berkeley segmentation data set with realistic images, the proposed method is demonstrated to take the least computational time for achieving fairly good segmentation performance in various image types.
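
    The iterative threshold selection step mentioned above can be sketched as alternating between splitting the pixels at the current threshold and re-centering the threshold midway between the two class means until it stabilizes (a basic ISODATA-style iteration; the paper's per-block modification is not shown):

        import numpy as np

        def iterative_threshold(gray, tol=0.5):
            t = float(gray.mean())                # start from the global mean
            while True:
                lo, hi = gray[gray <= t], gray[gray > t]
                if lo.size == 0 or hi.size == 0:
                    return t                      # degenerate split; keep current value
                t_new = 0.5 * (lo.mean() + hi.mean())
                if abs(t_new - t) < tol:
                    return t_new
                t = t_new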

  16. A Wavelet-Based Area Parameter for Indirectly Estimating Copper Concentration in Carex Leaves from Canopy Reflectance

    Directory of Open Access Journals (Sweden)

    Junjie Wang

    2015-11-01

    Full Text Available Due to the absence of evident absorption features and low concentrations, the copper (Cu) concentration in plant leaves has rarely been estimated from hyperspectral remote sensing data. The capability of remotely-sensed estimation of foliar Cu concentrations largely depends on its close relation to foliar chlorophyll concentration. To enhance the subtle spectral changes related to chlorophyll concentration under Cu stress, this study described a wavelet-based area parameter, SWT(605−720): the sum of the reconstructed detail reflectance at the fourth decomposition level over 605−720 nm, obtained using the discrete wavelet transform of the canopy hyperspectral reflectance (350−2500 nm, N = 71) of Carex (C. cinerascens). The results showed that Cu concentrations had a strong negative correlation with chlorophyll concentrations (r = -0.719, p < 0.001). Based on 1000 random dataset partitioning experiments, the 1000 linear calibration models provided a mean R²Val (determination coefficient of validation) value of 0.706 and an RPD (residual prediction deviation) value of 1.75 for Cu estimation. The bootstrapping and ANOVA test results showed that SWT(605−720) significantly (p < 0.05) outperformed published chlorophyll-related and wavelet-based spectral parameters. It was concluded that the wavelet-based area parameter (i.e., SWT(605−720)) has the potential to indirectly estimate Cu concentrations in Carex leaves through the strong correlation between Cu and chlorophyll. The method presented in this pilot study may be used to estimate the concentrations of other heavy metals. However, further research is needed to test its transferability and robustness for estimating Cu concentrations in other plant species under different biological and environmental conditions.
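
    A minimal sketch of such an area parameter, assuming PyWavelets and hypothetical `reflectance`/`wavelengths` arrays sampled at 1 nm (the wavelet family here is a guess; the paper's exact choice may differ):

```python
import numpy as np
import pywt

def swt_area(reflectance, wavelengths, wavelet='db4', level=4, band=(605, 720)):
    """Sum of the level-4 reconstructed detail reflectance over a wavelength band."""
    coeffs = pywt.wavedec(reflectance, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    detail_only = [np.zeros_like(c) for c in coeffs]
    detail_only[1] = coeffs[1]                                # keep only the cD4 band
    d4 = pywt.waverec(detail_only, wavelet)[:len(reflectance)]
    mask = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return d4[mask].sum()
```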

  17. MSmetrix: accurate untrained method for longitudinal lesion segmentation

    OpenAIRE

    Jain, Saurabh; Sima, Diana; Maertens, Anke; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2015-01-01

    Jain S., Sima D.M., Maertens A., Van Huffel S., Maes F., Smeets D., ''MSmetrix: accurate untrained method for longitudinal lesion segmentation'', Multiple sclerosis journal, vol. 21 (S11), pp. 193-194, 2015 (31st congress of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS), October 7-10, 2015, Barcelona, Spain).

  18. Wavelet-based image fusion in multi-view three-dimensional microscopy.

    Science.gov (United States)

    Rubio-Guivernau, Jose L; Gurchenkov, Vasily; Luengo-Oroz, Miguel A; Duloquin, Louise; Bourgine, Paul; Santos, Andres; Peyrieras, Nadine; Ledesma-Carbayo, Maria J

    2012-01-15

    Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined or fused in order to overcome their individual limitations. View fusion is still an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, due to wavelet decomposition properties, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions from live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and details when compared with any of the acquired volumes. The proposed method does not need knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods. The described method was implemented in Matlab (The Mathworks, Inc., USA) and a graphic user interface was developed in Java. The software, together with two sample datasets, is available at http://www.die.upm.es/im/software/SPIMFusionGUI.zip. A public release, free of charge for non-commercial use, is planned after the publication of this article.
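
    The max-absolute-coefficient fusion rule typical of wavelet fusion methods of this kind can be sketched with PyWavelets. This is a generic two-view illustration (the published tool fuses an arbitrary number of views); `view_a` and `view_b` are hypothetical co-registered 2D slices of equal shape:

```python
import numpy as np
import pywt

def wavelet_fuse(view_a, view_b, wavelet='db2', level=3):
    """Fuse two co-registered views: average the approximation band, and for
    each detail band keep the coefficient with the larger magnitude."""
    ca = pywt.wavedec2(view_a, wavelet, level=level)
    cb = pywt.wavedec2(view_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                     # average approximations
    for a_bands, b_bands in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(a_bands, b_bands)))
    return pywt.waverec2(fused, wavelet)
```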

  19. Multibaseline polarimetric synthetic aperture radar tomography of forested areas using wavelet-based distribution compressive sensing

    Science.gov (United States)

    Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong

    2015-01-01

    The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distribution compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly invert the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.

  20. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, much of this bandwidth and of the computational power used to decode the video is wasted, because the user only watches a small portion (i.e., viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video streaming methods.
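
    The dual-buffer decision logic can be illustrated with a toy scheduler. This is a hypothetical sketch of the idea described above, not the paper's algorithm; buffer levels are in seconds and all names are invented:

```python
class DualBufferScheduler:
    """Keep the base-layer buffer deep (robust to stalls) and the viewport
    buffer shallow, so requested tiles track the latest head orientation."""
    def __init__(self, base_target=10.0, viewport_target=2.0):
        self.base_target = base_target
        self.viewport_target = viewport_target

    def next_request(self, base_level, viewport_level, head_orientation):
        if base_level < self.base_target:          # refill low-quality base layer first
            return ('base', None)
        if viewport_level < self.viewport_target:  # then fetch high-quality viewport tiles
            return ('viewport', head_orientation)  # matched to the newest orientation
        return ('idle', None)
```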

  1. White matter hyperintensities segmentation: a new semi-automated method

    Directory of Open Access Journals (Sweden)

    Mariangela Iorio

    2013-12-01

    Full Text Available White matter hyperintensities (WMH) are brain areas of increased signal on T2-weighted or fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI) scans. In this study we present a new semi-automated method to measure WMH load that is based on the segmentation of the intensity histogram of FLAIR images. Thirty patients with Mild Cognitive Impairment with variable WMH load were enrolled. The semi-automated WMH segmentation included: removal of non-brain tissue, spatial normalization, removal of cerebellum and brain stem, spatial filtering, thresholding to segment probable WMH, manual editing for correction of false positives and negatives, generation of a WMH map, and volumetric estimation of the WMH load. Accuracy was quantitatively evaluated by comparing semi-automated and manual WMH segmentations performed by two independent raters. Differences between the two procedures were assessed using Student's t tests and similarity was evaluated using a linear regression model and the Dice Similarity Coefficient (DSC). The volumes of the manual and semi-automated segmentations did not statistically differ (t-value = -1.79, DF = 29, p = 0.839 for rater 1; t-value = 1.113, DF = 29, p = 0.2749 for rater 2) and were highly correlated (R² = 0.921, F(1,29) = 155.54, p

  2. Forward solving in Electrical Impedance Tomography with algebraic multigrid wavelet based preconditioners

    Science.gov (United States)

    Borsic, A.; Bayford, R.

    2010-04-01

    Electrical Impedance Tomography is a soft-field tomography modality, where image reconstruction is formulated as a non-linear least-squares model fitting problem. The Newton-Raphson scheme is used for actually reconstructing the image, and this involves three main steps: forward solving, computation of the Jacobian, and computation of the conductivity update. Forward solving relies typically on the finite element method, resulting in the solution of a sparse linear system. In typical three-dimensional biomedical applications of EIT, like breast, prostate, or brain imaging, it is desirable to work with sufficiently fine meshes in order to properly capture the shape of the domain and of the electrodes, and to describe the resulting electric field with accuracy. These requirements result in meshes with 100,000 nodes or more. The solution of the resulting forward problems is computationally intensive. We address this aspect by speeding up the solution of the FEM linear system by the use of efficient numeric methods and of new hardware architectures. In particular, in terms of numeric methods, we solve the forward problem using the Conjugate Gradient method, with a wavelet-based algebraic multigrid (AMG) preconditioner. This preconditioner is faster to set up than other AMG preconditioners which are not based on wavelets, uses less memory, and provides for faster convergence. We report results for a MATLAB-based prototype algorithm and we discuss details of a work in progress for a GPU implementation.
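
    The preconditioned Conjugate Gradient pattern itself is easy to demonstrate. The sketch below uses SciPy with a 1-D Laplacian as a stand-in for the FEM stiffness matrix and a simple Jacobi preconditioner in place of the paper's wavelet-based AMG:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 100_000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')  # SPD stand-in system
b = np.random.rand(n)

# Jacobi (diagonal) preconditioner; a wavelet-based AMG would replace this matvec
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)

x, info = cg(A, b, M=M, atol=1e-8)   # info == 0 signals convergence
```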

  3. Multi-scale event synchronization analysis for unravelling climate processes: a wavelet-based approach

    Science.gov (United States)

    Agarwal, Ankit; Marwan, Norbert; Rathinasamy, Maheswaran; Merz, Bruno; Kurths, Jürgen

    2017-10-01

    The temporal dynamics of climate processes are spread across different timescales and, as such, the study of these processes at only one selected timescale might not reveal the complete mechanisms and interactions within and between the (sub-)processes. To capture the non-linear interactions between climatic events, the method of event synchronization has found increasing attention recently. The main drawback with the present estimation of event synchronization is its restriction to analysing the time series at one reference timescale only. The study of event synchronization at multiple scales would be of great interest to comprehend the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform and event synchronization. Wavelets are used extensively to comprehend multi-scale processes and the dynamics of processes across various timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
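
    A simplified, illustrative form of the event-synchronization score (symmetrized and normalized here so that identical event trains score 1; the published measure differs in details such as the handling of time tolerances) might read as below. In the multi-scale version, the series are first wavelet-decomposed and the score is computed per scale:

```python
import numpy as np

def event_sync(tx, ty, tau):
    """Count events in each series that have a counterpart within +/- tau in
    the other series; return a symmetrized score in [0, 1]."""
    tx, ty = np.asarray(tx, float), np.asarray(ty, float)
    c_xy = sum(np.any(np.abs(ty - t) <= tau) for t in tx)
    c_yx = sum(np.any(np.abs(tx - t) <= tau) for t in ty)
    return (c_xy + c_yx) / (len(tx) + len(ty))
```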

  4. Experimental validation of wavelet based solution for dynamic response of railway track subjected to a moving train

    Science.gov (United States)

    Koziol, Piotr

    2016-10-01

    New approaches allowing effective analysis of the dynamic behaviour of railway structures are needed for appropriate modelling and understanding of phenomena associated with train transportation. The literature highlights the fact that nonlinear assumptions are of importance in dynamic analysis of railway tracks. This paper presents a wavelet-based semi-analytical solution for the infinite Euler-Bernoulli beam resting on a nonlinear foundation and subjected to a set of moving forces, as a representation of a railway track with a moving train, along with its preliminary experimental validation. It is shown that this model, although very simplified, with an assumption of viscous damping of the foundation, can be considered a good enough approximation of realistic structural behaviour. The steady-state response of the beam is obtained by applying the Galilean co-ordinate system and the Adomian decomposition method combined with coiflet-based approximation, leading to analytical estimation of transverse displacements. The applied approach, using parameters taken from real measurements carried out on the Polish Railways network for the fast train Pendolino EMU-250, shows the ability of the proposed method to parametrically analyse dynamic systems associated with transportation. The obtained results are in accordance with measurement data in a wide range of physical parameters, which can be treated as a validation of the developed wavelet-based approach. The conducted investigation is supplemented by several numerical examples.

  5. Segmentation of thermographic images of hands using a genetic algorithm

    Science.gov (United States)

    Ghosh, Payel; Mitchell, Melanie; Gold, Judith

    2010-01-01

    This paper presents a new technique for segmenting thermographic images using a genetic algorithm (GA). The individuals of the GA, also known as chromosomes, consist of a sequence of parameters of a level set function. Each chromosome represents a unique segmenting contour. An initial population of segmenting contours is generated based on the learned variation of the level set parameters from training images. Each segmenting contour (an individual) is evaluated for its fitness based on the texture of the region it encloses. The fittest individuals are allowed to propagate to future generations of the GA run using selection, crossover and mutation. The dataset consists of thermographic images of hands of patients suffering from upper extremity musculo-skeletal disorders (UEMSD). Thermographic images are acquired to study the skin temperature as a surrogate for the amount of blood flow in the hands of these patients. Since entire hands are not visible on these images, segmentation of the outline of the hands on these images is typically performed by a human. In this paper several different methods have been tried for segmenting thermographic images: a Gabor-wavelet-based texture segmentation method, the level set method of segmentation, and our GA, which we term LSGA because it combines level sets with genetic algorithms. The results show a comparative evaluation of the segmentation performed by all the methods. We conclude that LSGA successfully segments entire hands on images in which hands are only partially visible.
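
    The evolutionary loop itself is generic and can be sketched as follows; the fitness function here is a placeholder (in LSGA it scores the texture of the region each level-set contour encloses), and the population and chromosome sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Placeholder objective; in LSGA this would score the texture of the
    # region enclosed by the level-set contour that `params` encodes.
    return -np.sum(params ** 2)

pop = rng.normal(size=(50, 8))        # 50 chromosomes of 8 level-set parameters
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]         # selection: keep the fittest half
    cuts = rng.integers(1, pop.shape[1], size=25)
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cuts)])  # one-point crossover
    kids += rng.normal(scale=0.1, size=kids.shape)  # mutation
    pop = np.vstack([parents, kids])
```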

  6. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    Science.gov (United States)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
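
    The peaks-over-threshold step can be illustrated with SciPy. In this sketch the threshold is a plain empirical quantile standing in for the paper's wavelet-based threshold, and the returns are synthetic:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2500) * 0.01   # synthetic daily returns
losses = -returns

u = np.quantile(losses, 0.95)                 # stand-in for a wavelet-derived threshold
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0.0)

# One-day 99% VaR from the standard POT/GPD formula
n, n_u, p = len(losses), len(excesses), 0.99
var_99 = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1)
```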

  7. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT", part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat", wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the
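
    The core correlation step can be sketched directly; this is a generic illustration of Marr-wavelet correlation on a binned counts image, not the WAVDETECT implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def marr_kernel(sigma):
    """2-D Marr ("Mexican Hat") wavelet sampled on an odd-sized grid."""
    size = int(8 * sigma) | 1
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = (x**2 + y**2) / (2 * sigma**2)
    return (1 - r2) * np.exp(-r2) / (np.pi * sigma**4)

def wavelet_correlate(counts_image, sigma):
    # The kernel is symmetric, so convolution equals correlation here.
    return fftconvolve(counts_image, marr_kernel(sigma), mode='same')
```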

  8. Performance evaluation of wavelet-based face verification on a PDA recorded database

    Science.gov (United States)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still and streaming video clips and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian life. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or has been destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  9. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics

    Science.gov (United States)

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P.; Ismail, Ahmed E.

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.

  10. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    Directory of Open Access Journals (Sweden)

    Evgeniya Gerasimova

    2014-05-01

    Full Text Available Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development.

  11. Reproducibility of MRI segmentation using a feature space method

    Science.gov (United States)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Scarpace, Lisa; Murnock, Tanya

    1998-06-01

    This paper presents reproducibility studies for the segmentation results obtained by our optimal MRI feature space method. The steps of the work accomplished are as follows. (1) Eleven patients with brain tumors were imaged by a 1.5 T General Electric Signa MRI System. Four T2- weighted and two T1-weighted images (before and after Gadolinium injection) were acquired for each patient. (2) Images of a slice through the center of the tumor were selected for processing. (3) Patient information was removed from the image headers and new names (unrecognizable by the image analysts) were given to the images. These images were blindly analyzed by the image analysts. (4) Segmentation results obtained by the two image analysts at two time points were compared to assess the reproducibility of the segmentation method. For each tissue segmented in each patient study, a comparison was done by kappa statistics and a similarity measure (an approximation of kappa statistics used by other researchers), to evaluate the number of pixels that were in both of the segmentation results obtained by the two image analysts (agreement) relative to the number of pixels that were not in both (disagreement). An overall agreement comparison was done by finding means and standard deviations of kappa statistics and the similarity measure found for each tissue type in the studies. The kappa statistics for white matter was the largest (0.80) followed by those of gray matter (0.68), partial volume (0.67), total lesion (0.66), and CSF (0.44). The similarity measure showed the same trend but it was always higher than kappa statistics. It was 0.85 for white matter, 0.77 for gray matter, 0.73 for partial volume, 0.72 for total lesion, and 0.47 for CSF.
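
    For two binary masks, the kappa statistic used above reduces to a few lines; a minimal sketch (the similarity measure mentioned in the abstract is computed analogously from the same overlap counts):

```python
import numpy as np

def cohen_kappa(mask1, mask2):
    """Agreement between two binary segmentations, corrected for chance."""
    a = np.asarray(mask1, bool).ravel()
    b = np.asarray(mask2, bool).ravel()
    po = np.mean(a == b)                          # observed agreement
    p1, p2 = a.mean(), b.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)            # chance agreement
    return (po - pe) / (1 - pe)
```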

  12. Realization of Chinese word segmentation based on deep learning method

    Science.gov (United States)

    Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan

    2017-08-01

    In recent years, with the rapid development of deep learning, it has been widely used in the field of natural language processing. In this paper, I use deep learning to achieve Chinese word segmentation with a large-scale corpus, eliminating the need to construct additional manual features. In the process of Chinese word segmentation, the first step is to process the corpus: word2vec is used to obtain word embeddings, with each character represented by a 50-dimensional vector. After embedding, the features are fed to a bidirectional LSTM, a linear layer is added on top of the hidden-layer output, and a CRF is added to obtain the model implemented in this paper. Experimental results show that the method achieves satisfactory accuracy on the 2014 People's Daily corpus.
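
    The network portion of such a model can be sketched in PyTorch; this is an illustrative skeleton only (the 50-dimensional embeddings follow the description above, the B/M/E/S tag set is a common convention, and the CRF layer that would consume the emission scores is omitted):

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden=128, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # 50-d character embeddings
        self.lstm = nn.LSTM(embed_dim, hidden,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_tags)          # B/M/E/S emission scores

    def forward(self, chars):            # chars: (batch, seq_len) character ids
        h, _ = self.lstm(self.embed(chars))
        return self.fc(h)                # a CRF layer would decode these emissions
```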

  13. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    Science.gov (United States)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  14. AN EFFICIENT LEVEL SET METHOD FOR IMAGE SEGMENTATION

    OpenAIRE

    Kunalkumar Muchhadia*; Bhagwat Kakde; Manish Trivedi

    2016-01-01

    Here we introduce a diffusion term into the level set equation (LSE) to stabilize the level set function, and we solve the equation iteratively in two steps for a quick and better implementation that yields the required results. Hence it is named the two-step splitting method for image segmentation: the first step iterates the LSE, and the second step regularizes the level set function obtained in the first step to ensure stability; thus the re-initialization procedure is completely eliminated.
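
    A toy version of such a two-step update (evolve, then regularize by diffusion) might look like this; the speed term is a deliberately simple stand-in, not the paper's LSE:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def two_step_level_set(image, phi, steps=200, dt=0.1, sigma=1.0):
    g = 1.0 / (1.0 + sobel(image, 0)**2 + sobel(image, 1)**2)  # edge-stopping term
    for _ in range(steps):
        dy, dx = np.gradient(phi)
        phi = phi + dt * g * np.sqrt(dx**2 + dy**2)  # step 1: iterate the LSE
        phi = gaussian_filter(phi, sigma)            # step 2: diffusion regularization
    return phi                                       # no re-initialization needed
```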

  15. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    Science.gov (United States)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local operation of mutation is introduced in addition with regeneration and reintroduction operators. It is concluded that different characteristics of applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, the reliable OSP strategy prior to the time-domain identification will be achieved by those methods dealing with minimizing the distance of simulated responses for the entire system and condensed system considering the force effects. The numerical and experimental verification on the effectiveness of the proposed strategy demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of computational cost and the accuracy of identification. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates which result in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  16. An efficient iterative thresholding method for image segmentation

    Science.gov (United States)

    Wang, Dong; Li, Haohan; Wei, Xiaoyu; Wang, Xiao-Ping

    2017-12-01

    We propose an efficient iterative thresholding method for multi-phase image segmentation. The algorithm is based on minimizing the piecewise constant Mumford-Shah functional, in which the contour length (or perimeter) is approximated by a non-local multi-phase energy. The minimization problem is solved by an iterative method: each iteration consists of computing simple convolutions followed by a thresholding step. The algorithm is easy to implement and has the optimal complexity O(N log N) per iteration. We also show that the iterative algorithm has the total-energy-decaying property. We present some numerical results to show the efficiency of our method.
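
    A two-phase sketch of this convolution-plus-thresholding iteration, assuming a Gaussian kernel for the non-local perimeter term (parameter values are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def threshold_segment(f, steps=50, lam=0.05, sigma=2.0):
    """Alternate region-mean fitting with a convolve-and-threshold update."""
    u = (f > f.mean()).astype(float)           # initial two-phase indicator
    for _ in range(steps):
        if u.sum() in (0, u.size):             # one phase vanished: stop
            break
        c1, c2 = f[u == 1].mean(), f[u == 0].mean()
        phi1 = (f - c1)**2 + lam * gaussian_filter(1.0 - u, sigma)  # cost of phase 1
        phi2 = (f - c2)**2 + lam * gaussian_filter(u, sigma)        # cost of phase 2
        u = (phi1 < phi2).astype(float)        # thresholding step
    return u
```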

  17. A Sequential, Implicit, Wavelet-Based Solver for Multi-Scale Time-Dependent Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Donald A. McLaren

    2013-04-01

    Full Text Available This paper describes and tests a wavelet-based implicit numerical method for solving partial differential equations. Intended for problems with localized small-scale interactions, the method exploits the form of the wavelet decomposition to divide the implicit system created by the time-discretization into multiple smaller systems that can be solved sequentially. Included is a test on a basic non-linear problem, with both the results of the test, and the time required to calculate them, compared with control results based on a single system with fine resolution. The method is then tested on a non-trivial problem, its computational time and accuracy checked against control results. In both tests, it was found that the method requires less computational expense than the control. Furthermore, the method showed convergence towards the fine resolution control results.

  18. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.

  19. Short segment search method for phylogenetic analysis using nested sliding windows

    Science.gov (United States)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    To analyze phylogenetics in Bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is necessary. After sliding windows are applied, a short segment better than the envelope protein and NS3 segments is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment which is representative of the whole genome. The result shows that our method can find a short segment that is about 6.57% more representative, in topological terms, of the CDS segment than the envelope or NS3 segments.
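
    The nested window scan itself is straightforward; a minimal sketch with invented window sizes (the paper's actual sizes and scoring criterion are not reproduced here):

```python
def sliding_windows(seq, size, step):
    """Yield (start, window) pairs over a sequence."""
    for start in range(0, len(seq) - size + 1, step):
        yield start, seq[start:start + size]

def nested_windows(seq, outer=600, outer_step=300, inner=150, inner_step=50):
    """Nested scan: inner windows refine the search inside each outer window."""
    for o_start, outer_win in sliding_windows(seq, outer, outer_step):
        for i_start, inner_win in sliding_windows(outer_win, inner, inner_step):
            yield o_start + i_start, inner_win  # candidate short segment to score
```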

  20. Optic disc segmentation: level set methods and blood vessels inpainting

    Science.gov (United States)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.

  1. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    The bilateral parotid glands were segmented using a registration scheme followed by level set segmentation. A training set consisting of computerized tomography from 10 patients with segmentation of the bilateral glands was used to optimize the parameters of registration and level set segmentation

  2. BODY COMPOSITION ASSESSMENT WITH SEGMENTAL MULTIFREQUENCY BIOIMPEDANCE METHOD

    Directory of Open Access Journals (Sweden)

    Jukka A. Salmi

    2003-12-01

    Full Text Available Body composition assessment is an important factor in weight management, exercise science and clinical health care. Bioelectrical impedance analysis (BIA) is a widely used method for estimating body composition. The purpose of this study was to evaluate the segmental multi-frequency bioimpedance method (SMFBIA) in body composition assessment against underwater weighing (UWW) and whole-body dual-energy X-ray absorptiometry (DXA) in healthy obese middle-aged male subjects. The measurements were carried out at the UKK Institute for Health Promotion Research in Tampere, Finland according to standard procedures of BIA, UWW and DXA. Fifty-eight (n = 58) male subjects, aged 36-53 years, body mass index (BMI) 24.9-40.7, were studied. Of them, forty (n = 40) also underwent DXA measurement. Fat mass (FM), fat percentage (F%) and fat-free mass (FFM) were the primary outcome variables. The mean whole-body FM (±SD) from UWW was 31.5 kg (±7.3); by DXA it was 29.9 kg (±8.1) and by SMFBIA 25.5 kg (±7.6). The Pearson correlation coefficients (r) were 0.91 between UWW and SMFBIA, 0.94 between DXA and SMFBIA and 0.91 between UWW and DXA. The mean segmental FFM (±SD) from DXA was 7.7 kg (±1.0) for arms, 41.7 kg (±4.6) for trunk and 21.9 kg (±2.2) for legs; by SMFBIA it was 8.5 kg (±0.9), 31.7 kg (±2.5) and 20.3 kg (±1.6), respectively. Pearson correlation coefficients were 0.75 for arms, 0.72 for legs and 0.77 for trunk. This study demonstrates that SMFBIA is a useful method to evaluate fat mass (FM), fat-free mass (FFM) and fat percentage (F%) of the whole body. Moreover, SMFBIA is a suitable method for assessing the segmental distribution of fat-free mass (FFM) compared to whole-body DXA. The results of this study indicate that the SMFBIA method may be particularly advantageous in large epidemiological studies, as it is a simple, rapid and inexpensive method for field use in whole-body and segmental body composition assessment.

  3. Segmentation of pituitary adenoma: a graph-based method vs. a balloon inflation method.

    Science.gov (United States)

    Egger, Jan; Zukić, Dženan; Freisleben, Bernd; Kolb, Andreas; Nimsky, Christopher

    2013-06-01

    Among all abnormal growths inside the skull, the percentage of tumors in the sellar region is approximately 10-15%, and the pituitary adenoma is the most common sellar lesion. The manual segmentation of pituitary adenomas is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, two methods for pituitary adenoma segmentation in the human brain are presented and compared using magnetic resonance imaging (MRI) patient data from the clinical routine. Method A is a graph-based method that sets up a directed and weighted graph and performs a min-cut for optimal segmentation results. Method B is a balloon inflation method that uses balloon inflation forces to detect the pituitary adenoma boundaries. The ground truth of the pituitary adenoma boundaries - for the evaluation of the methods - is manually extracted by neurosurgeons. Comparison is done using the Dice Similarity Coefficient (DSC), a measure of the spatial overlap of different segmentation results. The average DSC for all data sets is 77.5±4.5% for the graph-based method and 75.9±7.2% for the balloon inflation method, showing no significant difference. The overall segmentation time of the implemented approaches was less than 4 s - compared with a manual segmentation that took, on average, 3.9±0.5 min. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
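
    The DSC used for the comparison is simple to compute for two binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient: 2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```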

  4. WaVPeak: Picking NMR peaks through wavelet-based smoothing and volume-based filtering

    KAUST Repository

    Liu, Zhi

    2012-02-10

    Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. The Author(s) 2012. Published by Oxford University Press.

  5. Performance of epileptic single-channel scalp EEG classifications using single wavelet-based features.

    Science.gov (United States)

    Janjarasjitt, Suparerk

    2017-03-01

    Classification of epileptic scalp EEGs is certainly one of the most crucial tasks in the diagnosis of epilepsy. Rather than using multiple quantitative features, a single quantitative feature of single-channel scalp EEG is applied for classifying its corresponding state of the brain, i.e., during seizure activity or a non-seizure period. The quantitative features proposed are wavelet-based features obtained from the logarithm of the variance of the detail and approximation coefficients of single-channel scalp EEG signals. The performance of patient-dependent epileptic seizure classification using single wavelet-based features is examined on scalp EEG data of 12 children containing 79 seizures, with 4-fold cross validation applied for evaluation. The computational results show that the wavelet-based features can provide an outstanding performance on patient-dependent epileptic seizure classification. The average accuracy, sensitivity, and specificity are, respectively, 93.24%, 83.34%, and 93.53%.
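
    The feature itself is a one-liner per sub-band; a sketch with PyWavelets (the wavelet family and decomposition level here are assumptions):

```python
import numpy as np
import pywt

def log_variance_features(eeg, wavelet='db4', level=5):
    """Log-variance of the approximation and detail coefficients of one channel."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)   # [cA5, cD5, cD4, ..., cD1]
    return np.array([np.log(np.var(c)) for c in coeffs])
```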

  6. Automatic segmentation of brain images: selection of region extraction methods

    Science.gov (United States)

    Gong, Leiguang; Kulikowski, Casimir A.; Mezrich, Reuben S.

    1991-07-01

    In automatically analyzing brain structures from an MR image, the choice of low-level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local-binary thresholding method and a new global-multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures -- the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems, based on which further development of an innovative knowledge-based, goal-directed biomedical image analysis framework is being made. The system will carry out the selection automatically for a given specific analysis task.

  7. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    Science.gov (United States)

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.

  8. Superiority Of Graph-Based Visual Saliency GVS Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2015-08-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity. For general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy limits the evaluation of specific applications. Subjective visual comparison of segmented images is the most common method of evaluating segmentation quality; this daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually-segmented or pre-processed benchmark images. Good evaluation methods allow not only for different comparisons but also for integration with target recognition systems, enabling adaptive selection of appropriate segmentation granularity with improved recognition accuracy. Most current segmentation methods still lack satisfactory measures of effectiveness. Thus, this study proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against four other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GCE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed all four independent standard methods in terms of visual saliency detection of images.

  9. Identification of differentially methylated loci using wavelet-based functional mixed models.

    Science.gov (United States)

    Lee, Wonyul; Morris, Jeffrey S

    2016-03-01

    DNA methylation is a key epigenetic modification that can modulate gene expression. Over the past decade, many studies have focused on profiling DNA methylation and investigating its alterations in complex diseases such as cancer. While early studies were mostly restricted to CpG islands or promoter regions, recent findings indicate that many important DNA methylation changes can occur in other regions, and DNA methylation needs to be examined on a genome-wide scale. In this article, we apply the wavelet-based functional mixed model methodology to analyze high-throughput methylation data for identifying differentially methylated loci across the genome. Contrary to many commonly-used methods that model probes independently, this framework accommodates spatial correlations across the genome through basis function modeling as well as correlations between samples through functional random effects, which allows it to be applied to many different settings and potentially leads to more power in the detection of differential methylation. We applied this framework to three different high-dimensional methylation data sets (CpG Shore data, THREE data and NIH Roadmap Epigenomics data), studied previously in other works. A simulation study based on the CpG Shore data suggested that, in terms of detection of differentially methylated loci, this modeling approach using wavelets outperforms analogous approaches modeling the loci as independent. For the THREE data, the method suggests newly detected regions of differential methylation, which were not reported in the original study. Automated software called WFMM is available at https://biostatistics.mdanderson.org/SoftwareDownload. CpG Shore data is available at http://rafalab.dfci.harvard.edu. NIH Roadmap Epigenomics data is available at http://compbio.mit.edu/roadmap. Supplementary data are available at Bioinformatics online. jefmorris@mdanderson.org. © The Author 2015. Published by Oxford University Press. All rights reserved

  10. A global genome segmentation method for exploration of epigenetic patterns.

    Directory of Open Access Journals (Sweden)

    Lydia Steiner

    Full Text Available Current genome-wide ChIP-seq experiments on different epigenetic marks aim at unraveling the interplay between their regulation mechanisms. Published evaluation tools, however, allow testing for predefined hypotheses only. Here, we present a novel method for annotation-independent exploration of epigenetic data and their inter-correlation with other genome-wide features. Our method is based on a combinatorial genome segmentation solely using information on combinations of epigenetic marks. It does not require prior knowledge about the data (e.g., gene positions), but allows integrating the data in a straightforward manner. Thereby, it combines compression, clustering and visualization of the data in a single tool. Our method provides intuitive maps of epigenetic patterns across multiple levels of organization, e.g. of the co-occurrence of different epigenetic marks in different cell types. Thus, it facilitates the formulation of new hypotheses on the principles of epigenetic regulation. We apply our method to histone modification data on trimethylation of histone H3 at lysine 4, 9 and 27 in multi-potent and lineage-primed mouse cells, analyzing their combinatorial modification pattern as well as differentiation-related changes of single modifications. We demonstrate that our method is capable of reproducing recent findings of gene-centered approaches, e.g. correlations between CpG-density and the analyzed histone modifications. Moreover, combining the clustered epigenetic data with information on the expression status of associated genes, we classify differences in the epigenetic status of e.g. house-keeping genes versus differentiation-related genes. Visualizing the distribution of modification states on the chromosomes, we discover strong patterns for chromosome X. For example, exclusively H3K9me3-marked segments are enriched, while poised and active states are rare. Hence, our method also provides new insights into chromosome-specific epigenetic

  11. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    Science.gov (United States)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    This study, using wavelet-based method investigates the dynamics of long memory in the returns and volatility of equity markets. In the sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (1997 Asian financial crisis, 2001 US recession and 2008 subprime crisis) the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with de-trended fluctuation analysis approach.

  12. Joint Source-Channel Coding for Wavelet-Based Scalable Video Transmission Using an Adaptive Turbo Code

    Directory of Open Access Journals (Sweden)

    Naeem Ramzan

    2007-01-01

    Full Text Available An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.

  14. A new method named as Segment-Compound method of baffle design

    Science.gov (United States)

    Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang

    2017-02-01

    As observation demands have increased, so have the requirements on lens imaging quality. A Segment-Compound baffle design method is proposed in this paper. Three traditional baffle design methods are characterized as Inside to Outside, Outside to Inside, and Mirror Symmetry. For a transmission-type optical system, these three methods and the proposed one were each used to design a stray light suppression structure. The structures were then modeled and simulated with SolidWorks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively. Moreover, it is easy to realize and requires no special materials.

  15. Adaptive Image Restoration and Segmentation Method Using Different Neighborhood Sizes

    Directory of Open Access Journals (Sweden)

    Chengcheng Li

    2003-04-01

    Full Text Available The image restoration methods based on the Bayesian framework and Markov random fields (MRF) have been widely used in the image processing field. The basic idea of all these methods is to use the calculus of variations and mathematical statistics to average or estimate a pixel value from the values of its neighbors. After applying this averaging process to the whole image a number of times, the noisy pixels, which have abnormal values, are filtered out. Based on the Tea-trade model, which states that the closer the neighbor, the more contribution it makes, almost all of these methods use only the nearest four neighbors for calculation. In our previous research [1, 2], we extended the work on CLRS (image restoration and segmentation using a competitive learning algorithm) to enlarge the neighborhood size. The results showed that a longer neighborhood range could either improve or worsen the restoration results. We also found that the autocorrelation coefficient was an important factor in determining the proper neighborhood size. We further realized that the computational complexity increases dramatically with the enlargement of the neighborhood size. This paper extends the previous research and discusses the trade-off between computational complexity and the restoration improvement gained by using a longer neighborhood range. We used several methods to construct synthetic images with exactly the correlation coefficients we wanted and to determine the corresponding neighborhood size. We constructed an image with a range of correlation coefficients by blending synthetic images, and then built an adaptive method to find the correlation coefficients of this image. We restored the image by applying the CLRS algorithm with different neighborhood sizes to different parts of the image according to its local correlation coefficient. Finally, we applied this adaptive method to real-world images and obtained better restoration results than with a single neighborhood size.

  16. Construction and evaluation of a wavelet-based focus measure for microscopy imaging.

    Science.gov (United States)

    Xie, Hui; Rong, Weibin; Sun, Lining

    2007-11-01

    Microscopy imaging cannot achieve both high resolution and a wide image space simultaneously, so autofocusing is of fundamental importance to automated micromanipulation. This article proposes a new wavelet-based focus measure, defined as the ratio of high-frequency to low-frequency wavelet coefficients. Eight series of 49 microscope images each, acquired under five magnifications, are used to comprehensively compare the performance of our focus measure with classic and popular focus measures, including Normalized Variance, Entropy, Energy Laplace, and wavelet-based high-frequency focus measures. The robustness of these focus measures is evaluated using noisy image sequences corrupted by Gaussian white noise with standard deviations (STD) of 5 and 15. An evaluation methodology is proposed, based on which these five focus measures are ranked. Experimental results show that the proposed focus measure provides the best overall performance and robustness by a significant margin. This focus measure can be widely applied in automated biological and biomedical applications.
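
    A minimal sketch of such a ratio-type focus measure is given below in Python with PyWavelets; the wavelet choice, single decomposition level, and normalization are assumptions, since the exact settings are not restated here.

        import numpy as np
        import pywt

        def wavelet_focus_measure(img, wavelet="db6"):
            """Ratio of high-frequency (detail) to low-frequency (approximation)
            energy in a one-level 2-D DWT; sharper images put more energy
            into the detail subbands, so larger values mean better focus."""
            cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, dtype=float), wavelet)
            high = (cH ** 2).sum() + (cV ** 2).sum() + (cD ** 2).sum()
            low = (cA ** 2).sum()
            return high / low

        # Autofocus picks the frame of a z-stack that maximizes the measure:
        # best = max(stack, key=wavelet_focus_measure)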

  17. Wavelet-based Image Enhancement Using Fourth Order PDE

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren

    2011-01-01

    The presence of a noise interference signal may cause problems in signal and image analysis; hence, signal and image de-noising is often used as a preprocessing stage in many signal processing applications. In this paper, a new method is presented for image de-noising based on fourth order partial...... differential equations (PDEs) and wavelet transform. In the existing wavelet thresholding methods, the final noise-reduced image shows limited improvement because the approximation coefficients of the image are kept unchanged, although these coefficients carry the main information of the image. Since noise affects both...... the approximation and detail coefficients, in this research the anisotropic diffusion technique for noise reduction is applied to the approximation band to alleviate the deficiency of the existing wavelet thresholding methods. The proposed method was applied to several standard noisy images and the results...

  18. Particle-in-cell beam dynamics simulations with a wavelet-based Poisson solver

    Directory of Open Access Journals (Sweden)

    Balša Terzić

    2007-03-01

    Full Text Available We report on a successful implementation of a three-dimensional wavelet-based solver for the Poisson equation with Dirichlet boundary conditions, optimized for use in particle-in-cell (PIC) simulations. The solver is based on the operator formulation of the conjugate gradient algorithm, for which effectively diagonal preconditioners are available in wavelet bases. Because of the recursive nature of PIC simulations, a good initial approximation to the iterative solution is always readily available, which we demonstrate to be a key advantage in terms of overall computational speed. While the Laplacian remains sparse in a wavelet representation, the wavelet-decomposed potential and density can be rendered sparse through a procedure that amounts to simultaneous compression and denoising of the data. We explain how this procedure can be carried out in a controlled and near-optimal way, and show the effect it has on the overall solver performance. After testing the solver in a stand-alone mode, we integrated it into the IMPACT-T beam dynamics particle-in-cell code and extensively benchmarked it against IMPACT-T with its native FFT-based Poisson solver. We present and discuss these benchmarking results, as well as the results of modeling the Fermi/NICADD photoinjector using IMPACT-T with the wavelet-based solver.

  19. A Method of Tomato Image Segmentation Based on Mutual Information and Threshold Iteration

    Science.gov (United States)

    Wu, Hongxia; Li, Mingxi

    Threshold segmentation is an important image segmentation method and one of the key preprocessing steps in image detection and recognition, with very broad application across computer vision research. Based on the internal relation between the segmented image and the original image, an automatic optimized segmentation method for tomato images (MI-OPT), which combines mutual information with optimum threshold iteration, is presented. Simulation results show that this method achieves good segmentation of images of mature tomatoes with little background color difference, as well as of images with different colors.
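
    The threshold-iteration half of such a scheme is easy to sketch; a loose Python version follows (the mutual-information optimization of MI-OPT is not reproduced, and the stopping tolerance is an assumption).

        import numpy as np

        def iterative_threshold(gray, tol=0.5):
            """Classic threshold iteration: move the threshold to the midpoint
            of the foreground and background mean intensities until it
            changes by less than tol."""
            t = float(gray.mean())
            while True:
                fg, bg = gray[gray > t], gray[gray <= t]
                if fg.size == 0 or bg.size == 0:
                    return t
                t_new = 0.5 * (fg.mean() + bg.mean())
                if abs(t_new - t) < tol:
                    return t_new
                t = t_new

        # mask = image > iterative_threshold(image)   # tomato/background map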

  20. WaVPeak: picking NMR peaks through wavelet-based smoothing and volume-based filtering.

    Science.gov (United States)

    Liu, Zhi; Abbas, Ahmed; Jing, Bing-Yi; Gao, Xin

    2012-04-01

    Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on (15)N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. WaVPeak is an open source program. The source code and two test spectra of WaVPeak are available at http://faculty.kaust.edu.sa/sites/xingao/Pages/Publications.aspx. The online server is under construction.
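
    A loose sketch of the smooth-then-filter idea (not the authors' code) is shown below in Python: wavelet shrinkage smooths the spectrum without discarding data points, local maxima become peak candidates, and a windowed intensity sum stands in for the peak volume; the wavelet, window size, and quantile cutoff are invented parameters.

        import numpy as np
        import pywt
        from scipy.ndimage import maximum_filter

        def pick_peaks(spec, wavelet="sym6", level=2, win=5, vol_quantile=0.90):
            # 1) Smooth by soft-thresholding the detail coefficients.
            coeffs = pywt.wavedec2(spec, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise level
            thr = sigma * np.sqrt(2.0 * np.log(spec.size))
            shrunk = [coeffs[0]] + [
                tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
                for lvl in coeffs[1:]
            ]
            smooth = pywt.waverec2(shrunk, wavelet)[: spec.shape[0], : spec.shape[1]]
            # 2) Candidate peaks are local maxima of the smoothed spectrum.
            ys, xs = np.nonzero((maximum_filter(smooth, size=win) == smooth)
                                & (smooth > 0))
            # 3) Keep candidates whose windowed 'volume' is large.
            h = win // 2
            vols = np.array([smooth[max(y - h, 0): y + h + 1,
                                    max(x - h, 0): x + h + 1].sum()
                             for y, x in zip(ys, xs)])
            keep = vols >= np.quantile(vols, vol_quantile)
            return list(zip(ys[keep], xs[keep]))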

  1. Wavelet-based image registration technique for high-resolution remote sensing images

    Science.gov (United States)

    Hong, Gang; Zhang, Yun

    2008-12-01

    Image registration is the process of geometrically aligning one image to another image of the same scene taken from different viewpoints, at different times, or by different sensors. It is an important image processing procedure in remote sensing and has been studied by remote sensing image processing professionals for several decades. Nevertheless, it is still difficult to find an accurate, robust, and automatic image registration method, and most existing image registration methods are designed for a particular application. High-resolution remote sensing images have made it more convenient for professionals to study the Earth; however, they also create new challenges when traditional processing methods are used. In terms of image registration, a number of problems exist in the registration of high-resolution images: (1) the increased relief displacements, introduced by increasing the spatial resolution and lowering the altitude of the sensors, cause obvious geometric distortion in local areas where elevation variation exists; (2) precisely locating control points in high-resolution images is not as simple as in moderate-resolution images; (3) a large number of control points are required for a precise registration, which is a tedious and time-consuming process; and (4) high data volume often affects the processing speed in the image registration. Thus, the demand for an image registration approach that can reduce the above problems is growing. This study proposes a new image registration technique, which is based on the combination of feature-based matching (FBM) and area-based matching (ABM). A wavelet-based feature extraction technique, normalized cross-correlation matching, and relaxation-based image matching techniques are employed in this new method. Two pairs of data sets, one pair of IKONOS panchromatic images from different times and the other pair consisting of an IKONOS panchromatic image and a QuickBird multispectral image, are used to
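
    For the area-based half of such a scheme, normalized cross-correlation between a candidate patch and a template is the core primitive; a compact, illustrative version is sketched below.

        import numpy as np

        def ncc(patch, template):
            """Normalized cross-correlation in [-1, 1]; insensitive to linear
            brightness and contrast differences between the two windows."""
            p = patch - patch.mean()
            t = template - template.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            return float((p * t).sum() / denom) if denom > 0 else 0.0

        # A control-point match is typically accepted where ncc(...) peaks
        # above a threshold (e.g. 0.8) within a local search window.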

  2. Wavelet based denoising of power quality events for characterization

    African Journals Online (AJOL)

    The effectiveness of wavelet transform (WT) methods for analyzing different power quality (PQ) events with or without noise has been demonstrated in this paper. Multi-resolution signal decomposition based on discrete WT is used to localize and to classify different power quality disturbances. The energy distribution at ...

  3. Piecewise Tensor Product Wavelet Bases by Extensions and Approximation Rates

    NARCIS (Netherlands)

    Chegini, N.G.; Dahlke, S.; Friedrich, U.; Stevenson, R.; Dahlke, S.; Dahmen, W.; Griebel, M.; Hackbusch, W.; Ritter, K.; Schneider, R.; Schwab, C.; Yserentant, H.

    2014-01-01

    In this chapter, we present some of the major results that have been achieved in the context of the DFG-SPP project "Adaptive Wavelet Frame Methods for Operator Equations: Sparse Grids, Vector-Valued Spaces and Applications to Nonlinear Inverse Problems". This project has been concerned with

  4. Performance evaluation of DNA copy number segmentation methods.

    Science.gov (United States)

    Pierre-Jean, Morgane; Rigaill, Guillem; Neuvial, Pierre

    2015-07-01

    A number of bioinformatic or biostatistical methods are available for analyzing DNA copy number profiles measured from microarray or sequencing technologies. In the absence of rich enough gold standard data sets, the performance of these methods is generally assessed using unrealistic simulation studies, or based on small real data analyses. To make an objective and reproducible performance assessment, we have designed and implemented a framework to generate realistic DNA copy number profiles of cancer samples with known truth. These profiles are generated by resampling publicly available SNP microarray data from genomic regions with known copy-number state. The original data have been extracted from dilution series of tumor cell lines with matched blood samples at several concentrations. Therefore, the signal-to-noise ratio of the generated profiles can be controlled through the (known) percentage of tumor cells in the sample. This article describes this framework and its application to a comparison study between methods for segmenting DNA copy number profiles from SNP microarrays. This study indicates that no single method is uniformly better than all others. It also helps identify the pros and cons of the compared methods as a function of biologically informative parameters, such as the fraction of tumor cells in the sample and the proportion of heterozygous markers. This comparison study may be reproduced using the open source and cross-platform R package jointseg, which implements the proposed data generation and evaluation framework: http://r-forge.r-project.org/R/?group_id=1562. © The Author 2014. Published by Oxford University Press.

  5. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    Science.gov (United States)

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  6. Phase Segmentation Methods for an Automatic Surgical Workflow Analysis

    Directory of Open Access Journals (Sweden)

    Dinh Tuan Tran

    2017-01-01

    Full Text Available In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We have an awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
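
    The HMM decoding step at the heart of such a pipeline fits in a few lines; in the sketch below, per-frame LDA topic indices are treated as discrete observations and the most likely phase sequence is recovered with the Viterbi algorithm (the matrix names and shapes are assumptions, not the authors' implementation).

        import numpy as np

        def viterbi(obs, log_pi, log_A, log_B):
            """obs: per-frame topic ids; log_pi: initial log-probs (N,);
            log_A: phase-transition log-probs (N, N); log_B: emission
            log-probs (N, n_topics). Returns the most likely phase path."""
            T, N = len(obs), log_pi.shape[0]
            dp = np.empty((T, N))
            ptr = np.zeros((T, N), dtype=int)
            dp[0] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                scores = dp[t - 1][:, None] + log_A   # scores[i, j]: i -> j
                ptr[t] = scores.argmax(axis=0)
                dp[t] = scores.max(axis=0) + log_B[:, obs[t]]
            path = [int(dp[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(ptr[t, path[-1]]))
            return path[::-1]                         # one phase label per frame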

  7. State-of-the-Art Methods for Brain Tissue Segmentation: A Review.

    Science.gov (United States)

    Dora, Lingraj; Agrawal, Sanjay; Panda, Rutuparna; Abraham, Ajith

    2017-01-01

    Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities, and it plays an essential role in discriminating healthy tissues from lesion tissues. Therefore, accurate disease diagnosis and treatment planning depend largely on the performance of the segmentation method used. In this review, we have studied the recent advances in brain tissue segmentation methods and their state-of-the-art in neuroscience research. The review also highlights the major challenges faced during tissue segmentation of the brain. An effective comparison is made among state-of-the-art brain tissue segmentation methods. Moreover, a study of some of the validation measures used to evaluate different segmentation methods is also discussed. The segmentation methodologies and experiments presented in this review are encouraging enough to attract researchers working in this field.

  8. Image superresolution of cytology images using wavelet based patch search

    Science.gov (United States)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

    Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high and low frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high-resolution images into wavelets and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the original transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.

  9. Multiscale seismic characterization of marine sediments by using a wavelet-based approach

    Science.gov (United States)

    Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique

    2015-04-01

    We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR), where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as a set of multiscale seismic attributes, in particular ridge functions, which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to analyse seismic data acquired with broadband but frequency limited source signals. The band-pass filter related to such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source-correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR. We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare

  10. Assessing heart rate variability through wavelet-based statistical measures.

    Science.gov (United States)

    Wachowiak, Mark P; Hay, Dean C; Johnson, Michel J

    2016-10-01

    Because of its utility in the investigation and diagnosis of clinical abnormalities, heart rate variability (HRV) has been quantified with both time and frequency analysis tools. Recently, time-frequency methods, especially wavelet transforms, have been applied to HRV. In the current study, a complementary computational approach is proposed wherein continuous wavelet transforms are applied directly to ECG signals to quantify time-varying frequency changes in the lower bands. Such variations are compared for resting and lower body negative pressure (LBNP) conditions using statistical and information-theoretic measures, and compared with standard HRV metrics. The latter confirm the expected lower variability in the LBNP condition due to sympathetic nerve activity (e.g. RMSSD: p=0.023; SDSD: p=0.023; LF/HF: p=0.018). Conversely, using the standard Morlet wavelet and a new transform based on windowed complex sinusoids, wavelet analysis of the ECG within the observed range of heart rate (0.5-1.25 Hz) exhibits significantly higher variability, as measured by frequency band roughness (Morlet CWT: p=0.041), entropy (Morlet CWT: p=0.001), and approximate entropy (Morlet CWT: p=0.004). Consequently, this paper proposes that, when used with well-established HRV approaches, time-frequency analysis of ECG can provide additional insights into the complex phenomenon of heart rate variability. Copyright © 2016. Published by Elsevier Ltd.
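
    A hedged sketch of the kind of measure described, the Shannon entropy of a Morlet scalogram restricted to the 0.5-1.25 Hz band, is given below; the scale grid and the time-averaging are assumptions.

        import numpy as np
        import pywt

        def cwt_band_entropy(ecg, fs, fmin=0.5, fmax=1.25, n_scales=32):
            """Shannon entropy (bits) of the normalized, time-averaged Morlet
            scalogram within [fmin, fmax] Hz; higher values indicate a less
            concentrated, more variable frequency distribution."""
            fc = pywt.central_frequency("morl")
            freqs = np.linspace(fmin, fmax, n_scales)
            scales = fc * fs / freqs                    # map target freqs to scales
            coef, _ = pywt.cwt(ecg, scales, "morl", sampling_period=1.0 / fs)
            power = np.mean(np.abs(coef) ** 2, axis=1)  # average power over time
            p = power / power.sum()
            return float(-np.sum(p * np.log2(p)))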

  11. A High Resolution Remote Sensing Image Segmentation Method by Combining Superpixels with Minimum Spanning Tree

    Directory of Open Access Journals (Sweden)

    DONG Zhipeng

    2017-06-01

    Full Text Available Image segmentation is the basic and key step of object-oriented remote sensing image analysis. Conventional image segmentation methods are sensitive to image noise, and it is hard to determine the correct segmentation scale. To solve these problems, a novel image segmentation method combining superpixels with a minimum spanning tree is proposed in this paper. First, the image is over-segmented by the simple linear iterative clustering algorithm to obtain superpixels. Then, superpixels are clustered by regionalization with dynamically constrained agglomerative clustering and partitioning using an initial segmentation number, and the sum of squared deviations (SSD), local variance (LV), and rate of LV change (ROC-LV) indexes of the graphs corresponding to each segmentation number are obtained. The suitable image segmentation number is then determined according to these SSD, LV, and ROC-LV indexes. Finally, superpixels are reclustered by regionalization with dynamically constrained agglomerative clustering and partitioning based on the suitable segmentation number. The experimental results showed that the proposed method obtains good segmentation results.

  12. A new method of cardiographic image segmentation based on grammar

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.

  13. Epileptic seizure classifications of single-channel scalp EEG data using wavelet-based features and SVM.

    Science.gov (United States)

    Janjarasjitt, Suparerk

    2017-02-13

    In this study, wavelet-based features of single-channel scalp EEGs recorded from subjects with intractable seizures are examined for epileptic seizure classification. The wavelet-based features extracted from scalp EEGs are simply based on detail and approximation coefficients obtained from the discrete wavelet transform. Support vector machine (SVM), one of the most commonly used classifiers, is applied to classify vectors of wavelet-based features of scalp EEGs into either the seizure or the non-seizure class. In patient-based epileptic seizure classification, the training data set used to train the SVM classifiers is composed of wavelet-based features of scalp EEGs corresponding to the first epileptic seizure event. Overall, excellent performance on patient-dependent epileptic seizure classification is obtained, with average accuracy, sensitivity, and specificity of, respectively, 0.9687, 0.7299, and 0.9813. The vector composed of two wavelet-based features of scalp EEGs provides the best performance on patient-dependent epileptic seizure classification in most cases, i.e., 19 cases out of 24. The wavelet-based features corresponding to the 32-64, 8-16, and 4-8 Hz subbands of scalp EEGs are the features that most often provide the best performance on patient-dependent classification. Furthermore, the performance on both patient-dependent and patient-independent epileptic seizure classification is also validated using tenfold cross-validation. From the patient-independent epileptic seizure classification validated using tenfold cross-validation, it is shown that the best classification performance is achieved using the wavelet-based features corresponding to the 64-128 and 4-8 Hz subbands of scalp EEGs.
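
    The feature pipeline described, subband energies from a discrete wavelet decomposition fed to an SVM, can be sketched as follows; the wavelet, level, and the toy random data are assumptions for illustration only.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_band_features(epoch, wavelet="db4", level=5):
            """Log energy of each DWT subband of a 1-D EEG epoch; with a
            256 Hz sampling rate and level=5, the detail bands roughly cover
            64-128, 32-64, 16-32, 8-16, and 4-8 Hz."""
            coeffs = pywt.wavedec(np.asarray(epoch, dtype=float), wavelet, level=level)
            return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

        # Toy usage on random 'epochs' (real work would use labeled scalp EEG).
        rng = np.random.default_rng(1)
        X = np.array([dwt_band_features(rng.standard_normal(1024)) for _ in range(40)])
        y = np.repeat([0, 1], 20)            # 0 = non-seizure, 1 = seizure
        clf = SVC(kernel="rbf").fit(X, y)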

  14. A novel 3D wavelet based filter for visualizing features in noisy biological data

    Energy Technology Data Exchange (ETDEWEB)

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  15. A New Wavelet-Based ECG Delineator for the Evaluation of the Ventricular Innervation

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2017-01-01

    T-wave amplitude (TWA) has been proposed as a marker of the innervation of the myocardium. Until now, TWA has been calculated manually or with poor algorithms, thus making its use inefficient in a clinical environment. We introduce a new wavelet-based algorithm for the delineation of QRS complexes...... and T-waves, and the automatic calculation of TWA. When validated on the MIT/BIH Arrhythmia database, the QRS detector achieved sensitivity and positive predictive value of 99.84% and 99.87%, respectively. The algorithm was also validated on the QT database, where it achieved a sensitivity of 99.50% for T...

  16. Fast Multiclass Segmentation using Diffuse Interface Methods on Graphs

    Science.gov (United States)

    2013-02-01

  17. Wavelet-based study of valence-arousal model of emotions on EEG signals with LabVIEW.

    Science.gov (United States)

    Guzel Aydin, Seda; Kaya, Turgay; Guler, Hasan

    2016-06-01

    This paper illustrates wavelet-based feature extraction for emotion assessment using electroencephalogram (EEG) signals through graphical coding design. A two-dimensional (valence-arousal) emotion model was studied. Different emotions (happy, joy, melancholy, and disgust), stimulated by video clips, were studied for assessment. EEG signals obtained from four subjects were decomposed into five frequency bands (gamma, beta, alpha, theta, and delta) using the "db5" wavelet function. Relative features were calculated to obtain further information. The impact of the emotions according to valence was observed most clearly in the power spectral density of the gamma band. The main objective of this work is not only to investigate the influence of the emotions on different frequency bands but also to overcome the difficulties of text-based programming. This work offers an alternative approach for emotion evaluation through EEG processing. There are a number of methods for emotion recognition, such as wavelet transform-based, Fourier transform-based, and Hilbert-Huang transform-based methods; however, the majority of these methods have been applied with text-based programming languages. In this study, we proposed and implemented an experimental feature extraction with a graphics-based language, which provides great convenience in bioelectrical signal processing.

  18. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    Science.gov (United States)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  19. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    Science.gov (United States)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.

  1. Graph-Based Method for Multitemporal Segmentation of Sea Ice Floes from Satellite Data

    OpenAIRE

    Price, Claudio; Tarabalka, Yuliya; Brucker, Ludovic

    2013-01-01

    Automated segmentation of the sea ice evolution would allow scientists studying climate change to build accurate models of the sea ice meltdown process, which is a sensitive climate indicator. In this paper, we propose a novel approach which uses shape analysis and graph-based optimization for segmentation of a multiyear ice floe from time series of satellite images. Unlike state-of-the-art sea ice segmentation techniques, the new method does not rely on th...

  2. Data on the verification and validation of segmentation and registration methods for diffusion MRI.

    Science.gov (United States)

    Esteban, Oscar; Zosso, Dominique; Daducci, Alessandro; Bach-Cuadra, Meritxell; Ledesma-Carbayo, María J; Thiran, Jean-Philippe; Santos, Andres

    2016-09-01

    The verification and validation of segmentation and registration methods is a necessary assessment in the development of new processing methods. However, verification and validation of diffusion MRI (dMRI) processing methods is challenging due to the lack of gold-standard data. The data described here are related to the research article entitled "Surface-driven registration method for the structure-informed segmentation of diffusion MR images" [1], in which publicly available data are used to derive gold-standard reference data to validate and evaluate segmentation and registration methods in dMRI.

  4. 3D wavelet-based codec for lossy compression of pre-scan-converted ultrasound video

    Science.gov (United States)

    Andrew, Rex K.; Stewart, Brent K.; Langer, Steven G.; Stegbauer, Keith C.

    1999-05-01

    We present a wavelet-based video codec based on a 3D wavelet transformer, a uniform quantizer/dequantizer and an arithmetic encoder/decoder. The wavelet transformer uses biorthogonal Antonini wavelets in the two spatial dimensions and Haar wavelets in the time dimension. Multiple levels of decomposition are supported. The codec has been applied to pre-scan-converted ultrasound image data and does not produce the type of blocking artifacts that occur in MPEG-compressed video. The PSNR at a given compression rate increases with the number of levels of decomposition: for our data at 50:1 compression, the PSNR increases from 18.4 dB at one level to 24.0 dB at four levels of decomposition. Our 3D wavelet-based video codec provides the high compression rates required to transmit diagnostic ultrasound video over existing low bandwidth links without introducing the blocking artifacts which have been demonstrated to diminish clinical utility.
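
    The transform-plus-quantizer front end of such a codec can be sketched with PyWavelets, which accepts a different wavelet per axis ("bior4.4" is the usual 9/7 Antonini-style biorthogonal filter); the arithmetic coding stage is omitted and the quantization step size is an invented parameter.

        import numpy as np
        import pywt

        WAVELETS = ("haar", "bior4.4", "bior4.4")   # (time, y, x) axes

        def encode(video, level=2, q_step=8.0):
            """3-D DWT with Haar along time and a 9/7-style biorthogonal
            wavelet spatially, followed by a uniform quantizer."""
            coeffs = pywt.wavedecn(np.asarray(video, dtype=float),
                                   wavelet=WAVELETS, level=level, axes=(0, 1, 2))
            arr, slices = pywt.coeffs_to_array(coeffs)
            return np.round(arr / q_step).astype(np.int32), slices

        def decode(q, slices, q_step=8.0):
            coeffs = pywt.array_to_coeffs(q.astype(float) * q_step, slices,
                                          output_format="wavedecn")
            return pywt.waverecn(coeffs, wavelet=WAVELETS, axes=(0, 1, 2))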

  5. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  6. An improved method for pancreas segmentation using SLIC and interactive region merging

    Science.gov (United States)

    Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, the Mahalanobis distance is first utilized in the SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by a re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify its feasibility and effectiveness. The experimental results show that the proposed method increases segmentation accuracy to 92% on average. This study will boost the application process of pancreas segmentation for computer-aided diagnosis systems.
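
    The superpixel front end of such a pipeline is readily sketched with scikit-image; note that standard (Euclidean) SLIC plus mean-color region-adjacency merging stands in here for the paper's Mahalanobis-distance SLIC and six-feature interactive merging, so treat it only as an approximation.

        from skimage import data
        from skimage.segmentation import slic
        from skimage.graph import rag_mean_color, cut_threshold
        # (older scikit-image versions expose these under skimage.future.graph)

        img = data.astronaut()                  # placeholder for a CT slice
        labels = slic(img, n_segments=400, compactness=10, start_label=1)
        rag = rag_mean_color(img, labels)       # region adjacency graph
        merged = cut_threshold(labels, rag, thresh=29)  # merge similar regions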

  7. CT image segmentation methods for bone used in medical additive manufacturing.

    Science.gov (United States)

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    Science.gov (United States)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
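
    A generic wavelet fusion rule of the kind used in such studies (average the approximation bands, keep the stronger detail coefficient at each location) is sketched below; the specific rule and wavelet the authors used are not assumed here.

        import numpy as np
        import pywt

        def wavelet_fuse(a, b, wavelet="db2", level=2):
            """Fuse two co-registered images: mean of the approximation
            coefficients, max-magnitude selection for each detail coefficient."""
            ca = pywt.wavedec2(np.asarray(a, dtype=float), wavelet, level=level)
            cb = pywt.wavedec2(np.asarray(b, dtype=float), wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]
            for la, lb in zip(ca[1:], cb[1:]):
                fused.append(tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                                   for da, db in zip(la, lb)))
            return pywt.waverec2(fused, wavelet)

        # fused = wavelet_fuse(sar_band, tm_band)   # inputs must be co-registered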

  9. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    Directory of Open Access Journals (Sweden)

    Shuihua Wang

    2015-01-01

    Full Text Available Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer’s disease, Parkinson’s disease, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract the backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories, with accuracies of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines.

  10. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    Directory of Open Access Journals (Sweden)

    Zhang Zewei

    2014-01-01

    Full Text Available In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. In this paper, Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. Using the FCM analysis of the image for edge enhancement, interference from weak edges is eliminated, and clear segmentation results for breast lumps are obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate and objective basis for the quantitative and qualitative analysis of breast lumps.

  11. Validation of a training method for L2 continuous-speech segmentation

    NARCIS (Netherlands)

    Cutler, A.; Shanley, J.

    2010-01-01

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for secondlanguage listening, focusing on speech segmentation

  12. An energy minimization method for MS lesion segmentation from T1-w and FLAIR images.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Liu, Yu; Bilello, Michel; Li, Chunming

    2017-06-01

    In this paper, we extend the multiplicative intrinsic component optimization (MICO) algorithm to multichannel MR image segmentation, with focus on segmentation of multiple sclerosis (MS) lesions. The MICO algorithm was originally proposed by Li et al. in Ref. [1] for normal brain tissue segmentation and intensity inhomogeneity correction of a single channel MR image, which exhibits desirable advantages over other methods for MR image segmentation and intensity inhomogeneity correction in terms of segmentation accuracy and robustness. In this paper, we extend the MICO algorithm to multi-channel MR image segmentation and enable the segmentation of MS lesions. We assign different weights for different channels to control the impact of each channel. The weighted channels allow the enhancement of the impact of the FLAIR image on the segmentation of MS lesions by assigning a larger weight to the FLAIR image channel than the other channels. With the inherent mechanism of estimation of the bias field, our method is able to deal with the intensity inhomogeneity in the input multi-channel MR images. In the application of our method, we only use T1-w and FLAIR images as the input two channel MR images. Experimental results show promising result of our method. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. This method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.

  14. An MRI digital brain phantom for validation of segmentation methods.

    Science.gov (United States)

    Alfano, Bruno; Comerci, Marco; Larobina, Michele; Prinster, Anna; Hornak, Joseph P; Selvan, S Easter; Amato, Umberto; Quarantelli, Mario; Tedeschi, Gioacchino; Brunetti, Arturo; Salvatore, Marco

    2011-06-01

    Knowledge of the exact spatial distribution of brain tissues in images acquired by magnetic resonance imaging (MRI) is necessary to measure and compare the performance of segmentation algorithms. Currently available physical phantoms do not satisfy this requirement. State-of-the-art digital brain phantoms also fall short because they do not handle separately anatomical structures (e.g. basal ganglia) and provide relatively rough simulations of tissue fine structure and inhomogeneity. We present a software procedure for the construction of a realistic MRI digital brain phantom. The phantom consists of hydrogen nuclear magnetic resonance spin-lattice relaxation rate (R1), spin-spin relaxation rate (R2), and proton density (PD) values for a 24 × 19 × 15.5 cm volume of a "normal" head. The phantom includes 17 normal tissues, each characterized by both mean value and variations in R1, R2, and PD. In addition, an optional tissue class for multiple sclerosis (MS) lesions is simulated. The phantom was used to create realistic magnetic resonance (MR) images of the brain using simulated conventional spin-echo (CSE) and fast field-echo (FFE) sequences. Results of mono-parametric segmentation of simulations of sequences with different noise and slice thickness are presented as an example of possible applications of the phantom. The phantom data and simulated images are available online at http://lab.ibb.cnr.it/. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. MIA-Clustering: a novel method for segmentation of paleontological material

    Directory of Open Access Journals (Sweden)

    Christopher J. Dunmore

    2018-02-01

    Full Text Available Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  16. Segmented metallic nanostructures, homogeneous metallic nanostructures and methods for producing same

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Stanislaus; Koenigsmann, Christopher

    2017-04-18

    The present invention includes a method of producing a segmented 1D nanostructure. The method includes providing a vessel containing a template wherein on one side of the template is a first metal reagent solution and on the other side of the template is a reducing agent solution, wherein the template comprises at least one pore; allowing a first segment of a 1D nanostructure to grow within a pore of the template until a desired length is reached; replacing the first metal reagent solution with a second metal reagent solution; allowing a second segment of a 1D nanostructure to grow from the first segment until a desired length is reached, wherein a segmented 1D nanostructure is produced.

  17. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Han Gao

    2017-10-01

    Full Text Available The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.

  18. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images.

    Science.gov (United States)

    Gao, Han; Tang, Yunwei; Jing, Linhai; Li, Hui; Ding, Haifeng

    2017-10-24

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods.
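
    As an illustration of the spatial autocorrelation indicator mentioned above, the following is a minimal sketch of global Moran's I computed over per-segment feature means with an adjacency weight matrix. The function name, the choice of feature, and the binary adjacency weights are simplifying assumptions; the paper combines this indicator with a spatial stratified heterogeneity indicator via the Mahalanobis distance.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I over segment-level feature means.

    values: (n_segments,) mean feature value per segment
    W:      (n_segments, n_segments) spatial weights (e.g. 1 if two
            segments share a border, 0 otherwise; zero diagonal)
    """
    n = len(values)
    z = values - values.mean()               # deviations from the mean
    num = n * np.sum(W * np.outer(z, z))     # weighted cross-products
    den = W.sum() * np.sum(z ** 2)
    return num / den                         # near +1 clustered, near -1 dispersed
```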

  19. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and

  20. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    Science.gov (United States)

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. The generalized masking formulation has static scale value selection, which limits the gain of contrast. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results as compared with other reported literature. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  2. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  3. Multichannel EEG compression: wavelet-based image and volumetric coding approach.

    Science.gov (United States)

    Srinivasan, K; Dauwels, J; Ramasubba, M R

    2013-01-01

    In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of an image (matrix) or volumetric data (tensor); next, a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of lossy plus residual coding, consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such an approach guarantees a specifiable maximum error between the original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with a different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.
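
    The lossy-plus-residual principle described above can be sketched in a few lines: a wavelet layer is coarsely quantized, and the residual is quantized with a step that caps the reconstruction error. This is a minimal sketch assuming integer-valued samples; the wavelet, decomposition level, quantizer steps and error bound are illustrative, and the entropy (arithmetic) coding stage is omitted.

```python
import numpy as np
import pywt

def encode(x, wavelet="db4", level=4, q=8.0, eps=2):
    """x: integer-valued EEG samples; returns data for the entropy coder."""
    coeffs = pywt.wavedec(x.astype(float), wavelet, level=level)
    # Lossy layer: uniform quantization of all subband coefficients.
    q_coeffs = [np.round(c / q).astype(np.int32) for c in coeffs]
    lossy = np.round(pywt.waverec([c * q for c in q_coeffs], wavelet)[: len(x)])
    # Residual layer: an odd step of 2*eps+1 caps |error| at eps per sample.
    q_res = np.round((x - lossy) / (2 * eps + 1)).astype(np.int64)
    return q_coeffs, q_res

def decode(q_coeffs, q_res, n, wavelet="db4", q=8.0, eps=2):
    lossy = np.round(pywt.waverec([c * q for c in q_coeffs], wavelet)[:n])
    return lossy + q_res * (2 * eps + 1)   # deviates from x by at most eps
```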

  4. Wavelet-based regularization and edge preservation for submillimetre 3D list-mode reconstruction data from a high resolution small animal PET system

    Energy Technology Data Exchange (ETDEWEB)

    Jesus Ochoa Dominguez, Humberto de, E-mail: hochoa@uacj.mx [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico); Ortega Maynez, Leticia; Osiris Vergara Villegas, Osslan; Gordillo Castillo, Nelly; Guadalupe Cruz Sanchez, Vianey; Gutierrez Casas, Efren David [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico)

    2011-10-01

    The data obtained from a PET system tend to be noisy because of the limitations of the current instrumentation and the detector efficiency. This problem is particularly severe in images of small animals as the noise contaminates areas of interest within small organs. Therefore, denoising becomes a challenging task. In this paper, a novel wavelet-based regularization and edge preservation method is proposed to reduce such noise. To demonstrate this method, image reconstruction using a small mouse ¹⁸F NEMA phantom and an ¹⁸F mouse was performed. The effects on image quality were investigated for each reconstruction case. Results show that the proposed method drastically reduces the noise and preserves the image details.
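
    For readers unfamiliar with wavelet-domain denoising, the following is a generic 2D wavelet-shrinkage sketch in the spirit of the method above; the soft universal threshold and MAD noise estimate are standard textbook choices, not the authors' exact regularization or edge-preservation scheme.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise scale from the median absolute deviation of the finest diagonal band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))        # universal threshold
    denoised = [coeffs[0]]                               # keep approximation band
    for cH, cV, cD in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thr, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)
```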

  5. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    Science.gov (United States)

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results of the regions of interest, an automatic segmentation method of a parameter-adaptive pulse-coupled neural network is proposed to integrate the above-mentioned two segmentation steps into one. This method has a low computational complexity for different kinds of medical images and has a high segmentation precision. The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with the state-of-the-art algorithms, the new method achieves a comparable performance by the experimental results from ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and the right breast, presenting an overall metric UM of 0.9845, CM of 0.8142, TM of 0.0726. The algorithm has a great potential to achieve the pre-processing and initial segmentation steps in various medical images. This is a premise for assisting physicians to detect and diagnose clinical cases.
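
    A bare SPCNN iteration loop, to make the role of the adaptively set parameters concrete. The update equations follow the standard simplified PCNN; the numeric parameter values below are placeholders for the quantities the paper sets adaptively, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(S, alpha_f=0.3, beta=0.2, V_L=1.0, alpha_e=0.5,
                  V_E=20.0, n_iter=10):
    S = S.astype(float) / S.max()         # normalized stimulus (the image)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])       # 8-neighbour linking kernel
    U = np.zeros_like(S)                  # internal activity
    E = np.ones_like(S)                   # dynamic threshold
    Y = np.zeros_like(S)                  # binary firing map
    fired = np.zeros(S.shape, dtype=int)  # iteration at which each pixel fired
    for n in range(1, n_iter + 1):
        L = convolve(Y, W, mode="constant")              # linking input
        U = np.exp(-alpha_f) * U + S * (1.0 + beta * V_L * L)
        Y = (U > E).astype(float)                        # pulse generation
        E = np.exp(-alpha_e) * E + V_E * Y               # threshold update
        fired[(fired == 0) & (Y == 1)] = n
    return fired   # pixels that fire in the same epoch form one segment
```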

  6. Path segmentation for beginners: an overview of current methods for detecting changes in animal movement patterns.

    Science.gov (United States)

    Edelhoff, Hendrik; Signer, Johannes; Balkenhol, Niko

    2016-01-01

    Increased availability of high-resolution movement data has led to the development of numerous methods for studying changes in animal movement behavior. Path segmentation methods provide the basis for detecting movement changes and the behavioral mechanisms driving them. However, available path segmentation methods differ vastly with respect to underlying statistical assumptions and output produced. Consequently, it is currently difficult for researchers new to path segmentation to gain an overview of the different methods, and choose one that is appropriate for their data and research questions. Here, we provide an overview of different methods for segmenting movement paths according to potential changes in underlying behavior. To structure our overview, we outline three broad types of research questions that are commonly addressed through path segmentation: 1) the quantitative description of movement patterns, 2) the detection of significant change-points, and 3) the identification of underlying processes or 'hidden states'. We discuss advantages and limitations of different approaches for addressing these research questions using path-level movement data, and present general guidelines for choosing methods based on data characteristics and questions. Our overview illustrates the large diversity of available path segmentation approaches, highlights the need for studies that compare the utility of different methods, and identifies opportunities for future developments in path-level data analysis.

  7. Using Wavelet-Based Functional Mixed Models to Characterize Population Heterogeneity in Accelerometer Profiles: A Case Study

    Science.gov (United States)

    Morris, Jeffrey S.; Arroyo, Cassandra; Coull, Brent A.; Ryan, Louise M.; Herrick, Richard; Gortmaker, Steven L.

    2008-01-01

    We present a case study illustrating the challenges of analyzing accelerometer data taken from a sample of children participating in an intervention study designed to increase physical activity. An accelerometer is a small device worn on the hip that records the minute-by-minute activity levels of the child throughout the day for each day it is worn. The resulting data are irregular functions characterized by many peaks representing short bursts of intense activity. We model these data using the wavelet-based functional mixed model. This approach incorporates multiple fixed effect and random effect functions of arbitrary form, the estimates of which are adaptively regularized using wavelet shrinkage. The method yields posterior samples for all functional quantities of the model, which can be used to perform various types of Bayesian inference and prediction. In our case study, a high proportion of the daily activity profiles are incomplete, i.e. have some portion of the profile missing, so cannot be directly modeled using the previously described method. We present a new method for stochastically imputing the missing data that allows us to incorporate these incomplete profiles in our analysis. Our approach borrows strength from both the observed measurements within the incomplete profiles and from other profiles, from the same child as well as other children with similar covariate levels, while appropriately propagating the uncertainty of the imputation throughout all subsequent inference. We apply this method to our case study, revealing some interesting insights into children's activity patterns. We point out some strengths and limitations of using this approach to analyze accelerometer data. PMID:19169424

  8. Wavelet-based evolutionary response of multi-span structures including wave-passage and site-response effects

    NARCIS (Netherlands)

    Dinh, V.N.; Basu, B.; Brinkgreve, R.B.J.

    2013-01-01

    Stochastic seismic wavelet-based evolutionary response of multispan structures including wave-passage and site-response effects is formulated in this paper. A procedure to estimate site-compatible parameters of surface-to-bedrock frequency response function (FRF) by using finite-element analysis of

  9. An image segmentation method for apple sorting and grading using support vector machine and Otsu's method

    Science.gov (United States)

    Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...

  10. A Two-Step Segmentation Method for Breast Ultrasound Masses Based on Multi-resolution Analysis.

    Science.gov (United States)

    Rodrigues, Rafael; Braz, Rui; Pereira, Manuela; Moutinho, José; Pinheiro, Antonio M G

    2015-06-01

    Breast ultrasound images have several attractive properties that make them an interesting tool in breast cancer detection. However, their intrinsic high noise rate and low contrast turn mass detection and segmentation into a challenging task. In this article, a fully automated two-stage breast mass segmentation approach is proposed. In the initial stage, ultrasound images are segmented using support vector machine or discriminant analysis pixel classification with a multiresolution pixel descriptor. The features are extracted using non-linear diffusion, bandpass filtering and scale-variant mean curvature measures. A set of heuristic rules complement the initial segmentation stage, selecting the region of interest in a fully automated manner. In the second segmentation stage, refined segmentation of the area retrieved in the first stage is attempted, using two different techniques. The AdaBoost algorithm uses a descriptor based on scale-variant curvature measures and non-linear diffusion of the original image at lower scales, to improve the spatial accuracy of the ROI. Active contours use the segmentation results from the first stage as initial contours. Results for both proposed segmentation paths were promising, with normalized Dice similarity coefficients of 0.824 for AdaBoost and 0.813 for active contours. Recall rates were 79.6% for AdaBoost and 77.8% for active contours, whereas the precision rate was 89.3% for both methods. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
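
    For reference, the overlap figures quoted above (Dice similarity, recall, precision) reduce to simple set operations on binary masks; a straightforward implementation, with the function name being our own:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """pred, truth: binary masks of the segmented and reference regions."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()        # correctly labelled pixels
    dice = 2.0 * tp / (pred.sum() + truth.sum())
    recall = tp / truth.sum()       # fraction of the true mass recovered
    precision = tp / pred.sum()     # fraction of the prediction that is mass
    return dice, recall, precision
```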

  11. Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging.

    Science.gov (United States)

    García-Lorenzo, Daniel; Francis, Simon; Narayanan, Sridar; Arnold, Douglas L; Collins, D Louis

    2013-01-01

    Magnetic resonance (MR) imaging is often used to characterize and quantify multiple sclerosis (MS) lesions in the brain and spinal cord. The number and volume of lesions have been used to evaluate MS disease burden, to track the progression of the disease and to evaluate the effect of new pharmaceuticals in clinical trials. Accurate identification of MS lesions in MR images is extremely difficult due to variability in lesion location, size and shape in addition to anatomical variability between subjects. Since manual segmentation requires expert knowledge, is time consuming and is subject to intra- and inter-expert variability, many methods have been proposed to automatically segment lesions. The objective of this study was to carry out a systematic review of the literature to evaluate the state of the art in automated multiple sclerosis lesion segmentation. From 1240 hits found initially with PubMed and Google Scholar, our selection criteria identified 80 papers that described an automatic lesion segmentation procedure applied to MS. Only 47 of these included quantitative validation with at least one realistic image. In this paper, we describe the complexity of lesion segmentation, classify the automatic MS lesion segmentation methods found, and review the validation methods applied in each of the papers reviewed. Although many segmentation solutions have been proposed, including some with promising results using MRI data obtained on small groups of patients, no single method is widely employed due to performance issues related to the high variability of MS lesion appearance and differences in image acquisition. The challenge remains to provide segmentation techniques that work in all cases regardless of the type of MS, duration of the disease, or MRI protocol, and this within a comprehensive, standardized validation framework. MS lesion segmentation remains an open problem. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Concrete Image Segmentation Based on Multiscale Mathematic Morphology Operators and Otsu Method

    Directory of Open Access Journals (Sweden)

    Sheng-Bo Zhou

    2015-01-01

    Full Text Available The aim of the current study lies in the development of a reformative technique of image segmentation for Computed Tomography (CT) concrete images with the strength grades of C30 and C40. The results of comparison with traditional approaches indicate that three threshold algorithms and five edge detectors fail to meet the demands of segmenting Computed Tomography concrete images. The paper proposes a new segmentation method, by combining a multiscale noise suppression morphology edge detector with the Otsu method, which is more appropriate for the segmentation of Computed Tomography concrete images with low contrast. This method can not only locate the boundaries between objects and background with high accuracy, but also obtain a complete edge and eliminate noise.
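
    A sketch of the combination the paper describes: a multiscale morphological gradient with opening-closing noise suppression, binarized by Otsu's method. The structuring-element shape and sizes are illustrative assumptions, not the authors' exact operators.

```python
import cv2
import numpy as np

def morpho_otsu_edges(gray, scales=(3, 5, 7)):
    """gray: 8-bit grayscale CT slice; returns a binary edge map."""
    acc = np.zeros(gray.shape, dtype=np.float64)
    for k in scales:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
        # Opening-closing suppresses speckle-like noise at this scale.
        smoothed = cv2.morphologyEx(
            cv2.morphologyEx(gray, cv2.MORPH_OPEN, se), cv2.MORPH_CLOSE, se)
        # Morphological gradient = dilation minus erosion (edge strength).
        acc += cv2.morphologyEx(smoothed, cv2.MORPH_GRADIENT, se).astype(np.float64)
    edges = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method selects the binarization threshold automatically.
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```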

  13. Combining watershed and graph cuts methods to segment organs at risk in radiotherapy

    Science.gov (United States)

    Dolz, Jose; Kirisli, Hortense A.; Viard, Romain; Massoptier, Laurent

    2014-03-01

    Computer-aided segmentation of anatomical structures in medical images is a valuable tool for efficient radiation therapy planning (RTP). As delineation errors highly affect the radiation oncology treatment, it is crucial to delineate geometric structures accurately. In this paper, a semi-automatic segmentation approach for computed tomography (CT) images, based on watershed and graph-cuts methods, is presented. The watershed pre-segmentation groups small areas of similar intensities into homogeneous labels, which are subsequently used as input for the graph-cuts algorithm. This methodology does not require prior knowledge of the structure to be segmented; even so, it performs well with complex shapes and low intensity. The presented method also allows the user to add foreground and background strokes in any of the three standard orthogonal views - axial, sagittal or coronal - making the interaction with the algorithm easy and fast. Hence, the segmentation information is propagated within the whole volume, providing a spatially coherent result. The proposed algorithm has been evaluated using 9 CT volumes, by comparing its segmentation performance over several organs - lungs, liver, spleen, heart and aorta - to those of manual delineation from experts. A Dice coefficient higher than 0.89 was achieved in every case. That demonstrates that the proposed approach works well for all the anatomical structures analyzed. Due to the quality of the results, the introduction of the proposed approach in the RTP process will be a helpful tool for organs at risk (OARs) segmentation.
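
    A non-interactive sketch of the same two-stage idea: a watershed over-segmentation produces small homogeneous labels, which then become nodes of a region adjacency graph that is partitioned. A simple threshold cut on mean intensities stands in here for the stroke-seeded graph-cuts stage of the paper; names and parameter values are illustrative (in scikit-image releases before 0.20 the graph module lives in skimage.future.graph).

```python
import numpy as np
from skimage import filters, segmentation, graph

def watershed_then_graph(img, n_markers=400, cut_thresh=0.08):
    """img: 2-D float image in [0, 1]; returns merged region labels."""
    gradient = filters.sobel(img)
    # Stage 1: watershed pre-segmentation into many small homogeneous labels.
    labels = segmentation.watershed(gradient, markers=n_markers,
                                    compactness=0.001)
    # Stage 2: graph over the labels; merge adjacent regions whose mean
    # intensities differ by less than cut_thresh.
    rag = graph.rag_mean_color(np.dstack([img] * 3), labels)
    return graph.cut_threshold(labels, rag, cut_thresh)
```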

  14. A Split-and-Merge-Based Uterine Fibroid Ultrasound Image Segmentation Method in HIFU Therapy.

    Science.gov (United States)

    Xu, Menglong; Zhang, Dong; Yang, Yan; Liu, Yu; Yuan, Zhiyong; Qin, Qianqing

    2015-01-01

    High-intensity focused ultrasound (HIFU) therapy has been used to treat uterine fibroids widely and successfully. Uterine fibroid segmentation plays an important role in positioning the target region for HIFU therapy. Presently, it is completed by physicians manually, reducing the efficiency of therapy. Thus, computer-aided segmentation of uterine fibroids benefits the improvement of therapy efficiency. Recently, most computer-aided ultrasound segmentation methods have been based on the framework of contour evolution, such as snakes and level sets. These methods can achieve good performance, although they need an initial contour that influences segmentation results. It is difficult to obtain the initial contour automatically; thus, the initial contour is always obtained manually in many segmentation methods. A split-and-merge-based uterine fibroid segmentation method, which needs no initial contour to ensure less manual intervention, is proposed in this paper. The method first splits the image into many small homogeneous regions called superpixels. A new feature representation method based on texture histogram is employed to characterize each superpixel. Next, the superpixels are merged according to their similarities, which are measured by integrating their Quadratic-Chi texture histogram distances with their space adjacency. Multi-way Ncut is used as the merging criterion, and an adaptive scheme is incorporated to decrease manual intervention further. The method is implemented using Matlab on a personal computer (PC) platform with Intel Pentium Dual-Core CPU E5700. The method is validated on forty-two ultrasound images acquired from HIFU therapy. The average running time is 9.54 s. Statistical results showed that SI reaches a value as high as 87.58%, and normHD is 5.18% on average. It has been demonstrated that the proposed method is appropriate for segmentation of uterine fibroids in HIFU pre-treatment imaging and planning.

  15. A Split-and-Merge-Based Uterine Fibroid Ultrasound Image Segmentation Method in HIFU Therapy.

    Directory of Open Access Journals (Sweden)

    Menglong Xu

    Full Text Available High-intensity focused ultrasound (HIFU) therapy has been used to treat uterine fibroids widely and successfully. Uterine fibroid segmentation plays an important role in positioning the target region for HIFU therapy. Presently, it is completed by physicians manually, reducing the efficiency of therapy. Thus, computer-aided segmentation of uterine fibroids benefits the improvement of therapy efficiency. Recently, most computer-aided ultrasound segmentation methods have been based on the framework of contour evolution, such as snakes and level sets. These methods can achieve good performance, although they need an initial contour that influences segmentation results. It is difficult to obtain the initial contour automatically; thus, the initial contour is always obtained manually in many segmentation methods. A split-and-merge-based uterine fibroid segmentation method, which needs no initial contour to ensure less manual intervention, is proposed in this paper. The method first splits the image into many small homogeneous regions called superpixels. A new feature representation method based on texture histogram is employed to characterize each superpixel. Next, the superpixels are merged according to their similarities, which are measured by integrating their Quadratic-Chi texture histogram distances with their space adjacency. Multi-way Ncut is used as the merging criterion, and an adaptive scheme is incorporated to decrease manual intervention further. The method is implemented using Matlab on a personal computer (PC) platform with Intel Pentium Dual-Core CPU E5700. The method is validated on forty-two ultrasound images acquired from HIFU therapy. The average running time is 9.54 s. Statistical results showed that SI reaches a value as high as 87.58%, and normHD is 5.18% on average. It has been demonstrated that the proposed method is appropriate for segmentation of uterine fibroids in HIFU pre-treatment imaging and planning.
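
    The split-and-merge pipeline above can be approximated in a few lines with off-the-shelf components: SLIC superpixels for the "split" stage and a multiway normalized cut over the superpixel adjacency graph for the "merge" stage. This sketch substitutes mean-intensity similarity for the paper's Quadratic-Chi texture-histogram distance, so it illustrates the structure rather than reproducing the method (older scikit-image versions expose the graph module as skimage.future.graph).

```python
import numpy as np
from skimage import segmentation, graph, util

def split_and_merge(img_gray, n_superpixels=600):
    img = np.dstack([util.img_as_float(img_gray)] * 3)
    # "Split": SLIC superpixels stand in for the small homogeneous regions.
    sp = segmentation.slic(img, n_segments=n_superpixels, compactness=10,
                           start_label=1)
    # "Merge": multiway normalized cut over the region adjacency graph,
    # weighted by mean-intensity similarity between neighbouring superpixels.
    rag = graph.rag_mean_color(img, sp, mode="similarity")
    return graph.cut_normalized(sp, rag)
```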

  16. Computational methods for corpus callosum segmentation on MRI: A systematic literature review.

    Science.gov (United States)

    Cover, G S; Herrera, W G; Bento, M P; Appenzeller, S; Rittner, L

    2018-02-01

    The corpus callosum (CC) is the largest white matter structure in the brain and has a significant role in central nervous system diseases. Its volume correlates with the severity and/or extent of neurodegenerative disease. Even though the CC's role has been extensively studied over the last decades, and different algorithms and methods have been published regarding CC segmentation and parcellation, no reviews or surveys covering such developments have been reported so far. To bridge this gap, this paper presents a systematic literature review of computational methods focusing on CC segmentation and parcellation acquired on magnetic resonance imaging. IEEExplore, PubMed, EBSCO Host, and Scopus database were searched with the following search terms: ((Segmentation OR Parcellation) AND (Corpus Callosum) AND (DTI OR MRI OR Diffusion Tensor Imag* OR Diffusion Tractography OR Magnetic Resonance Imag*)), resulting in 802 publications. Two reviewers independently evaluated all articles and 36 studies were selected through the systematic literature review process. This work reviewed four main segmentation method groups: model-based, region-based, thresholding, and machine learning; 32 different validity metrics were reported. Even though model-based techniques are the most recurrently used for the segmentation task (13 articles), machine learning approaches achieved better outcomes, reaching 95% when analyzing the mean values of the reported segmentation and classification metrics. Moreover, CC segmentation is better established in T1-weighted images, having more methods implemented and also being tested in larger datasets, compared with diffusion tensor images. The analyzed computational methods used to perform CC segmentation on magnetic resonance imaging have not yet overcome all presented challenges owing to metrics variability and lack of traceable materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Breast cancer detection using combined curvelet based enhancement and a novel segmentation methods.

    Science.gov (United States)

    Senthilkumar, Balasubramaniam; Umamaheswari, Govindaswamy

    2015-03-01

    This paper describes the digital implementation of a mathematical transform, namely the 2D Fast Discrete Curvelet Transform (FDCT) via UnequiSpaced Fast Fourier Transform (USFFT), in combination with a novel segmentation method for effective detection of breast cancer. USFFT performs exact reconstructions with high image clarity. Radon, ridgelet and Cartesian filters are included in this method. Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) were calculated for the image, and the resulting values showed that the proposed method performs well on mammogram images in reducing noise with good extraction of edges. This work includes a novel segmentation method, which combines Modified Local Range Modification (MLRM) and Laplacian of Gaussian (LoG) edge detection to segment the textured features in the mammogram image. The result was analyzed using a Receiver Operating Characteristics (ROC) plot and the detection accuracy found was 99%, which is good compared with existing methods.

  18. A New Method for Segmentation of Multiple Sclerosis (MS Lesions on Brain MR Images

    Directory of Open Access Journals (Sweden)

    Simin Jafari

    2015-07-01

    Full Text Available Automatic segmentation of multiple sclerosis (MS) lesions in brain MRI has been widely investigated in recent years with the goal of helping MS diagnosis and patient follow-up. In this study we applied a gaussian mixture model (GMM) to segment MS lesions in MR images. Usually, GMM is optimized using the expectation-maximization (EM) algorithm. One of the drawbacks of this optimization method is that it does not converge to an optimal maximum or minimum. Starting from different initial points and saving the best result is a strategy used to reach a near-optimal solution. This approach is time consuming, so we used another way to initialize the EM algorithm. Also, the FAST-Trimmed Likelihood Estimator (FAST-TLE) algorithm was applied to determine which voxels should be rejected. The automatic segmentation outputs were scored by two specialists and the results show that our method has the capability to segment MS lesions with a Dice similarity coefficient (DSC) score of 0.82.
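
    A bare-bones version of the GMM intensity model for brain MR is easy to set up with standard tooling; here scikit-learn's EM with k-means initialization stands in for the paper's custom initialization and FAST-TLE voxel rejection, and the lesion-class heuristic is our own assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(volume, brain_mask, n_classes=4, seed=0):
    intensities = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          init_params="kmeans", random_state=seed)
    labels_in_mask = gmm.fit_predict(intensities)      # EM fit + hard labels
    labels = np.zeros(volume.shape, dtype=int)
    labels[brain_mask] = labels_in_mask + 1            # 0 is background
    # Heuristic lesion candidate: the component with the highest mean,
    # assuming lesions are hyperintense (e.g. on FLAIR).
    lesion_class = int(np.argmax(gmm.means_.ravel())) + 1
    return labels, lesion_class
```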

  19. Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    Science.gov (United States)

    2014-01-01

    WebKB benchmark accuracies reported for comparison methods: vector method [12], 64.47%; k-nearest neighbors (k = 10) [12], 72.56%; centroid (normalized sum) [12], 82.66%; naive Bayes [12], 83.52%. ...scheme, which alternates between diffusion and thresholding. We demonstrate the performance of both algorithms experimentally on synthetic data and image benchmarks. Keywords: Ginzburg-Landau functional, diffuse interface, MBO scheme, graphs, convex splitting, image processing, high-dimensional data.

  20. A METHOD OF LEUKOCYTE SEGMENTATION BASED ON S COMPONENT AND B COMPONENT IMAGES

    OpenAIRE

    YIPING YANG; YIPING CAO; WENXIAN SHI

    2014-01-01

    A leukocyte segmentation method based on S component and B component images is proposed. Threshold segmentation operation is applied to get two binary images in S component and B component images. The samples used in this study are peripheral blood smears. It is easy to find from the two binary images that gray values are the same at every corresponding pixel in the leukocyte cytoplasm region, but opposite in the other regions. The feature shows that "IMAGE AND" operation can be employed on ...

  1. A Hybrid Method for Segmentation and Visualization of Teeth in Multi-Slice CT scan Images

    Directory of Open Access Journals (Sweden)

    Mohammad Hosntalab

    2009-12-01

    Full Text Available Introduction: Various computer assisted medical procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries require automatic quantification and volumetric visualization of teeth. In this regard, segmentation is a major step. Material and Methods: In this paper, inspired by our previous experiences and considering the anatomical knowledge of teeth and jaws, we propose a hybrid technique for teeth segmentation and visualization in CT volumetric data. The major steps of the proposed technique are as follows: (1) separation of teeth in the CT dataset; (2) initial segmentation of teeth in panoramic projection; (3) final segmentation of teeth in the CT dataset; (4) 3D visualization of teeth. Results: The proposed algorithm was evaluated in 30 multi-slice CT datasets. Segmented images were compared with manually outlined contours. In order to evaluate the proposed method, we utilized several common performance measures such as sensitivity, specificity, precision, accuracy and mean error rate. The experimental results reveal the effectiveness of the proposed method. Discussion and Conclusion: In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. In view of the fact that this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour using this technique. For the available datasets, the proposed technique was more successful in teeth segmentation compared to previous techniques.

  2. A Spatial Shape Constrained Clustering Method for Mammographic Mass Segmentation

    Directory of Open Access Journals (Sweden)

    Jian-Yong Lou

    2015-01-01

    error of 7.18% for well-defined masses (or 8.06% for ill-defined masses) was obtained by using DACF on the MiniMIAS database, with 5.86% (or 5.55%) and 6.14% (or 5.27%) improvements as compared to the standard DA and fuzzy c-means methods.

  3. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Full Text Available Introduction: Ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various metabolic processes of humans, and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace the ingredients of fish oil with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and monitor their age in order to achieve the best results. This paper aims to describe an advanced vision-based system to automatically detect, classify, and track the organic cells using a recently developed SOPAT-System (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization and enhancement, as well as object recognition, localization and classification based on the analysis of particles' size and texture. Results: The method allowed for correctly computing cell size for each particle separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method allows algae particles to be correctly identified with accuracies up to 99% and classified correctly with accuracies up to 100%.

  4. Intravitreal Triamcinolone in Posterior Segment Diseases – Method ...

    African Journals Online (AJOL)

    DR OLULEYE

    primary treatment of macular oedema from diabetic retinopathy, retinal vein occlusion and posterior uveitis. It has also been found to be useful in cystoid macular oedema, idiopathic juxtafoveal telangiectasia and neovascular age- related macular degeneration.3-10. METHOD OF INTRAVITREAL ADMINISTRATION.

  5. Control of equipment isolation system using wavelet-based hybrid sliding mode control

    Science.gov (United States)

    Huang, Shieh-Kung; Loh, Chin-Hsiung

    2017-04-01

    -structural components. The aim of this paper is to develop a hybrid control algorithm for the control of both structures and equipment simultaneously, overcoming the limitations of classical feedback control by combining the advantages of classic LQR and SMC. To suppress vibrations whose frequency content during strong earthquakes differs from the natural frequencies of civil structures, the hybrid control algorithms are integrated with a wavelet-based vibration control algorithm. The performance of the classical, hybrid, and wavelet-based hybrid control algorithms, as well as the responses of structural and non-structural components, are evaluated and discussed through numerical simulation in this study.

  6. A coastal zone segmentation variational model and its accelerated ADMM method

    Science.gov (United States)

    Huang, Baoxiang; Chen, Ge; Zhang, Xiaolei; Yang, Huan

    2017-12-01

    Effective and efficient SAR image segmentation has a significant role in coastal zone interpretation. In this paper, a coastal zone segmentation model is proposed based on the Potts model. By introducing an edge self-adaption parameter and modifying the noisy data term, the proposed variational model provides a good solution for coastal zone SAR images with the common characteristics of inherent speckle noise and complicated geometrical details. However, the proposed model is difficult to solve due to its nonlinear, non-convex and non-smooth characteristics. Following curve evolution theory and the operator splitting method, the minimization problem is reformulated as a constrained minimization problem. A fast alternating minimization iterative scheme is designed to implement coastal zone segmentation. Finally, various two-stage and multiphase experimental results illustrate the advantage of the proposed segmentation model, and indicate the high computational efficiency of the designed numerical approximation algorithm.

  7. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    Science.gov (United States)

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
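
    Majority-vote label fusion is the baseline that graph-based multi-atlas frameworks such as the one above generalize; a minimal reference implementation (assuming the atlas label maps are already warped into the target space):

```python
import numpy as np

def majority_vote(warped_atlas_labels):
    """warped_atlas_labels: list of integer label maps, one per atlas,
    all registered to the target image and of identical shape."""
    stack = np.stack(warped_atlas_labels)         # (n_atlases, *image_shape)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lab in range(n_labels):
        votes[lab] = (stack == lab).sum(axis=0)   # count votes per label
    return votes.argmax(axis=0)                   # most-voted label wins
```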

  8. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
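
    The core of STAPLE is a small EM loop; the following compact binary version (after Warfield et al. 2004) estimates a voxelwise truth probability together with each rater's sensitivity and specificity. The CRL implementation used in the study has many more options (spatial priors, MRF regularization) than this bare sketch, and the fixed global prior is a simplification.

```python
import numpy as np

def staple(D, n_iter=50, tol=1e-6):
    """D: (n_raters, n_voxels) binary decisions.
    Returns W (truth probabilities), p (sensitivities), q (specificities)."""
    R, N = D.shape
    p = np.full(R, 0.9)                  # initial sensitivities
    q = np.full(R, 0.9)                  # initial specificities
    prior = D.mean()                     # global foreground prior (simplified)
    W = np.full(N, prior)
    for _ in range(n_iter):
        # E-step: probability that each voxel is truly foreground.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W_new = a / np.maximum(a + b, 1e-12)
        # M-step: re-estimate every rater's performance parameters.
        p = (D * W_new).sum(axis=1) / np.maximum(W_new.sum(), 1e-12)
        q = ((1 - D) * (1 - W_new)).sum(axis=1) / np.maximum((1 - W_new).sum(), 1e-12)
        if np.abs(W_new - W).max() < tol:
            return W_new, p, q
        W = W_new
    return W, p, q
```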

  9. The Segmented Prony method for the analysis of non-stationary time series

    Science.gov (United States)

    Barone, P.; Massaro, E.; Polichetti, A.

    1989-01-01

    An extension of the classic Prony method for fitting a sum of damped sinusoids to experimental data is proposed. This new method, called the Segmented Prony Method (SPM), is particularly suited to studying time-varying phenomena. It is based on a division of the observational interval into several short segments, where the signal can be assumed stationary. A model for the data, dependent on a set of parameters, is estimated by means of an efficient algorithm based on well-known recursions for the estimation of the autoregressive coefficients. An application to an X-ray observation of the active galaxy NGC 7314 is discussed.
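
    The per-segment computation is the classic three-step Prony fit, which SPM applies to each short, locally stationary segment. A minimal sketch follows; the function name and the fixed model order are assumptions, and the paper estimates the autoregressive coefficients with efficient recursions rather than the direct least-squares solve used here.

```python
import numpy as np

def prony_segment(x, order):
    """Fit x[n] ~ sum_k A_k * z_k**n (damped sinusoids) to one segment."""
    N = len(x)
    # 1) Linear prediction: x[n] = a_1 x[n-1] + ... + a_p x[n-p].
    A = np.column_stack([x[order - 1 - j: N - 1 - j] for j in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    # 2) Roots of the characteristic polynomial give damping and frequency.
    z = np.roots(np.concatenate(([1.0], -a)))
    # 3) Complex amplitudes from the Vandermonde system, by least squares.
    V = np.vander(z, N, increasing=True).T        # V[n, k] = z_k**n
    amps, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, amps    # damping = ln|z_k|, frequency = angle(z_k)/(2*pi)
```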

  10. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    Science.gov (United States)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. Firstly, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform method, color-labeled segmentation of the fungi image is performed. Secondly, the fungi elements feature space is described and the feature set for fungi hyphae activity classification is extracted. The growth rate evaluation of fungi hyphae is achieved by using an SVM classifier. Some experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  11. A segmentation method based on HMRF for the aided diagnosis of acute myeloid leukemia.

    Science.gov (United States)

    Su, Jie; Liu, Shuai; Song, Jinming

    2017-12-01

    The diagnosis of acute myeloid leukemia (AML) is purely dependent on counting the percentages of blasts (>20%) in the peripheral blood or bone marrow. Manual microscopic examination of peripheral blood or bone marrow aspirate smears is time consuming and less accurate. The first and very important step in blast recognition is the segmentation of the cells from the background for further cell feature extraction and cell classification. In this paper, we aimed to utilize computer technologies in image analysis and artificial intelligence to develop an automatic program for blast recognition and counting in the aspirate smears. We proposed a method to analyze the aspirate smear images, which first performs segmentation of the cells by k-means clustering, then builds a cell image representation model by HMRF (hidden Markov random field), estimates the model parameters through EM (expectation maximization), iterates until convergence to an optimal value, and finally achieves a second-stage refined segmentation. Furthermore, the segmentation results are compared with several other methods using six classes of cells. The proposed method was applied to six groups of cells from 61 bone marrow aspirate images, and compared with other algorithms for its performance on the analysis of the whole images, the segmentation of nuclei, and the efficiency of calculation. It showed improved segmentation results in both the cropped images and the whole images, which provide the base for down-stream cell feature extraction and identification. Segmentation of the aspirate smear images using the proposed method helps the analyst in differentiating six groups of cells and in the determination of blast counting, which will be of great significance for the diagnosis of acute myeloid leukemia. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An analysis of methods for the selection of atlases for use in medical image segmentation

    Science.gov (United States)

    Prescott, Jeffrey W.; Best, Thomas M.; Haq, Furqan; Jackson, Rebecca; Gurcan, Metin

    2010-03-01

    The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR) images, although the results are pertinent for a wide range of applications. The experiments were performed using 103 images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50 images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set of readers was assigned the task of selecting atlases from a training population of images, which were selected to be representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset of the training data which was stratified based on population modes. Finally, every image in the training set was employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.

  13. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200 and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed by using a colour map allowing the maximum differences to be located. AS reconstructions showed similar tooth volumes when compared with SAS for the 41 µm voxel size. A difference in volumes was observed, and increased with the voxel size for CBCT data. The maximum differences were mainly found at the cervical margins and incisal edges but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is timesaving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in the field of clinical dentistry if some manual refinements are applied.

  14. Vascular segmentation in hepatic CT images using adaptive threshold fuzzy connectedness method.

    Science.gov (United States)

    Guo, Xiaoxi; Huang, Shaohui; Fu, Xiaozhu; Wang, Boliang; Huang, Xiaoyang

    2015-06-19

    Fuzzy connectedness method has shown its effectiveness for fuzzy object extraction in recent years. However, two problems may occur when applying it to the hepatic vessel segmentation task. One is the excessive computational cost, and the other is the difficulty of choosing a proper threshold value for final segmentation. In this paper, an accelerated strategy based on a lookup table was presented first, which can reduce the connectivity scene calculation time and achieve a speed-up factor of above 2. When the computing of the fuzzy connectedness relations is finished, a threshold is needed to generate the final result. Currently the threshold is preset by users. Since different thresholds may produce different outcomes, how to determine a proper threshold is crucial. According to our analysis of the hepatic vessel structure, a watershed-like method was used to find the optimal threshold. Meanwhile, by using the Otsu algorithm to calculate the parameters for affinity relations and assigning the seed with the mean value, it is able to reduce the influence on the segmentation result caused by the location of the seed and enhance the robustness of the fuzzy connectedness method. Experiments based on four different datasets demonstrate the efficiency of the lookup table strategy. These experiments also show that an adaptive threshold found by the watershed-like method can always generate correct segmentation results of hepatic vessels. Compared with a refined region-growing algorithm that has been widely used for hepatic vessel segmentation, the fuzzy connectedness method has advantages in detecting vascular edges and generating more than one vessel system through the weak connectivity of the vessel ends. An improved algorithm based on the fuzzy connectedness method is proposed. This algorithm has improved the performance of the fuzzy connectedness method in hepatic vessel segmentation.

  15. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    Energy Technology Data Exchange (ETDEWEB)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)

    2015-10-15

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter and intraobserver agreements was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing

  16. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Science.gov (United States)

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
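
    A minimal PyWavelets sketch of one such setup (average the approximation band, keep the max-magnitude detail coefficients) is given below; this is a generic strategy of the kind compared in the paper, not its recommended configuration, and it assumes two registered grayscale images of equal size.

```python
import numpy as np
import pywt

def wavelet_fuse(visible, infrared, wavelet="db2", level=3):
    """Fuse two registered grayscale images in the wavelet domain."""
    cv = pywt.wavedec2(visible, wavelet, level=level)
    ci = pywt.wavedec2(infrared, wavelet, level=level)
    fused = [0.5 * (cv[0] + ci[0])]              # average approximations
    for dv, di in zip(cv[1:], ci[1:]):           # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)
```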

  17. A study on discrete wavelet-based noise removal from EEG signals.

    Science.gov (United States)

    Asaduzzaman, K; Reaz, M B I; Mohd-Yasin, F; Sim, K S; Hussain, M S

    2010-01-01

    Electroencephalogram (EEG) serves as an extremely valuable tool for clinicians and researchers to study the activity of the brain in a non-invasive manner. It has long been used for the diagnosis of various central nervous system disorders like seizures, epilepsy, and brain damage and for categorizing sleep stages in patients. Artifacts caused by various factors, such as the Electrooculogram (EOG), eye blinks, and the Electromyogram (EMG), increase the difficulty of analyzing EEG signals. Discrete wavelet transform has been applied in this research for removing noise from the EEG signal. The effectiveness of the noise removal is quantitatively measured using the Root Mean Square (RMS) Difference. This paper reports on the effectiveness of wavelet transform applied to the EEG signal as a means of removing noise to retrieve important information related to both healthy and epileptic patients. Wavelet-based noise removal on the EEG signal of both healthy and epileptic subjects was performed using four discrete wavelet functions. With the appropriate choice of the wavelet function (WF), it is possible to remove noise effectively and analyze the EEG meaningfully. The results of this study show that the WF Daubechies 8 (db8) provides the best noise removal from the raw EEG signal of healthy patients, while the orthogonal Meyer WF does the same for epileptic patients. This algorithm is intended for FPGA implementation in portable biomedical equipment to detect different brain states in different circumstances.
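
    A minimal sketch of discrete wavelet denoising with db8 is shown below; the soft universal threshold is an assumption for illustration, since the paper reports the RMS difference but not the exact thresholding rule.

```python
import numpy as np
import pywt

def wavelet_denoise(eeg, wavelet="db8", level=5):
    """Soft-threshold the detail coefficients of a 1-D EEG trace."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(eeg)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(eeg)]
```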

  18. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    Science.gov (United States)

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless telecardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including a noise-free channel model, a random noise channel model, and a CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments, despite degraded short-term performance on abnormal or contaminated ECG signals.
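
    The two figures of merit such studies typically report, compression ratio and percentage root-mean-square difference (PRD), are straightforward to compute. Definitions vary slightly across papers (e.g., mean-subtracted PRD variants), so the following is one common convention rather than necessarily the one used here.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """Ratio of raw to compressed bit counts."""
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """Percentage root-mean-square difference between signals."""
    x = np.asarray(original, dtype=float)
    e = x - np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum(e ** 2) / np.sum(x ** 2))
```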

  19. A New Wavelet-Based ECG Delineator for the Evaluation of the Ventricular Innervation.

    Science.gov (United States)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte; Sorensen, Helge Bjarup Dissing

    2017-01-01

    T-wave amplitude (TWA) has been proposed as a marker of the innervation of the myocardium. Until now, TWA has been calculated manually or with poor algorithms, making its use inefficient in a clinical environment. We introduce a new wavelet-based algorithm for the delineation of QRS complexes and T-waves, and for the automatic calculation of TWA. When validated on the MIT/BIH Arrhythmia database, the QRS detector achieved a sensitivity and positive predictive value of 99.84% and 99.87%, respectively. The algorithm was also validated on the QT database, where it achieved a sensitivity of 99.50% for T-peak detection. In addition, the algorithm achieved delineation accuracy similar to the differences in delineation between expert cardiologists. We applied the algorithm to evaluate the influence on TWA of anticholinergic and antiadrenergic drugs (i.e., atropine and metoprolol) in healthy subjects. We found that TWA decreased significantly with atropine and that metoprolol caused a significant increase in TWA, confirming the clinical hypothesis that TWA is a marker of the innervation of the myocardium. The results of this paper show that the proposed algorithm can be used as a useful and efficient tool in clinical practice for the automatic calculation of TWA and its interpretation as a non-invasive marker of autonomic ventricular innervation.

  20. Investigation of the scaling characteristics of LANDSAT temperature and vegetation data: a wavelet-based approach.

    Science.gov (United States)

    Rathinasamy, Maheswaran; Bindhu, V M; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh

    2017-10-01

    An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, l-moments were used instead of conventional simple moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
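
    A simple numerical check of such scaling is to regress the log of the detail-coefficient variance in each sub-band against the dyadic level. The sketch below uses plain second moments as a stand-in for the paper's l-moments, and the wavelet choice is a placeholder.

```python
import numpy as np
import pywt

def scaling_slopes(field, wavelet="haar", levels=5):
    """Log2-variance vs. dyadic level slope for the H, V and D sub-bands
    of a 2-D field; roughly constant slopes suggest self-similarity."""
    detail_bands = pywt.wavedec2(field, wavelet, level=levels)[1:]
    scales = np.arange(levels, 0, -1)        # coarsest detail band first
    out = {}
    for idx, name in enumerate(("H", "V", "D")):
        logvar = [np.log2(bands[idx].var()) for bands in detail_bands]
        out[name] = np.polyfit(scales, logvar, 1)[0]
    return out
```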

  2. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Angel D. Sappa

    2016-06-01

    Full Text Available This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  3. A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver

    KAUST Repository

    Liu, Yang

    2015-10-26

    © 2015 IEEE. The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it substantially reduces the computational and memory costs of transient analysis; the precise complexity estimates, in terms of the numbers of temporal and spatial unknowns Nt and Ns, are derived in Ergin et al. (IEEE Trans. Antennas Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving half a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.

  4. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    Science.gov (United States)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to the difference of crude oil prices. The understanding of the pattern of the crude oil prices' mutual propagation is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to find different subseries based on an optimal decomposing scale to describe the periodical feature of the original oil price time series. Second, a complex network model was constructed based on an optimal threshold selection to describe the structural feature of multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find the probability causal relationship based on periodical structural features to describe the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.
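
    A toy version of the first two stages (wavelet subseries extraction, then a thresholded correlation network) might look as follows; the wavelet, level and threshold are placeholders, and the Bayesian-network stage is omitted.

```python
import numpy as np
import pywt

def wavelet_correlation_network(price_matrix, wavelet="db4", level=3, tau=0.7):
    """Adjacency matrix between price series built from correlations of
    one reconstructed detail subseries per series (rows of price_matrix)."""
    details = []
    for series in price_matrix:
        coeffs = pywt.wavedec(series, wavelet, level=level)
        details.append(pywt.upcoef("d", coeffs[1], wavelet,
                                   level=level, take=len(series)))
    corr = np.corrcoef(details)
    adjacency = (np.abs(corr) >= tau).astype(int)
    np.fill_diagonal(adjacency, 0)
    return adjacency
```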

  5. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
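
    The univariate box-whisker rule is simple to state in code; the whisker factor k = 1.5 is the conventional choice and an assumption here.

```python
import numpy as np

def boxplot_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)
```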

  6. Supervised method to build an atlas database for multi-atlas segmentation-propagation

    Science.gov (United States)

    Shen, Kaikai; Bourgeat, Pierrick; Fripp, Jurgen; Mériaudeau, Fabrice; Ames, David; Ellis, Kathryn A.; Masters, Colin L.; Villemagne, Victor L.; Rowe, Christopher C.; Salvado, Olivier

    2010-03-01

    Multiatlas-based segmentation-propagation approaches have been shown to obtain accurate parcellation of brain structures. However, this approach requires a large number of manually delineated atlases, which are often not available. We propose a supervised method to build a population-specific atlas database, using the publicly available Internet Brain Segmentation Repository (IBSR). The set of atlases grows iteratively as new atlases are added, so that its segmentation capability may be enhanced in the multiatlas-based approach. Using a dataset of 210 MR images of elderly subjects (170 elderly controls, 40 Alzheimer's disease) from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study, 40 MR images were segmented to build a population-specific atlas database for the purpose of multiatlas segmentation-propagation. The population-specific atlases were used to segment the elderly population of 210 MR images, and were evaluated in terms of the agreement among the propagated labels. The agreement was measured by using the entropy H of the probability image produced when fused by the voting rule and the partial moment μ2 of the histogram. Compared with using IBSR atlases, the population-specific atlases obtained a higher agreement when dealing with images of elderly subjects.
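
    The voting entropy H used as an agreement measure can be computed voxelwise as follows; this is a generic implementation, not the authors' code.

```python
import numpy as np

def fusion_entropy(label_maps):
    """Voxelwise entropy of the label-probability image obtained by voting
    among propagated atlas labels; lower mean entropy = better agreement."""
    stack = np.stack(label_maps)                      # (n_atlases, ...)
    p = np.stack([(stack == lab).mean(axis=0) for lab in np.unique(stack)])
    with np.errstate(divide="ignore", invalid="ignore"):
        return -np.sum(np.where(p > 0, p * np.log2(p), 0.0), axis=0)
```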

  7. Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI

    Directory of Open Access Journals (Sweden)

    N. Sauwen

    2016-01-01

    Full Text Available Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with a different acquisition protocol were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice-scores for the pathologic tissue classes on both datasets.

  8. Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI.

    Science.gov (United States)

    Sauwen, N; Acou, M; Van Cauter, S; Sima, D M; Veraart, J; Maes, F; Himmelreich, U; Achten, E; Van Huffel, S

    2016-01-01

    Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with a different acquisition protocol were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice-scores for the pathologic tissue classes on both datasets.
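
    As a baseline illustration of the NMF idea (plain NMF, not the hierarchical variant the paper favours), voxels with stacked non-negative MP-MRI features can be factorized and hard-assigned with scikit-learn:

```python
from sklearn.decomposition import NMF

def nmf_tissue_labels(features, n_tissues=3):
    """features: (n_voxels, n_features) array, all non-negative. Assign
    each voxel to the NMF source with the largest abundance."""
    model = NMF(n_components=n_tissues, init="nndsvda", max_iter=500)
    abundances = model.fit_transform(features)   # (n_voxels, n_tissues)
    return abundances.argmax(axis=1)
```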

  9. Quantitative assessment of MS plaques and brain atrophy in multiple sclerosis using semiautomatic segmentation method

    Science.gov (United States)

    Heinonen, Tomi; Dastidar, Prasun; Ryymin, Pertti; Lahtinen, Antti J.; Eskola, Hannu; Malmivuo, Jaakko

    1997-05-01

    Quantitative magnetic resonance (MR) imaging of the brain is useful in multiple sclerosis (MS) in order to obtain reliable indices of disease progression. The goal of this project was to estimate the total volume of gliotic and non-gliotic plaques in chronic progressive multiple sclerosis with the help of a semiautomatic segmentation method developed at the Ragnar Granit Institute. The developed program, running on a PC-based computer, provides displays of the segmented data in addition to the volumetric analyses. The volumetric accuracy of the program was demonstrated by segmenting MR images of fluid-filled syringes. An anatomical atlas is to be incorporated in the segmentation system to estimate the distribution of MS plaques in various neural pathways of the brain. A total package including MS plaque volume estimation, estimation of brain atrophy and ventricular enlargement, and the distribution of MS plaques in different neural segments of the brain has been planned for the near future. Our study confirmed that total lesion volumes in chronic MS show a poor correlation to EDSS scores but a positive correlation to neuropsychological scores. Therefore, accurate total volume measurements of MS plaques using the developed semiautomatic segmentation technique helped us to evaluate the degree of neuropsychological impairment.

  10. Segmentation of rodent whole-body dynamic PET images: an unsupervised method based on voxel dynamics

    DEFF Research Database (Denmark)

    Maroy, Renaud; Boisgard, Raphaël; Comtat, Claude

    2008-01-01

    Positron emission tomography (PET) is a useful tool for pharmacokinetics studies in rodents during the preclinical phase of drug and tracer development. However, rodent organs are small as compared to the scanner's intrinsic resolution and are affected by physiological movements. We present a new...... method for the segmentation of rodent whole-body PET images that takes these two difficulties into account by estimating the pharmacokinetics far from organ borders. The segmentation method proved efficient on whole-body numerical rat phantom simulations, including 3-14 organs, together...

  11. Wavelet-based study of valence–arousal model of emotions on EEG signals with LabVIEW

    OpenAIRE

    Guzel Aydin, Seda; Kaya, Turgay; Guler, Hasan

    2016-01-01

    This paper illustrates wavelet-based feature extraction for emotion assessment using electroencephalogram (EEG) signals through graphical coding design. A two-dimensional (valence–arousal) emotion model was studied. Different emotions (happy, joy, melancholy, and disgust) were studied for assessment. These emotions were stimulated by video clips. EEG signals obtained from four subjects were decomposed into five frequency bands (gamma, beta, alpha, theta, and delta) using 'db5' wavelet functi...

  12. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    Science.gov (United States)

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for the segmentation of nuclear arrangements in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for graph generation, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangements in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  13. Monte Carlo methods for optimizing the piecewise constant Mumford-Shah segmentation model

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi; Sashida, Satoshi; Okabe, Yutaka [Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397 (Japan); Lee, Hwee Kuan, E-mail: leehk@bii.a-star.edu.sg [Bioinformatics Institute, 30 Biopolis Street, No. 07-01, Matrix, Singapore 138671 (Singapore)

    2011-02-15

    Natural images are depicted in a computer as pixels on a square grid and neighboring pixels are generally highly correlated. This representation can be mapped naturally to a statistical physics framework on a square lattice. In this paper, we developed an effective use of statistical mechanics to solve the image segmentation problem, which is an outstanding problem in image processing. Our Monte Carlo method using several advanced techniques, including block-spin transformation, Eden clustering and simulated annealing, seeks the solution of the celebrated Mumford-Shah image segmentation model. In particular, the advantage of our method is prominent for the case of multiphase segmentation. Our results verify that statistical physics can be a very efficient approach for image processing.
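
    A bare-bones Metropolis/simulated-annealing sketch of the piecewise constant Mumford-Shah (Potts-like) energy is given below; the paper's block-spin and Eden-clustering accelerations are omitted, and the schedule constants are arbitrary.

```python
import numpy as np

def mc_segment(image, n_labels=3, beta=1.0, sweeps=50, t0=1.0):
    """Anneal labels to minimize sum (I - mu_label)^2 plus beta times the
    number of disagreeing 4-neighbour pairs (a boundary-length proxy)."""
    h, w = image.shape
    labels = np.random.randint(n_labels, size=(h, w))
    mus = np.array([image[labels == k].mean() if (labels == k).any()
                    else image.mean() for k in range(n_labels)])
    for s in range(sweeps):
        T = t0 * 0.95 ** s                            # cooling schedule
        for _ in range(h * w):
            i, j = np.random.randint(h), np.random.randint(w)
            old, new = labels[i, j], np.random.randint(n_labels)
            nbrs = [labels[x, y] for x, y in
                    ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < h and 0 <= y < w]
            dE = ((image[i, j] - mus[new]) ** 2
                  - (image[i, j] - mus[old]) ** 2
                  + beta * (sum(n != new for n in nbrs)
                            - sum(n != old for n in nbrs)))
            if dE <= 0 or np.random.rand() < np.exp(-dE / T):
                labels[i, j] = new
        mus = np.array([image[labels == k].mean() if (labels == k).any()
                        else mus[k] for k in range(n_labels)])
    return labels
```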

  14. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    Full Text Available The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
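
    To make the clustering core concrete, here is a plain fuzzy c-means sketch on grayscale values; the defaults are illustrative, and the paper's enhanced spatial membership function is not included.

```python
import numpy as np

def fcm(pixels, n_clusters=2, m=2.0, iters=100, eps=1e-5):
    """Standard FCM membership/center updates on a flat gray-value array."""
    x = np.asarray(pixels, dtype=float).reshape(-1, 1)
    u = np.random.dirichlet(np.ones(n_clusters), size=len(x))
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.abs(x - centers.T) + 1e-12             # (N, K) distances
        p = 2.0 / (m - 1.0)
        new_u = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
        if np.abs(new_u - u).max() < eps:
            return new_u, centers
        u = new_u
    return u, centers
```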

  15. Segmentation of Brain MRI Using SOM-FCM-Based Method and 3D Statistical Descriptors

    Directory of Open Access Journals (Sweden)

    Andrés Ortiz

    2013-01-01

    Full Text Available Current medical imaging systems provide excellent spatial resolution, high tissue contrast, and up to 65535 intensity levels. Thus, image processing techniques which aim to exploit the information contained in the images are necessary for using these images in computer-aided diagnosis (CAD) systems. Image segmentation may be defined as the process of parcelling the image to delimit the different neuroanatomical tissues present in the brain. In this paper we propose a segmentation technique using 3D statistical features extracted from the volume image. In addition, the presented method is based on unsupervised vector quantization and fuzzy clustering techniques and does not use any a priori information. The resulting fuzzy segmentation method addresses the problem of the partial volume effect (PVE) and has been assessed using real brain images from the Internet Brain Image Repository (IBSR).

  16. A Simulation Study on Segmentation Methods of the Soil Aggregate Microtomographic Images

    Science.gov (United States)

    Wang, W.; Kravchenko, A.; Ananyeva, K.; Smucker, A.; Lim, C.; Rivers, M.

    2009-05-01

    Advances in X-ray microtomography open up a new way of examining the internal structures of soil aggregates in 3D space with a resolution of only several microns. However, processing methods for X-ray soil images that yield reliable representations of pore geometries within aggregates remain to be established. Multiple segmentation algorithms have been applied to separate gray-scale images into pores and solid material. Segmentation of soil volumes requires a combination of multiple interactive algorithms that identify specific properties of the studied features of each volume. Additionally, similar 3D objects with known pore geometries and connectivities are needed to provide the specific information that identifies the most accurate segmentation of microtomographic images. The objective of this study was to compare the performance of segmentation methods on simulated soil aggregate images with various porosities, used as ground-truth standards. Simulations of the soil aggregate images were conducted on the pore and solid spaces separately. For the pore space, taking into consideration partial volume and other pronounced artifacts, several layers of pores at different scales were created and overlaid, and random Gaussian noise was added. For the solid space, an LU decomposition technique on a Gaussian random field with a specified mean and covariance structure was applied to a conditional data set of the known pore space. Several different kinds of segmentation methods, namely entropy-based methods, indicator kriging methods and clustering methods, were examined and compared based on thresholding criteria such as the non-uniformity measure and the misclassification error. Majority filtering was applied to smooth the resulting images. We found that clustering methods uniformly outperformed the two other methods, especially in the relatively low porosity cases. Moreover, the indicator kriging method performs better in high porosity cases, however, its

  17. A novel multiphoton microscopy images segmentation method based on superpixel and watershed.

    Science.gov (United States)

    Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong

    2017-04-01

    Multiphoton microscopy (MPM) imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows excellent performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel multiphoton microscopy image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels, instead of pixels, to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space and phase congruency features, divides the images into patches that keep the details of the cell boundaries. The superpixels are then used to reconstruct new images by defining the average value of each superpixel as the image pixel intensity level. Finally, the marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
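
    The broad SLIC-then-watershed pipeline can be sketched with scikit-image (0.19+ API); the paper's custom distance metric and phase-congruency features are not reproduced, and the marker rule below is a placeholder.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import slic, watershed

def msw_like(image, n_segments=400):
    """SLIC superpixels -> mean-value reconstruction -> marker-controlled
    watershed on the gradient of the reconstructed image."""
    sp = slic(image, n_segments=n_segments, channel_axis=None)
    recon = np.zeros_like(image, dtype=float)
    for lab in np.unique(sp):
        recon[sp == lab] = image[sp == lab].mean()
    gradient = sobel(recon)
    markers, _ = ndi.label(gradient < 0.05 * gradient.max())
    return watershed(gradient, markers)
```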

  18. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Directory of Open Access Journals (Sweden)

    Pall Jens Reynisson

    Full Text Available Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. Visualization during navigated bronchoscopy, the segmentation time and the methods, differ. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surfaces. CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland) and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against a reference segmentation. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centrelines in all datasets. We compared the different procedures by user interactions, such as the number of clicks needed to process the data, and by quantitative measures concerning the quality of the segmentation and centrelines, such as total length of the branches, number of branches, number of generations, and volume of the 3D model. The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM is the software which demands the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results. However, the TSF method provided results fastest regarding the number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and it provides the user with both segmentation of the airways and the

  19. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Science.gov (United States)

    Reynisson, Pall Jens; Scali, Marta; Smistad, Erik; Hofstad, Erlend Fagertun; Leira, Håkon Olav; Lindseth, Frank; Nagelhus Hernes, Toril Anita; Amundsen, Tore; Sorger, Hanne; Langø, Thomas

    2015-01-01

    Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. Visualization during navigated bronchoscopy, the segmentation time and the methods, differ. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surfaces. CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland) and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against a reference segmentation. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centrelines in all datasets. We compared the different procedures by user interactions, such as the number of clicks needed to process the data, and by quantitative measures concerning the quality of the segmentation and centrelines, such as total length of the branches, number of branches, number of generations, and volume of the 3D model. The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM is the software which demands the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results. However, the TSF method provided results fastest regarding the number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and it provides the user with both segmentation of the airways and the centerlines

  20. Wavelet-based methods for the analysis of fMRI time series

    NARCIS (Netherlands)

    Wink, Alle Meije

    2004-01-01

    The term functional neuroimaging refers to the field that uses modern imaging techniques, such as magnetic resonance imaging (MRI), positron emission tomography (PET) and electroencephalography (EEG), to visualize neural processes. Within this field,

  1. 3D Inversion of Magnetic Data through Wavelet based Regularization Method

    Directory of Open Access Journals (Sweden)

    Maysam Abedi

    2015-06-01

    Full Text Available This study deals with the 3D recovery of a magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. Tikhonov cost functions with two sparsity functions were used to recover the smooth parts as well as the sharp boundaries of the model parameters. A pre-selected basis, namely the wavelet basis, can recover the regions of smooth behaviour of the susceptibility distribution, while the Haar or finite-difference (FD) domains yield a solution with rough boundaries. Therefore, a regularizer function which benefits from the advantages of both the wavelet and Haar/FD operators in representing the 3D magnetic susceptibility distribution was chosen as a candidate for modeling magnetic anomalies. The optimum wavelet and the parameter β, which controls the weight of the two sparsifying operators, were also considered. The algorithm assumed that there was no remanent magnetization and that the observed magnetometry data represent only the induced magnetization effect. The proposed approach was applied to noise-corrupted synthetic data in order to demonstrate its suitability for 3D inversion of magnetic data. After obtaining satisfactory results, a case study pertaining to ground-based measurements of the magnetic anomaly over a porphyry-Cu deposit (the Now Chun deposit, located in the Kerman province of Iran) was presented and 3D inverted. The low susceptibility in the constructed model coincides with the known location of the copper ore mineralization.
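
    The abstract does not reproduce the cost function itself; given the stated ingredients (a data misfit plus β-weighted wavelet and Haar/FD sparsity terms), a plausible form, offered as an assumption rather than the authors' exact objective, is

```latex
\min_{\mathbf{m}} \;
\left\lVert \mathbf{W}_d \left( \mathbf{d}^{\mathrm{obs}} - \mathbf{G}\,\mathbf{m} \right) \right\rVert_2^2
+ \lambda \left( \beta \left\lVert \mathbf{W}_{\mathrm{w}}\,\mathbf{m} \right\rVert_1
+ (1-\beta) \left\lVert \mathbf{W}_{\mathrm{FD}}\,\mathbf{m} \right\rVert_1 \right)
```

    where G is the forward operator of the prism mesh, W_w the wavelet transform, W_FD the Haar/finite-difference operator, and β trades smooth regions against sharp boundaries.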

  2. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    Directory of Open Access Journals (Sweden)

    Xin Yuan

    2016-07-01

    Full Text Available The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop
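
    For reference, the baseline that the paper improves on is classical Otsu thresholding, which maximizes the between-class variance of the gray-level histogram; a minimal NumPy version follows (the paper's improvement and the contour-detection step are not reproduced).

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Gray level maximizing the between-class variance."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                           # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return edges[np.nanargmax(sigma_b) + 1]
```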

  3. Analytic Study of the Tadoma Method: Effects of Hand Position on Segmental Speech Perception.

    Science.gov (United States)

    Reed, Charlotte M.; And Others

    1989-01-01

    Small-set segmental identification experiments were conducted with three deaf-blind subjects who were highly experienced users of the Tadoma method. Systematic variations in the positioning of the hand on the speaker's face for Tadoma produced systematic effects on percent-correct scores, information transfer, and perception of individual…

  4. An Evaluation of Research Replication with Q Method and Its Utility in Market Segmentation.

    Science.gov (United States)

    Adams, R. C.

    Precipitated by questions of using Q methodology in television market segmentation and of the replicability of such research, this paper reports on both a reexamination of 1968 research by Joseph M. Foley and an attempt to replicate Foley's study. By undertaking a reanalysis of the Foley data, the question of replication in Q method is addressed.…

  5. Level set method coupled with Energy Image features for brain MR image segmentation.

    Science.gov (United States)

    Punga, Mirela Visan; Gaurav, Rahul; Moraru, Luminita

    2014-06-01

    Noise and intensity inhomogeneity are considered two of the major challenges in brain magnetic resonance (MR) image segmentation. This paper introduces an energy image feature approach for intensity inhomogeneity correction. Our segmentation approach takes advantage of image features while preserving the advantages of level set methods within the region-based active contour framework. The energy image feature is a new image obtained from the original image by replacing each pixel value with the local energy value computed over a 3×3 mask. The performance and utility of the energy image features were tested and compared through two different variants of level set methods: one the encompassed local and global intensity fitting method, and the other the selective binary and Gaussian filtering regularized level set method. The reported results demonstrate the flexibility of the energy image feature to adapt to the level set segmentation framework and to perform the challenging task of brain lesion segmentation in a rather robust way.
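
    Assuming "local energy" means the sum of squared intensities in the mask (one common definition; the paper may use another), the energy image is a one-liner with SciPy:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def energy_image(image, size=3):
    """Sum of squared intensities in a size x size neighbourhood."""
    img = np.asarray(image, dtype=float)
    return uniform_filter(img ** 2, size=size) * size * size
```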

  6. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    Science.gov (United States)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of the laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, which each contain the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, and later image processing tools adapted to voxel structures allows the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  7. AN AUTOMATIC METHOD FOR GEOMETRIC SEGMENTATION OF MASONRY ARCH BRIDGES FOR STRUCTURAL ENGINEERING PURPOSES

    Directory of Open Access Journals (Sweden)

    B. Riveiro

    2016-06-01

    Full Text Available Despite the tremendous advantages of the laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, which each contain the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, and later image processing tools adapted to voxel structures allows the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  8. Robust segmentation methods with an application to aortic pulse wave velocity calculation.

    Science.gov (United States)

    Babin, Danilo; Devos, Daniel; Pižurica, Aleksandra; Westenberg, Jos; Vansteenkiste, Ewout; Philips, Wilfried

    2014-04-01

    Aortic stiffness has proven to be an important diagnostic and prognostic factor for many cardiovascular diseases, as well as an estimate of overall cardiovascular health. Pulse wave velocity (PWV) represents a good measure of aortic stiffness, while aortic distensibility is used as an aortic elasticity index. Obtaining the PWV and the aortic distensibility from magnetic resonance imaging (MRI) data requires diverse segmentation tasks, namely the extraction of the aortic centerline and the segmentation of aortic regions, combined with signal processing methods for the analysis of the pulse wave. In our study, non-contrasted MRI images of the abdomen were used in healthy volunteers (22 data sets) for the sake of non-invasive analysis, and contrasted magnetic resonance (MR) images were used for the aortic examination of Marfan syndrome patients (8 data sets). In this research we present a novel robust segmentation technique for PWV and aortic distensibility calculation as a complete image processing toolbox. We introduce a novel graph-based method for the centerline extraction of the thoraco-abdominal aorta for length calculation from 3-D MRI data, robust to artifacts and noise. Moreover, we design a new projection-based segmentation method for transverse aortic region delineation in cardiac magnetic resonance (CMR) images which is robust to a high presence of artifacts. Finally, we propose a novel method for the analysis of velocity curves in order to obtain pulse wave propagation times. In order to validate the proposed method we compared the obtained results with manually determined aortic centerlines and a region segmentation by an expert, while the results of the PWV measurement were compared to validated software (LUMC, Leiden, the Netherlands). The obtained results show the high correctness and effectiveness of our method for aortic PWV and distensibility calculation. Copyright © 2013 Elsevier Ltd. All rights reserved.
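
    Once the centerline length and the pulse arrival times at two aortic planes are available, the PWV itself reduces to path length over transit time; a trivial helper with hypothetical argument names:

```python
def pulse_wave_velocity(centerline_length_m, t_proximal_s, t_distal_s):
    """PWV in m/s from centerline path length and pulse transit time."""
    return centerline_length_m / (t_distal_s - t_proximal_s)
```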

  9. A Combinational Clustering Based Method for cDNA Microarray Image Segmentation.

    Science.gov (United States)

    Shao, Guifang; Li, Tiejun; Zuo, Wangda; Wu, Shunxiang; Liu, Tundong

    2015-01-01

    Microarray technology plays an important role in drawing useful biological conclusions by analyzing thousands of gene expressions simultaneously. In particular, image analysis is a key step in microarray analysis and its accuracy strongly depends on segmentation. The pioneering works on clustering-based segmentation have shown that the k-means clustering algorithm and the moving k-means clustering algorithm are two commonly used methods in microarray image processing. However, they often produce unsatisfactory results because real microarray images contain noise, artifacts and spots that vary in size, shape and contrast. To improve the segmentation accuracy, in this article we present a combinational clustering based segmentation approach that may be more reliable and able to segment spots automatically. First, this new method starts with a very simple but effective contrast enhancement operation to improve the image quality. Then, an automatic gridding based on the maximum between-class variance is applied to separate the spots into independent areas. Next, within each spot region, the moving k-means clustering is first conducted to separate the spot from the background, and then the k-means clustering algorithms are combined for those spots failing to obtain the entire boundary. Finally, a refinement step is used to replace the false segmentations and the inseparable ones of missing spots. In addition, quantitative comparisons between the improved method and four other segmentation algorithms (edge detection, thresholding, k-means clustering and moving k-means clustering) are carried out on cDNA microarray images from six different data sets. Experiments on six different data sets, 1) Stanford Microarray Database (SMD), 2) Gene Expression Omnibus (GEO), 3) Baylor College of Medicine (BCM), 4) Swiss Institute of Bioinformatics (SIB), 5) Joe DeRisi's individual tiff files (DeRisi), and 6) University of California, San Francisco (UCSF), indicate that the improved approach is

  10. A Combinational Clustering Based Method for cDNA Microarray Image Segmentation.

    Directory of Open Access Journals (Sweden)

    Guifang Shao

    Full Text Available Microarray technology plays an important role in drawing useful biological conclusions by analyzing thousands of gene expressions simultaneously. In particular, image analysis is a key step in microarray analysis and its accuracy strongly depends on segmentation. The pioneering works on clustering-based segmentation have shown that the k-means clustering algorithm and the moving k-means clustering algorithm are two commonly used methods in microarray image processing. However, they often produce unsatisfactory results because real microarray images contain noise, artifacts and spots that vary in size, shape and contrast. To improve the segmentation accuracy, in this article we present a combinational clustering based segmentation approach that may be more reliable and able to segment spots automatically. First, this new method starts with a very simple but effective contrast enhancement operation to improve the image quality. Then, an automatic gridding based on the maximum between-class variance is applied to separate the spots into independent areas. Next, within each spot region, the moving k-means clustering is first conducted to separate the spot from the background, and then the k-means clustering algorithms are combined for those spots failing to obtain the entire boundary. Finally, a refinement step is used to replace the false segmentations and the inseparable ones of missing spots. In addition, quantitative comparisons between the improved method and four other segmentation algorithms (edge detection, thresholding, k-means clustering and moving k-means clustering) are carried out on cDNA microarray images from six different data sets: 1) Stanford Microarray Database (SMD), 2) Gene Expression Omnibus (GEO), 3) Baylor College of Medicine (BCM), 4) Swiss Institute of Bioinformatics (SIB), 5) Joe DeRisi's individual tiff files (DeRisi), and 6) University of California, San Francisco (UCSF). The results indicate that the improved

  11. Normal Vector Projection Method used for Convex Optimization of Chan-Vese Model for Image Segmentation

    Science.gov (United States)

    Wei, W. B.; Tan, L.; Jia, M. Q.; Pan, Z. K.

    2017-01-01

    The variational level set method is one of the main methods of image segmentation. Because signed distance functions used as level sets must preserve their defining properties through numerical remedies or additional techniques during the evolution, it is not very efficient. In this paper, a normal vector projection method for image segmentation using the Chan-Vese model is proposed. An equivalent formulation of the Chan-Vese model is used by taking advantage of the properties of binary level set functions and combining them with the concept of convex relaxation. A threshold method and a projection formula are applied in the implementation. This avoids the above problems and yields a globally optimal solution. Experimental results on both synthetic and real images validate the effectiveness of the proposed normal vector projection method, and show advantages over traditional algorithms in terms of computational efficiency.
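
    For comparison, scikit-image ships the baseline (non-convex) Chan-Vese model; the paper's convex-relaxation and projection scheme is not available there, so the call below (0.19+ API) only illustrates the starting point.

```python
from skimage import img_as_float
from skimage.data import camera
from skimage.segmentation import chan_vese

image = img_as_float(camera())
# Binary segmentation by the classical Chan-Vese active contour model.
seg = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0,
                max_num_iter=200, init_level_set="checkerboard")
```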

  12. Segmentation Method for Magnetic Resonance-Guided High-Intensity Focused Ultrasound Therapy Planning

    Directory of Open Access Journals (Sweden)

    A. Vargas-Olivares

    2017-01-01

    Full Text Available High-intensity focused ultrasound (HIFU) is a minimally invasive therapy modality in which ultrasound beams are concentrated at a focal region, producing a rise in temperature and selective ablation within the focal volume while leaving surrounding tissues intact. HIFU has been proposed for the safe ablation of both malignant and benign tissues and as an agent for drug delivery. Magnetic resonance imaging (MRI) has been proposed as a guidance and monitoring method for the therapy. The identification of regions of interest is a crucial procedure in HIFU therapy planning, and this procedure is performed on the MR images. The purpose of the present research work is to implement a time-efficient and functional segmentation scheme, based on the watershed segmentation algorithm, for the MR images used for HIFU therapy planning. A segmentation process with functional results is feasible, but preliminary image processing steps are required in order to define the markers for the segmentation algorithm. Moreover, the segmentation scheme is applied in parallel to an MR image data set through the use of a thread pool, achieving near real-time execution and contributing to solving the time-consuming problem of HIFU therapy planning.
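
    The thread-pool parallelization over slices can be sketched with the standard library; the per-slice marker rule below is a placeholder for the paper's preliminary marker-definition steps.

```python
from concurrent.futures import ThreadPoolExecutor
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_slice(img):
    """Marker-controlled watershed for one MR slice."""
    gradient = sobel(img.astype(float))
    markers, _ = ndi.label(gradient < 0.1 * gradient.max())
    return watershed(gradient, markers)

def segment_series(slices, workers=4):
    """Apply the per-slice segmentation across a data set in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(segment_slice, slices))
```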

  13. Yet Another Method for Image Segmentation based on Histograms and Heuristics

    Directory of Open Access Journals (Sweden)

    Horia-Nicolai L. Teodorescu

    2012-07-01

    Full Text Available We introduce a method for image segmentation that requires little computation yet provides results comparable to those of other methods. While the proposed method resembles known histogram-based approaches, it differs in how it uses the gray-level distribution. When several heuristic rules are added to the basic procedure, the method produces results that, in some cases, may outperform those of the known methods. The paper reports preliminary results. More details on the method, improvements, and results will be presented in a future paper.
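
    In the same spirit, a toy histogram heuristic might smooth the gray-level histogram, locate its two dominant modes, and threshold at the deepest valley between them; this sketch only illustrates the general histogram-plus-heuristics idea, not the authors' specific rules.

```python
import numpy as np
from scipy.signal import find_peaks

def valley_threshold(values, bins=256):
    """Threshold at the deepest valley between the two dominant
    histogram modes (toy heuristic)."""
    hist, edges = np.histogram(values, bins=bins)
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # crude smoothing
    peaks, _ = find_peaks(smooth, distance=bins // 8)          # separated modes
    lo, hi = np.sort(peaks[np.argsort(smooth[peaks])[-2:]])    # two highest peaks
    valley = lo + np.argmin(smooth[lo:hi + 1])                 # deepest point between
    return edges[valley]

rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(0.3, 0.05, 5000),
                         rng.normal(0.7, 0.05, 5000)])
mask = values > valley_threshold(values)
```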

  14. A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression.

    Science.gov (United States)

    Shi, Yinghuan; Gao, Yaozong; Liao, Shu; Zhang, Daoqiang; Gao, Yang; Shen, Dinggang

    2016-01-15

    In recent years, there has been great interest in prostate segmentation, which is an important and challenging task in CT image guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas-based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been substantially evaluated on a real prostate CT dataset including 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state of the art in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
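
    The feature-selection-plus-regression idea can be caricatured with an ordinary Lasso from scikit-learn standing in for the paper's transductive tLasso/wLapRLS pair; the feature matrix, labels, and regularization weight below are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))        # per-voxel feature vectors (toy)
w_true = np.zeros(50)
w_true[:5] = 1.0                          # only 5 features are informative
y = (X @ w_true > 0).astype(float)        # prostate/non-prostate labels (toy)

# sparse regression both selects features and produces a likelihood score
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)    # indices of retained features
likelihood = X[:, selected] @ lasso.coef_[selected] + lasso.intercept_
```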

  15. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in image fusion, environment surveillance and change detection. Finding correct feature matches during registration is difficult, especially for remote sensing images with large background variations (e.g., images taken before and after an earthquake or flood). Traditional registration methods based on local intensity often cannot maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), proposed by Akinlar et al. in 2011, is used to extract line segments from the two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, robust to global geometrical distortions, is generated for each line segment based on geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be computed. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is
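
    The descriptor idea, a histogram over the geometry of the remaining segments relative to a given one, can be sketched in plain NumPy; this toy version encodes only relative orientations (the paper's descriptor also encodes locations), and the binning is an assumption.

```python
import numpy as np

def segment_descriptor(segments, idx, n_bins=8):
    """Toy descriptor for one line segment: histogram of the relative
    orientations of all other segments, weighted by their lengths.
    `segments` is an (N, 4) array of (x1, y1, x2, y2) endpoints."""
    d = segments[:, 2:] - segments[:, :2]
    angles = np.arctan2(d[:, 1], d[:, 0]) % np.pi       # undirected orientation
    lengths = np.hypot(d[:, 0], d[:, 1])
    rel = (angles - angles[idx]) % np.pi                # orientation relative to idx
    mask = np.arange(len(segments)) != idx              # exclude the segment itself
    hist, _ = np.histogram(rel[mask], bins=n_bins, range=(0, np.pi),
                           weights=lengths[mask])
    return hist / (hist.sum() + 1e-9)                   # normalize for cross-matching

segs = np.array([[0, 0, 10, 0],     # horizontal
                 [0, 0, 0, 10],     # vertical
                 [5, 5, 10, 10]])   # diagonal
desc = segment_descriptor(segs, idx=0)
```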

  16. Supervised segmentation of phenotype descriptions for the human skeletal phenome using hybrid methods

    Directory of Open Access Journals (Sweden)

    Groza Tudor

    2012-10-01

    Full Text Available Abstract Background Over the course of the last few years, a significant amount of research has been performed on the ontology-based formalization of phenotype descriptions. In order to fully capture the intrinsic value and knowledge expressed within them, we need to take advantage of their inner structure, which implicitly combines qualities and anatomical entities. The first step in this process is the segmentation of the phenotype descriptions into their atomic elements. Results We present a two-phase hybrid segmentation method that combines a series of individual classifiers using different aggregation schemes (set operations and simple majority voting). The approach is tested on a corpus of skeletal phenotype descriptions drawn from the Human Phenotype Ontology. Experimental results show that the best hybrid method achieves an F-score of 97.05% in the first phase and F-scores of 97.16% / 94.50% in the second phase. Conclusions The performance of the initial segmentation of anatomical entities and qualities (phase I) is not affected by the presence or absence of external resources, such as domain dictionaries. From a generic perspective, hybrid methods may not always improve segmentation accuracy, as they are heavily dependent on the goal and data characteristics.
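
    One of the aggregation schemes named above, simple majority voting over per-token predictions, is easy to sketch; the three toy "classifiers" and the label coding are assumptions.

```python
import numpy as np

# toy per-token predictions from three segmenters
# (0 = outside, 1 = anatomical entity, 2 = quality)
preds = np.array([[1, 1, 0, 2, 2, 0],
                  [1, 0, 0, 2, 2, 0],
                  [1, 1, 0, 2, 0, 0]])

# simple majority vote per token: the most frequent label in each column
# (ties resolve to the lower label with bincount/argmax)
vote = np.array([np.bincount(col, minlength=3).argmax() for col in preds.T])
# vote -> [1 1 0 2 2 0]
```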

  17. Quantitative assessment in thermal image segmentation for artistic objects

    Science.gov (United States)

    Yousefi, Bardia; Sfarra, Stefano; Maldague, Xavier P. V.

    2017-07-01

    The application of thermal and infrared technology in different areas of research is increasing considerably. These applications include non-destructive testing (NDT), medical analysis (computer-aided diagnosis/detection, CAD), and arts and archaeology, among many others. In the arts and archaeology field, infrared technology provides significant contributions in terms of finding defects in possibly impaired regions. This has been done through a wide range of thermographic experiments and infrared methods. The approach proposed here focuses on the application of known factor analysis methods, such as standard Non-Negative Matrix Factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1), standard NMF optimized by the non-negative least squares (NNLS) active-set algorithm (SNMF2), and eigendecomposition approaches such as Principal Component Thermography (PCT) and Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT), to obtain thermal features. On one hand, these methods are usually applied as preprocessing before clustering for the segmentation of possible defects. On the other hand, a wavelet-based data fusion combines the data of each method with PCT to increase the accuracy of the algorithm. The quantitative assessment of these approaches indicates considerable segmentation quality along with reasonable computational complexity, showing promising performance and confirming the outlined properties. In particular, a polychromatic wooden statue and a fresco were analyzed using the above-mentioned methods, and interesting results were obtained.
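
    The NMF-based feature extraction is easy to sketch with scikit-learn: its 'mu' solver uses multiplicative updates (roughly the SNMF1 flavor above), while its 'cd' solver is coordinate descent rather than the NNLS active-set algorithm, so the pairing is only approximate. The component count and the synthetic thermal sequence are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# thermal sequence: T frames of an H x W thermogram, flattened to (T, H*W)
rng = np.random.default_rng(1)
frames = np.abs(rng.standard_normal((50, 32 * 32)))

model = NMF(n_components=5, solver="mu", init="nndsvda", max_iter=500)
W = model.fit_transform(frames)       # temporal weights per component
H = model.components_                 # spatial "thermal feature" maps
feature_maps = H.reshape(5, 32, 32)   # candidate defect maps to cluster/segment
```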

  18. A 3-D Active Contour Method for Automated Segmentation of the Left Ventricle From Magnetic Resonance Images.

    Science.gov (United States)

    Hajiaghayi, Mahdi; Groves, Elliott M; Jafarkhani, Hamid; Kheradvar, Arash

    2017-01-01

    This study's objective is to develop and validate a fast automated 3-D segmentation method for cardiac magnetic resonance imaging (MRI). The segmentation algorithm automatically reconstructs cardiac MRI DICOM data into a 3-D model (i.e., direct volumetric segmentation), without relying on prior statistical knowledge. A novel 3-D active contour method was employed to detect the left ventricular cavity in 33 subjects with heterogeneous heart diseases from the York University database. Papillary muscles were identified and added to the chamber using a convex hull of the left ventricle and interpolation. The myocardium was then segmented using a similar 3-D segmentation method according to anatomic information. A multistage approach was taken to determine the method's efficacy. Our method demonstrated a significant improvement in segmentation performance when compared to manual segmentation and other automated methods. A true 3-D reconstruction technique without the need for training datasets or any user-driven segmentation has been developed. In this method, a novel combination of internal and external energy terms for active contour was utilized that exploits histogram matching for improving the segmentation performance. This method takes advantage of full volumetric imaging, does not rely on prior statistical knowledge, and employs a convex-hull interpolation to include the papillary muscles.
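
    The papillary-muscle step, adding the muscles back into the blood pool via a convex hull, can be illustrated in 2-D with scikit-image; the toy mask below is an assumption (the paper operates on 3-D volumes with interpolation).

```python
import numpy as np
from skimage.morphology import convex_hull_image

# toy LV cavity mask with a papillary-muscle-like notch excluded by the contour
cavity = np.zeros((64, 64), dtype=bool)
rr, cc = np.ogrid[:64, :64]
cavity[(rr - 32) ** 2 + (cc - 32) ** 2 < 20 ** 2] = True
cavity[28:36, 40:52] = False            # papillary muscle carved out of the cavity

# the convex hull re-includes the muscle region in the blood pool,
# a 2-D sketch of the paper's convex-hull + interpolation step
filled = convex_hull_image(cavity)
papillary = filled & ~cavity            # recovered papillary region
```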

  19. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method

    Directory of Open Access Journals (Sweden)

    Guannan Chen

    2017-01-01

    Full Text Available As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Automatically extracting the vessel profile from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on DRIVE, and an average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on STARE. The experimental results show that our method is effective and is also robust to some kinds of pathological images compared with traditional level set methods.

  20. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually require visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches based on intensities, this method relies on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods, and k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  1. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible with the previous generation of time-domain OCT. Thus, there is a critical need for three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volumes are obtained by applying a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate a new volume with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of the volume data. We performed automatic segmentation on eight human OCT volumes acquired from a commercial Spectralis OCT system, where each volume contains 97 B-scan images with a resolution of 496×512 (each B-scan comprising 512 A-scans of 496 pixels); experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
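
    The boundary-enhancement step, a linear combination of a smoothed volume and a derivative volume, can be sketched as follows; the Gaussian/Sobel filter choices, the mixing weight, and the toy volume are assumptions, not the paper's exact filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def enhance_boundary(vol, alpha=0.5):
    """Combine a 3-D smoothed volume with a 3-D derivative volume so that
    both intensity and intensity change across a boundary contribute."""
    smooth = gaussian_filter(vol, sigma=1.0)
    deriv = sobel(vol, axis=1)                # derivative along the depth axis
    return alpha * smooth + (1.0 - alpha) * deriv

# toy stand-in volume: B-scans x depth (A-scan direction) x width
vol = np.random.default_rng(0).random((16, 64, 64)).astype(np.float32)
enhanced = enhance_boundary(vol)
boundary_depth = enhanced.argmax(axis=1)      # preliminary boundary per A-scan
```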

  2. A Combined Method for Segmentation and Registration for an Advanced and Progressive Evaluation of Thermal Images

    OpenAIRE

    Barcelos, Emilio Z.; Caminhas, Walmir M.; Ribeiro, Eraldo; Pimenta, Eduardo M.; Palhares, Reinaldo M.

    2014-01-01

    In this paper, a method that combines image analysis techniques, such as segmentation and registration, is proposed for an advanced and progressive evaluation of thermograms. The method is applied for the prevention of muscle injury in high-performance athletes, in collaboration with a Brazilian professional soccer club. The goal is to produce information on spatio-temporal variations of thermograms favoring the investigation of the athletes’ conditions along the competition. The proposed met...

  3. A sport scene images segmentation method based on edge detection algorithm

    Science.gov (United States)

    Chen, Biqing

    2011-12-01

    This paper proposes a simple, fast method for segmenting sports scene images; much work so far has sought ways to reduce shading variations in smooth areas. A novel preprocessing step is proposed to eliminate such shading variations. An internal filling mechanism is then used to relabel pixels enclosed by regions of interest as interest pixels. Tests on sports scene images have confirmed the effectiveness of the method.

  4. Live level set: A hybrid method of livewire and level set for medical image segmentation

    OpenAIRE

    Yao, Jianhua; Chen, David

    2008-01-01

    Livewire and level set are popular methods for medical image segmentation. In this article, the authors propose a hybrid method of livewire and level set, termed the live level set (LLS). The LLS replaces the single graph-update iteration in classic livewire with two iterations of graph updates. The first iteration generates an initial contour for a level set computation. The level set distance is then factored back into the cost function in the second iteration of the graph update. The authors ...

  5. Pigmented skin lesion detection using random forest and wavelet-based texture

    Science.gov (United States)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been increasing rapidly over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward reducing mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objectives of this paper are to propose a hybrid method using random forest and the Gabor wavelet transformation to accurately differentiate lesion from non-lesion areas in dermoscopy photographs, and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees is used for classification. Gabor wavelets are a mathematical model of the visual cortical cells of the mammalian brain, and an image can be decomposed into multiple scales and orientations by using them. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on the Gabor wavelet transformation are computed from the Gabor-filtered image. Experimental results indicate the following: (1) the proposed algorithm based on random forest outperforms the state of the art in pigmented skin lesion detection, and (2) the inclusion of texture features based on the Gabor wavelet transformation improves segmentation accuracy significantly.
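
    A minimal sketch of the record's two ingredients, Gabor texture features feeding a random forest, using scikit-image and scikit-learn; the filter bank, the toy image, and the stand-in pixel labels are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(img, freqs=(0.1, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    """Stack Gabor magnitude responses as per-pixel texture features."""
    feats = []
    for f in freqs:
        for t in thetas:
            re, im = gabor(img, frequency=f, theta=t)
            feats.append(np.hypot(re, im))      # magnitude response
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # toy dermoscopy patch
labels = (np.arange(64 * 64) % 2).astype(int)   # stand-in lesion/non-lesion labels

X = gabor_features(img)
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
lesion_mask = clf.predict(X).reshape(64, 64)
```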

  6. An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.

    Science.gov (United States)

    Qian, Chunjun; Yang, Xiaoping

    2018-01-01

    Carotid artery atherosclerosis is an important cause of stroke, and ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Segmenting atherosclerotic carotid plaque in ultrasound images is therefore an important task, and accurate plaque segmentation is helpful for measuring carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to effectively integrate features from ultrasound images together with the iteratively estimated and refined probability maps for pixel-wise classification. The four classification algorithms were support vector machine with a linear kernel, support vector machine with a radial basis function kernel, AdaBoost, and random forest. The plaque segmentation was implemented in the generated probability map. The performance of the four learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrating random forest with an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which were almost the best compared with those from existing methods. Our proposed learning-based integrated framework investigated in this study could be useful for
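
    The auto-context loop can be caricatured as follows: each round appends a smoothed version of the previous probability map to the appearance features and retrains the classifier. Training and predicting on the same toy pixels, the context filter, and the round count are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def auto_context(X_img, y, n_rounds=3):
    """Minimal auto-context loop (sketch). X_img: (H, W, F) appearance
    features; y: (H, W) binary plaque labels."""
    H, W, F = X_img.shape
    X = X_img.reshape(-1, F)
    prob = np.full((H, W), 0.5)                 # uninformative initial map
    for _ in range(n_rounds):
        ctx = uniform_filter(prob, size=9).reshape(-1, 1)   # context feature
        clf = RandomForestClassifier(n_estimators=50)
        clf.fit(np.hstack([X, ctx]), y.ravel())
        prob = clf.predict_proba(np.hstack([X, ctx]))[:, 1].reshape(H, W)
    return prob > 0.5

rng = np.random.default_rng(0)
feats = rng.random((32, 32, 3))                 # toy appearance features
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1                           # toy plaque region
mask = auto_context(feats, truth)
```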

  7. Computational methods for the image segmentation of pigmented skin lesions: A review.

    Science.gov (United States)

    Oliveira, Roberta B; Filho, Mercedes E; Ma, Zhen; Papa, João P; Pereira, Aledir S; Tavares, João Manuel R S

    2016-07-01

    Because skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods and to outline a comparative analysis with regard to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation. Techniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle. The techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results. The image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies to improve efficiency. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
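
    A loose sketch of the segment-statistics idea: tile the image, collect per-tile mean and standard deviation, fit the trend between them, and flag tiles below the trend as background. The tile size, linear trend, and flagging rule are assumptions, not the published SFT settings.

```python
import numpy as np

def segment_stats(img, tile=16):
    """Tile the image and collect per-tile (mean, std) statistics,
    then flag likely background tiles from the fitted mean-std trend."""
    H, W = img.shape
    tiles = img[:H - H % tile, :W - W % tile].reshape(
        H // tile, tile, W // tile, tile).swapaxes(1, 2).reshape(-1, tile, tile)
    means = tiles.mean(axis=(1, 2))
    stds = tiles.std(axis=(1, 2))
    coef = np.polyfit(means, stds, 1)            # best-fit trend between stats
    resid = stds - np.polyval(coef, means)
    background = resid < 0                       # tiles below trend: background
    return means, stds, background

img = np.random.default_rng(0).random((64, 64))
img[16:32, 16:32] += 2.0                         # synthetic signal region
means, stds, background = segment_stats(img)
```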

  9. Automatic white matter lesion segmentation using an adaptive outlier detection method.

    Science.gov (United States)

    Ong, Kok Haur; Ramachandram, Dhanesh; Mandava, Rajeswari; Shuaib, Ibrahim Lutfi

    2012-07-01

    White matter (WM) lesions are diffuse WM abnormalities that appear as hyperintense (bright) regions in cranial magnetic resonance imaging (MRI). WM lesions are often observed in older populations and are important indicators of stroke, multiple sclerosis, dementia and other brain-related disorders. In this paper, a new automated method for WM lesion segmentation is presented. In the proposed method, the presence of WM lesions is detected as outliers in the intensity distribution of the fluid-attenuated inversion recovery (FLAIR) MR images using an adaptive outlier detection approach. Outliers are detected using a novel adaptive trimmed-mean algorithm and box-whisker plot. In addition, pre- and postprocessing steps are implemented to reduce false positives attributed to MRI artifacts commonly observed in FLAIR sequences. The approach is validated using the cranial MRI sequences of 38 subjects. A significant correlation (R=0.9641, P value=3.12×10⁻³) is observed between the automated approach and manual segmentation by a radiologist. The accuracy of the proposed approach was further validated by comparing the lesion volumes computed using the automated approach with lesions manually segmented by an expert radiologist. Finally, the proposed approach is compared against leading lesion segmentation algorithms using a benchmark dataset. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
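
    The box-whisker flavor of outlier detection can be sketched in a few lines; the whisker factor of 1.5 and the fixed trim fraction below stand in for the paper's adaptive trimmed mean and are assumptions.

```python
import numpy as np

def wmh_outlier_threshold(flair_intensities, trim=0.05):
    """Box-whisker style outlier cutoff on FLAIR intensities (sketch)."""
    x = np.sort(flair_intensities.ravel())
    k = int(trim * x.size)
    core = x[k:x.size - k]                      # trimmed distribution
    q1, q3 = np.percentile(core, [25, 75])
    return q3 + 1.5 * (q3 - q1)                 # upper whisker: hyperintensities

img = np.random.default_rng(2).normal(100, 10, (128, 128))
img[40:44, 60:70] += 80                         # synthetic bright lesion
lesion_mask = img > wmh_outlier_threshold(img)
```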

  10. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region as pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  11. CT Metal Artifact Reduction Method Based on Improved Image Segmentation and Sinogram In-Painting

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2012-01-01

    Full Text Available The streak artifacts caused by metal implants degrade image quality and limit the applications of CT imaging. The standard method used to reduce these metallic artifacts often consists of interpolating the missing projection data, but the result is often a loss of image quality with additional artifacts in the whole image. This paper proposes a new strategy based on a three-stage process: (1) the application of a large-scale non-local means filter (LS-NLM) to suppress the noise and enhance the original CT image, (2) the segmentation of metal artifacts and metallic objects using a mutual information maximized segmentation algorithm (MIMS), and (3) a modified exemplar-based in-painting technique to restore the corrupted projection data in the sinogram. The final corrected image is then obtained by merging the segmented metallic object image with the filtered back-projection (FBP) image reconstructed from the in-painted sinogram. Quantitative and qualitative experiments have been conducted on both a simulated phantom and clinical CT images, and a comparative study has been conducted with Bal's algorithm, which proposed a similar segmentation-based method.

  12. A fragment based method for modeling of protein segments into cryo-EM density maps.

    Science.gov (United States)

    Ismer, Jochen; Rose, Alexander S; Tiemann, Johanna K S; Hildebrand, Peter W

    2017-11-13

    Single-particle analysis of electron cryo-microscopy (cryo-EM) is a key technology for the elucidation of macromolecular structures. Recent technical advances in hardware and software have significantly enhanced the resolution of cryo-EM density maps and broadened the applicability and the circle of users. To facilitate the modeling of macromolecules into cryo-EM density maps, fast and easy-to-use modeling methods are now in demand. Here we investigated and benchmarked the suitability of a classical and well-established fragment-based approach for modeling segments into cryo-EM density maps (termed FragFit). FragFit uses a hierarchical strategy to select fragments from a pre-calculated set of billions of fragments derived from structures deposited in the Protein Data Bank, based on sequence similarity, fit of stem atoms, and fit to the cryo-EM density map. The user only has to specify the sequence of the segment and the numbers of the N- and C-terminal stem residues in the protein. Using a representative data set of protein structures, we show that protein segments can be accurately modeled into cryo-EM density maps of different resolutions by FragFit. Prediction quality depends on segment length, the type of secondary structure of the segment, and the local quality of the map. Fast and automated calculation renders FragFit applicable in interactive web applications, e.g., to model missing segments, flexible protein parts or hinge regions into cryo-EM density maps.

  13. A deep convolutional feature based learning layer-specific edges method for segmenting OCT image

    Science.gov (United States)

    Fu, Tianyu; Liu, Xiaoming; Liu, Dong; Yang, Zhou

    2017-07-01

    Optical coherence tomography (OCT) is a high-resolution, non-invasive imaging modality that has become one of the most prevalent techniques for ophthalmic diagnosis. However, manual segmentation is often a time-consuming and subjective process. In this work, we present a new method for retinal layer segmentation in retinal optical coherence tomography images, which uses deep convolutional features to train a structured random forest classifier. The experimental results show that our method achieves good results, with a mean distance error of 1.45 pixels versus 1.68 pixels for the state of the art, and an F-score of 0.86, which is also better than the 0.83 obtained by the state-of-the-art method.

  14. Overview of post Cohen-Boyer methods for single segment cloning and for multisegment DNA assembly

    Science.gov (United States)

    Sands, Bryan; Brent, Roger

    2016-01-01

    In 1973, Cohen and coworkers published a foundational paper describing the cloning of DNA fragments into plasmid vectors. In it, they used DNA segments made by digestion with restriction enzymes and joined these in vitro with DNA ligase. These methods established working recombinant DNA technology and enabled the immediate start of the biotechnology industry. Since then, “classical” recombinant DNA technology using restriction enzymes and DNA ligase has matured. At the same time, researchers have developed numerous ways to generate large, complex, multisegment DNA constructions that offer advantages over classical techniques. Here, we provide an overview of “post-Cohen-Boyer” techniques used for cloning single segments into vectors (T/A, Topo cloning, Gateway and Recombineering) and for multisegment DNA assembly (Biobricks, Golden Gate, Gibson, Yeast homologous recombination in vivo, and Ligase Cycling Reaction). We compare and contrast these methods and also discuss issues that researchers should consider before choosing a particular multisegment DNA assembly method. PMID:27152131

  15. A Combined Method for Segmentation and Registration for an Advanced and Progressive Evaluation of Thermal Images

    Directory of Open Access Journals (Sweden)

    Emilio Z. Barcelos

    2014-11-01

    Full Text Available In this paper, a method that combines image analysis techniques, such as segmentation and registration, is proposed for an advanced and progressive evaluation of thermograms. The method is applied for the prevention of muscle injury in high-performance athletes, in collaboration with a Brazilian professional soccer club. The goal is to produce information on spatio-temporal variations of thermograms favoring the investigation of the athletes’ conditions along the competition. The proposed method improves on current practice by providing a means for automatically detecting adaptive body-shaped regions of interest, instead of the manual selection of simple shapes. Specifically, our approach combines the optimization features in Otsu’s method with a correction factor and post-processing techniques, enhancing thermal-image segmentation when compared to other methods. Additional contributions resulting from the combination of the segmentation and registration steps of our approach are the progressive analyses of thermograms in a unique spatial coordinate system and the accurate extraction of measurements and isotherms.

  16. A combined method for segmentation and registration for an advanced and progressive evaluation of thermal images.

    Science.gov (United States)

    Barcelos, Emilio Z; Caminhas, Walmir M; Ribeiro, Eraldo; Pimenta, Eduardo M; Palhares, Reinaldo M

    2014-11-19

    In this paper, a method that combines image analysis techniques, such as segmentation and registration, is proposed for an advanced and progressive evaluation of thermograms. The method is applied for the prevention of muscle injury in high-performance athletes, in collaboration with a Brazilian professional soccer club. The goal is to produce information on spatio-temporal variations of thermograms favoring the investigation of the athletes' conditions along the competition. The proposed method improves on current practice by providing a means for automatically detecting adaptive body-shaped regions of interest, instead of the manual selection of simple shapes. Specifically, our approach combines the optimization features in Otsu's method with a correction factor and post-processing techniques, enhancing thermal-image segmentation when compared to other methods. Additional contributions resulting from the combination of the segmentation and registration steps of our approach are the progressive analyses of thermograms in a unique spatial coordinate system and the accurate extraction of measurements and isotherms.

  17. Segmentation of rodent whole-body dynamic PET images: an unsupervised method based on voxel dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Maroy, R.; Boisgard, R.; Comtat, C.; Dolle, F.; Trebossen, R.; Tavitian, B. [CEA/I2BM/SHFJ/LIME, F-91402 Orsay (France); Frouin, V. [CEA, DSV, DRR, DGF, F-91057 Evry (France); Cathier, P.; Duchesnay, E. [CEA/I2BM/Neurospin/LNAO, F-91190 Saclay (France); D; Nielsen, P.E. [Faculty of Health Sciences, University of Copenhagen, DK-2200 Copenhagen (Denmark)

    2008-07-01

    Positron emission tomography (PET) is a useful tool for pharmacokinetics studies in rodents during the preclinical phase of drug and tracer development. However, rodent organs are small compared to the scanner's intrinsic resolution and are affected by physiological movements. We present a new method for the segmentation of rodent whole-body PET images that takes these two difficulties into account by estimating the pharmacokinetics far from organ borders. The segmentation method proved efficient on whole-body numerical rat phantom simulations including 3-14 organs, together with physiological movements (heart beating, breathing, and bladder filling). The method was resistant to spillover and physiological movements, while other methods failed to obtain a correct segmentation. The radioactivity concentrations calculated with this method also showed an excellent correlation with the manual delineation of organs in a large set of preclinical images. In addition, it was faster, detected more organs, and extracted the organs' mean time activity curves with better confidence in the measure than manual delineation. (authors)

  18. Evaluation of PET volume segmentation methods: comparisons with expert manual delineations.

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Yeni, Nathanaëlle; Petyt, Grégory; Verscheure, Leslie; Huglo, Damien; Béron, Amandine; Adib, Salim; Lion, Georges; Vermandel, Maximilien

    2012-01-01

    [¹⁸F]-Fluorodeoxyglucose PET has become an essential technique in oncology. Accurate segmentation is important for treatment planning. With the increasing number of available methods, it is useful to establish a reliable evaluation tool. Five methods for [¹⁸F]-fluorodeoxyglucose PET image segmentation (MIP-based, Fuzzy C-means, Daisne, Nestle and the 42% threshold-based approach) were evaluated on non-Hodgkin's lymphoma lesions by comparing them with manual delineations performed by a panel of experts. The results were analyzed using different similarity measures. Intraoperator and interoperator variabilities were also studied. The maximum intensity projection-based method provided results closest to the manual delineation set [binary Jaccard index mean (SD) 0.45 (0.15)]. The fuzzy C-means algorithm yielded slightly less satisfactory results. The 42% threshold-based approach yielded results furthest from the manual delineations [binary Jaccard index mean (SD) 0.38 (0.16)]; the Daisne and Nestle methods yielded intermediate results. Important intraoperator and interoperator variabilities were demonstrated. A simple assessment framework based on comparisons with manual delineations was proposed. The use of a set of manual delineations performed by five different experts as the reference seemed suitable for taking the intraoperator and interoperator variabilities into account. The online distribution of the data set generated in this study will make it possible to evaluate any new segmentation method.

  19. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    Directory of Open Access Journals (Sweden)

    Giuly Richard J

    2012-02-01

    Full Text Available Abstract Background While progress has been made in developing automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The first step is a random forest patch classification operating directly on 2D image patches. The second step consists of contour-pair classification. In the final step, we introduce a method to automatically seed a level set operation with output from the previous steps. Results We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that the use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to

  20. Evaluating accuracy of striatal, pallidal, and thalamic segmentation methods: Comparing automated approaches to manual delineation.

    Science.gov (United States)

    Makowski, Carolina; Béland, Sophie; Kostopoulos, Penelope; Bhagwat, Nikhil; Devenyi, Gabriel A; Malla, Ashok K; Joober, Ridha; Lepage, Martin; Chakravarty, M Mallar

    2017-03-01

    Accurate automated quantification of subcortical structures is a greatly pursued endeavour in neuroimaging. In an effort to establish the validity and reliability of these methods in defining the striatum, globus pallidus, and thalamus, we investigated differences in volumetry between manual delineation and automated segmentations derived from the widely used FreeSurfer and FSL packages, and a more recent segmentation method, the MAGeT-Brain algorithm. In a first set of experiments, the basal ganglia and thalamus of thirty subjects (15 first episode psychosis [FEP], 15 controls) were manually defined and compared to the labels generated by the three automated methods. Our results suggest that all methods overestimate volumes compared to the manually derived "gold standard", with the least pronounced differences produced using MAGeT. The least between-method variability was noted for the striatum, whereas marked differences between manual segmentation and MAGeT compared to FreeSurfer and FSL emerged for the globus pallidus and thalamus. Correlations between manual segmentation and automated methods were strongest for MAGeT (range: 0.51 to 0.92), with greater discrepancies between manual labels and automated methods at the lower end of the distribution (i.e. smaller structures), most prominent for bilateral thalamus across automated pipelines and for left globus pallidus for FSL. We then went on to examine the volume and shape of the basal ganglia structures using automated techniques in 135 FEP patients and 88 controls. The striatum and globus pallidus were significantly larger in FEP patients compared to controls bilaterally, irrespective of the method used. MAGeT-Brain was more sensitive to shape-based group differences, and uncovered widespread surface expansions in the striatum and globus pallidus bilaterally in FEP patients compared to controls, and surface contractions in bilateral thalamus (FDR-corrected). By contrast, after using a recommended cluster-wise thresholding method, FSL only detected

  1. An effective method for computerized prediction and segmentation of multiple sclerosis lesions in brain MRI.

    Science.gov (United States)

    Roy, Sudipta; Bhattacharyya, Debnath; Bandyopadhyay, Samir Kumar; Kim, Tai-Hoon

    2017-03-01

    Multiple sclerosis (MS) is a major disease, and progressive MS lesion formation often leads to cognitive decline and physical disability. A quick and accurate method for estimating the number and size of MS lesions in the brain is very important for assessing the progress of the disease and the effectiveness of treatments. However, accurate identification, characterization and quantification of MS lesions in brain magnetic resonance imaging (MRI) is extremely difficult due to frequent changes in location, size and morphology, intensity similarity with normal brain tissues, and inter-subject anatomical variation of brain images. This paper presents a method in which adaptive background generation and binarization using a global threshold are the key steps for MS lesion detection and segmentation. After performing a three-phase level set, we add the third-phase segmented region to the brain contour to connect the normal tissues near the boundary, and then remove all lesions except the maximum connected area and the corpus callosum to generate the adaptive background. The binarization method selects a threshold based on entropy and standard deviation, preceded by non-gamut image enhancement. The background image is then subtracted from the binarized image to find the segmented MS lesions. The subtraction of the background from the binarized image does not generate spurious lesions, and the binarization steps correctly identify the MS lesions and reduce over- and under-segmentation. The average Kappa index is 94.88%, the Jaccard index is 90.43%, the correct detection ratio is 92.60284%, the false detection ratio is 2.55% and the relative area error is 5.97% for the proposed method. Recent existing methods do not achieve such accuracy and low error rates, either quantitatively or visually, due to the generation of many spurious lesions and over-segmentation problems. The proposed method accurately identifies the size and number of lesions as well as the location of lesions detected as a radiologist

  2. A Fast Level Set Method for Synthetic Aperture Radar Ocean Image Segmentation

    Science.gov (United States)

    Huang, Xiaoxia; Huang, Bo; Li, Hongga

    2009-01-01

    Segmentation of high-noise imagery like Synthetic Aperture Radar (SAR) images is still one of the most challenging tasks in image processing. While the level set, a novel approach based on the analysis of the motion of an interface, can be used to address this challenge, the cell-based iterations may make the process of image segmentation remarkably slow, especially for large images. For this reason, fast level set algorithms such as narrow band and fast marching have been attempted. Built upon these, this paper presents an improved fast level set method for SAR ocean image segmentation. The method depends on both an intensity-driven speed and curvature flow, which result in a stable and smooth boundary. Notably, it is optimized to track moving interfaces, keeping up with the point-wise boundary propagation using a single list and a fast up-wind scheme iteration. The list facilitates efficient insertion and deletion of pixels on the propagation front, while the local up-wind scheme is used to update the motion of the curvature front instead of solving partial differential equations. Experiments have been carried out on the extraction of surface slick features from ERS-2 SAR images to substantiate the efficacy of the proposed fast level set method. PMID:22399940

  3. A physics-based intravascular ultrasound image reconstruction method for lumen segmentation.

    Science.gov (United States)

    Mendizabal-Ruiz, Gerardo; Kakadiaris, Ioannis A

    2016-08-01

    Intravascular ultrasound (IVUS) refers to the medical imaging technique consisting of a miniaturized ultrasound transducer located at the tip of a catheter that can be introduced into blood vessels, providing high-resolution, cross-sectional images of their interior. Current methods for generating an IVUS image reconstruction from radio frequency (RF) data do not account for the physics involved in the interaction between the IVUS ultrasound signal and the tissues of the vessel. In this paper, we present a novel method to generate an IVUS image reconstruction based on a scattering model that considers the tissues of the vessel as a distribution of three-dimensional point scatterers. We evaluated the impact of employing the proposed IVUS image reconstruction method on the segmentation of the lumen/wall interface in 40-MHz IVUS data using an existing automatic lumen segmentation method. We compared the results with those obtained using the B-mode reconstruction on 600 randomly selected frames from twelve pullback sequences acquired from rabbit aortas and different arteries of swine. Our results indicate the feasibility of employing the proposed IVUS image reconstruction for the segmentation of the lumen. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Texture based segmentation method to detect atherosclerotic plaque from optical tomography images

    Science.gov (United States)

    Prakash, Ammu; Hewko, Mark; Sowa, Michael; Sherif, Sherif

    2013-06-01

    Optical coherence tomography (OCT) imaging has been widely employed in assessing cardiovascular disease. Atherosclerosis is one of the major causes of cardiovascular disease. However, visual detection of atherosclerotic plaque from OCT images is often limited and is further complicated by high frame rates. We developed a texture-based segmentation method to automatically detect plaque and non-plaque regions in OCT images. To verify our results, we compared them to photographs of the vascular tissue with atherosclerotic plaque that was used to generate the OCT images; the results show a close match. Our texture-based segmentation method could potentially be used for plaque detection in clinical cardiovascular OCT imaging.
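
    As an illustration of texture-based discrimination, the sketch below computes gray-level co-occurrence (GLCM) statistics for two patches with scikit-image (version 0.19 or later for the graycomatrix spelling); GLCM is a common texture descriptor and is not necessarily the feature set used in this record.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=32):
    """Contrast and homogeneity of a gray-level co-occurrence matrix,
    a common texture descriptor for patch classification."""
    q = (patch * (levels - 1)).astype(np.uint8)   # quantize to `levels` bins
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, "contrast").ravel(),
                      graycoprops(glcm, "homogeneity").ravel()])

rng = np.random.default_rng(0)
smooth_patch = rng.random((32, 32)) * 0.2 + 0.4   # low-texture stand-in
rough_patch = rng.random((32, 32))                # high-texture stand-in
f_smooth, f_rough = glcm_features(smooth_patch), glcm_features(rough_patch)
```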

  5. PETSTEP: Generation of synthetic PET lesions for fast evaluation of segmentation methods

    Science.gov (United States)

    Berthon, Beatrice; Häggström, Ida; Apte, Aditya; Beattie, Bradley J.; Kirov, Assen S.; Humm, John L.; Marshall, Christopher; Spezi, Emiliano; Larsson, Anne; Schmidtlein, C. Ross

    2016-01-01

    Purpose This work describes PETSTEP (PET Simulator of Tracers via Emission Projection): a faster and more accessible alternative to Monte Carlo (MC) simulation generating realistic PET images, for studies assessing image features and segmentation techniques. Methods PETSTEP was implemented within Matlab as open source software. It allows generating three-dimensional PET images from PET/CT data or synthetic CT and PET maps, with user-drawn lesions and user-set acquisition and reconstruction parameters. PETSTEP was used to reproduce images of the NEMA body phantom acquired on a GE Discovery 690 PET/CT scanner, and simulated with MC for the GE Discovery LS scanner, and to generate realistic Head and Neck scans. Finally the sensitivity (S) and Positive Predictive Value (PPV) of three automatic segmentation methods were compared when applied to the scanner-acquired and PETSTEP-simulated NEMA images. Results PETSTEP produced 3D phantom and clinical images within 4 and 6 min respectively on a single core 2.7 GHz computer. PETSTEP images of the NEMA phantom had mean intensities within 2% of the scanner-acquired image for both background and largest insert, and 16% larger background Full Width at Half Maximum. Similar results were obtained when comparing PETSTEP images to MC simulated data. The S and PPV obtained with simulated phantom images were statistically significantly lower than for the original images, but led to the same conclusions with respect to the evaluated segmentation methods. Conclusions PETSTEP allows fast simulation of synthetic images reproducing scanner-acquired PET data and shows great promise for the evaluation of PET segmentation methods. PMID:26321409

  6. Breast mass segmentation in digital mammography based on pulse coupled neural network and level set method

    Science.gov (United States)

    Xie, Weiying; Ma, Yide; Li, Yunsong

    2015-05-01

    A novel approach to mammographic image segmentation, termed the PCNN-based level set algorithm, is presented in this paper. As its name implies, it is a method based on a pulse coupled neural network (PCNN) in conjunction with the variational level set method for medical image segmentation. To date, little work has been done on detecting initial zero level set contours with a PCNN algorithm for subsequent level set evolution. In mammographic images, the breast tumor presents high pixel values while the rest of the image is predominantly dark, so we first take the negative of the mammographic image (leaving the breast tumor) before all the pixels of the input image are fired by the PCNN. The PCNN algorithm is thus employed to achieve mammary-specific, initial mass contour detection. After the initial contours are extracted, we define them as the initial zero level set contours for automatic mass segmentation by the variational level set in mammographic image analysis. Furthermore, the proposed algorithm improves the external energy of the variational level set method for low-contrast mammographic images: since the gray scale of the mass region is higher than that of the surrounding region, the Laplace operator is used to modify the external energy, making bright spots much brighter than the surrounding pixels. A preliminary evaluation of the proposed method was performed on a known public database, MIAS, rather than on synthetic images. The experimental results demonstrate that our proposed approach can potentially obtain better mass detection results in terms of sensitivity and specificity. Ultimately, this algorithm could lead to increases in both the sensitivity and specificity of the physicians' interpretation of

  7. Automated Segmentation of Coronary Arteries Based on Statistical Region Growing and Heuristic Decision Method

    Directory of Open Access Journals (Sweden)

    Yun Tian

    2016-01-01

    Full Text Available The segmentation of coronary arteries is a vital process that helps cardiovascular radiologists detect and quantify stenosis. In this paper, we propose a fully automated coronary artery segmentation from cardiac volume data. The method is built on statistical region growing together with a heuristic decision. First, the heart region is extracted using a multi-atlas-based approach. Second, the vessel structures are enhanced via a 3D multiscale line filter. Next, seed points are detected automatically through threshold preprocessing and a subsequent morphological operation. Based on the set of detected seed points, statistics-based region growing is applied. Finally, results are obtained by setting conservative parameters. A heuristic decision method is then used to obtain the desired result automatically, because the region growing parameters vary between patients and the segmentation requires full automation. The experiments are carried out on a dataset that includes eight-patient, multivendor cardiac computed tomography angiography (CTA) volume data. The DICE similarity index, mean distance, and Hausdorff distance metrics are employed to compare the proposed algorithm with two state-of-the-art methods. Experimental results indicate that the proposed algorithm is capable of performing complete, robust, and accurate extraction of coronary arteries.
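
    A generic sketch of the statistical region growing at the core of this record: starting from a seed, a voxel joins the region when its intensity lies within k standard deviations of the running region statistics. The acceptance rule, the value of k, and the synthetic volume are assumptions.

```python
import numpy as np
from collections import deque

def statistical_region_grow(vol, seed, k=2.5):
    """Statistical region growing sketch on a 3-D volume."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    z0, y0, x0 = seed
    # initialize running sums from a 3x3x3 neighborhood so std is not zero
    block = vol[max(z0 - 1, 0):z0 + 2, max(y0 - 1, 0):y0 + 2, max(x0 - 1, 0):x0 + 2]
    s, s2, n = float(block.sum()), float((block ** 2).sum()), block.size
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        mean = s / n
        std = max(max(s2 / n - mean ** 2, 0.0) ** 0.5, 1e-6)
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (z + dz, y + dy, x + dx)
            if all(0 <= nb[i] < vol.shape[i] for i in range(3)) and not grown[nb]:
                v = float(vol[nb])
                if abs(v - mean) < k * std:       # within k std of region stats
                    grown[nb] = True
                    s, s2, n = s + v, s2 + v * v, n + 1
                    q.append(nb)
    return grown

vol = np.random.default_rng(0).normal(0.0, 0.05, (32, 32, 32))
vol[10:20, 10:20, 10:20] += 1.0                   # bright synthetic structure
mask = statistical_region_grow(vol, (15, 15, 15))
```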

  8. Dimensionless Analysis of Segmented Constrained Layer Damping Treatments with Modal Strain Energy Method

    Directory of Open Access Journals (Sweden)

    Shitao Tian

    2016-01-01

    Full Text Available Constrained layer damping treatments promise to be an effective method to control vibration in flexible structures. Cutting both the constraining layer and the viscoelastic layer, which leads to segmentation, increases the damping efficiency. However, this approach is not always effective. A parametric study was carried out using the modal strain energy method to explore the interaction between segmentation and design parameters, including geometry parameters and material properties. A finite element model capable of handling treatments with an extremely thin viscoelastic layer was developed based on interlaminar continuous shear stress theories. Using the developed method, the influence of placing cuts and of changes in design parameters on the shear strain field inside the viscoelastic layer was analyzed, since most design parameters act on the damping efficiency through their influence on the shear strain field. Furthermore, optimal cut arrangements were obtained by adopting a genetic algorithm. Subject to a weight limitation, symmetric and asymmetric configurations were compared; symmetric configurations always presented higher damping. Segmentation was found to be suitable for treatments with a relatively thin viscoelastic layer. Provided that the optimal viscoelastic layer thickness is selected, placing cuts is only applicable to treatments with a low shear strain level inside the viscoelastic layer.
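
    For reference, the modal strain energy method estimates the loss factor of mode r from the fraction of modal strain energy stored in the viscoelastic layer; in its usual textbook form (the paper's finite element variant may differ in detail):

        \eta_r \;\approx\; \eta_v \,\frac{U_v^{(r)}}{U^{(r)}}

    where \eta_v is the loss factor of the viscoelastic material, U_v^{(r)} is the strain energy stored in the viscoelastic layer in mode r, and U^{(r)} is the total modal strain energy of that mode.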

  9. A chance-constrained programming level set method for longitudinal segmentation of lung tumors in CT.

    Science.gov (United States)

    Rouchdy, Youssef; Bloch, Isabelle

    2011-01-01

    This paper presents a novel stochastic level set method for the longitudinal tracking of lung tumors in computed tomography (CT). The proposed model addresses the limitations of registration-based and segmentation-based methods for longitudinal tumor tracking. It combines the advantages of each approach using a new probabilistic framework, namely chance-constrained programming (CCP). Lung tumors can shrink or grow over time, which can be reflected in large changes of shape, appearance, and volume in CT images. Traditional level set methods with a priori knowledge about shape are not suitable, since the tumors undergo random and large changes in shape. Our CCP level set model makes it possible to introduce a flexible prior to track structures with a highly variable shape by permitting a constraint violation of the prior up to a specified probability level. The chance constraints are computed from two points given by the user or from segmented tumors in a reference image. The reference image can be one of the images studied or an external template. We present a numerical scheme to approximate the solution of the proposed model and apply it to track lung tumors in CT. Finally, we compare our approach with a Bayesian level set. The CCP level set model gives the best results: it is more coherent with the manual segmentation.
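
    A chance constraint of the kind described above can be written generically as follows; the distance d, tolerance \delta, and probability level \varepsilon stand in for the paper's specific functional, which is not reproduced here:

        \Pr\!\left[\, d\big(\phi,\ \phi_{\mathrm{prior}}\big) \le \delta \,\right] \;\ge\; 1 - \varepsilon

    that is, the evolving level set \phi may violate the shape prior \phi_{\mathrm{prior}}, but only with probability at most \varepsilon.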

  10. Automated choroid segmentation in three-dimensional 1-μm wide-view OCT images with gradient and regional costs

    Science.gov (United States)

    Shi, Fei; Tian, Bei; Zhu, Weifang; Xiang, Dehui; Zhou, Lei; Xu, Haobo; Chen, Xinjian

    2016-12-01

    Choroid thickness and volume estimated from optical coherence tomography (OCT) images have emerged as important metrics in disease management. This paper presents an automated three-dimensional (3-D) method for segmenting the choroid from 1-μm wide-view swept source OCT image volumes, including the Bruch's membrane (BM) and the choroidal-scleral interface (CSI) segmentation. Two auxiliary boundaries are first detected by modified Canny operators, and then the optic nerve head is detected and removed. The BM and the initial CSI segmentation are achieved by 3-D multiresolution graph search with gradient-based cost. The CSI is further refined by adding a regional cost, calculated from the wavelet-based gradual intensity distance. The segmentation accuracy is quantitatively evaluated on 32 normal eyes by comparison with manual segmentation and by a reproducibility test. The mean choroid thickness difference from the manual segmentation is 19.16±4.32 μm, the mean Dice similarity coefficient is 93.17±1.30%, and the correlation coefficients between fovea-centered volumes obtained on repeated scans are larger than 0.97.

  11. Evaluating the effects of white matter multiple sclerosis lesions on the volume estimation of 6 brain tissue segmentation methods.

    Science.gov (United States)

    Valverde, S; Oliver, A; Díez, Y; Cabezas, M; Vilanova, J C; Ramió-Torrentà, L; Rovira, À; Lladó, X

    2015-06-01

    The accuracy of automatic tissue segmentation methods can be affected by the presence of hypointense white matter lesions during the tissue segmentation process. Our aim was to evaluate the impact of MS white matter lesions on the brain tissue measurements of 6 well-known segmentation techniques. These include straightforward techniques such as Artificial Neural Network and fuzzy C-means as well as more advanced techniques such as the Fuzzy And Noise Tolerant Adaptive Segmentation Method, fMRI of the Brain Automated Segmentation Tool, SPM5, and SPM8. Thirty T1-weighted images from patients with MS from 3 different scanners were segmented twice, first including white matter lesions and then masking the lesions before segmentation and relabeling as WM afterward. The differences in total tissue volume and tissue volume outside the lesion regions were computed between the images by using the 2 methodologies. Total gray matter volume was overestimated by all methods when lesion volume increased. The tissue volume outside the lesion regions was also affected by white matter lesions with differences up to 20 cm(3) on images with a high lesion load (≈50 cm(3)). SPM8 and Fuzzy And Noise Tolerant Adaptive Segmentation Method were the methods less influenced by white matter lesions, whereas the effect of white matter lesions was more prominent on fuzzy C-means and the fMRI of the Brain Automated Segmentation Tool. Although lesions were removed after segmentation to avoid their impact on tissue segmentation, the methods still overestimated GM tissue in most cases. This finding is especially relevant because on images with high lesion load, this bias will most likely distort actual tissue atrophy measurements. © 2015 by American Journal of Neuroradiology.

  12. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    Science.gov (United States)

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
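
    The stages of this pipeline map naturally onto PyWavelets and scikit-learn. The sketch below uses random data with hypothetical shapes, and it approximates the UBM-style GMM-supervector with per-trial GMM posterior statistics, so it mirrors the structure of the method rather than reproducing it.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        def wavelet_features(X, wavelet="db4", level=4):
            """Log-energy of each DWT sub-band, per channel and per trial."""
            feats = []
            for trial in X:                      # trial: (n_channels, n_samples)
                bands = [pywt.wavedec(ch, wavelet, level=level) for ch in trial]
                feats.append([np.log(np.sum(c ** 2) + 1e-12)
                              for ch_bands in bands for c in ch_bands])
            return np.asarray(feats)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((40, 8, 512))    # 40 trials, 8 channels (toy data)
        y = rng.integers(0, 2, 40)               # two motor imagery classes (toy)
        F = PCA(n_components=10).fit_transform(wavelet_features(X))
        gmm = GaussianMixture(n_components=4, random_state=0).fit(F)
        S = gmm.predict_proba(F)                 # soft assignments as crude supervector
        clf = SVC(kernel="rbf").fit(np.hstack([F, S]), y)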

  13. Supervised methods for detection and segmentation of tissues in clinical lumbar MRI.

    Science.gov (United States)

    Ghosh, Subarna; Chaudhary, Vipin

    2014-10-01

    Lower back pain (LBP) is widely prevalent all over the world, and more than 80% of people suffer from LBP at some point in their lives. Moreover, a shortage of radiologists is the most pressing reason for the need for CAD (computer-aided diagnosis) systems. Automatic localization and labeling of intervertebral discs from lumbar MRI is the first step towards computer-aided diagnosis of lower back ailments. Subsequently, for diagnosis and characterization (quantification and localization) of abnormalities like disc herniation and stenosis, a completely automatic segmentation of intervertebral discs and the dural sac is extremely important. The contribution of this paper towards clinical CAD systems is two-fold. First, we propose a method to automatically detect all visible intervertebral discs in clinical sagittal MRI using heuristics and machine learning techniques. We provide a novel end-to-end framework that outputs a tight bounding box for each disc, instead of simply marking the centroid of discs, as has been the trend in the recent past. Second, we propose a method to simultaneously segment all the tissues (vertebrae, intervertebral disc, dural sac, and background) in a lumbar sagittal MRI, using an auto-context approach instead of any explicit shape features or models. Past work tackles the lumbar segmentation problem on a tissue/organ basis, with approaches that tend to perform poorly in clinical scans due to high variability in appearance. We, on the other hand, train a series of robust classifiers (random forests) using image features and sparsely sampled context features, which implicitly represent the shape and configuration of the image. Both methods have been tested on a large clinical dataset comprising 212 cases and show very promising results for both disc detection (98% disc localization accuracy and 2.08 mm mean deviation) and sagittal MRI segmentation (Dice similarity indices of 0.87 and 0.84 for the dural sac and the intervertebral disc, respectively).

  14. CAMSHIFT IMPROVEMENT WITH MEAN-SHIFT SEGMENTATION, REGION GROWING, AND SURF METHOD

    Directory of Open Access Journals (Sweden)

    Ferdinan Ferdinan

    2013-10-01

    Full Text Available The CAMSHIFT algorithm has been widely used in object tracking. CAMSHIFT utilizes color features as the object model, so the original CAMSHIFT may fail when the object color is similar to the background color. In this study, we propose a CAMSHIFT tracker combined with mean-shift segmentation, region growing, and SURF in order to improve tracking accuracy. Mean-shift segmentation and region growing are applied in the object localization phase to extract the important parts of the object. Hue-distance, saturation, and value are used to calculate the Bhattacharyya distance to judge whether the tracked object is lost. Once the object is judged lost, SURF is used to find the lost object, and CAMSHIFT can retrack it. The object tracking system is built with OpenCV. Measurements of accuracy were made using frame-based metrics on the BoBoT (Bonn Benchmark on Tracking) dataset. The results demonstrate that CAMSHIFT combined with mean-shift segmentation, region growing, and the SURF method has higher accuracy than the previous methods.
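
    The core CAMSHIFT loop is available directly in OpenCV. The sketch below shows plain hue-histogram back-projection plus cv2.CamShift; the video file name and the initial window are placeholders, and the mean-shift segmentation, region growing, and SURF re-detection stages of the proposed method are omitted.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("video.mp4")          # hypothetical input file
        ok, frame = cap.read()
        x, y, w, h = 200, 150, 80, 80                # hypothetical initial window
        roi = frame[y:y + h, x:x + w]
        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv_roi, np.array((0, 60, 32)), np.array((180, 255, 255)))
        hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            rect, window = cv2.CamShift(back, window, term)
            box = np.intp(cv2.boxPoints(rect))       # rotated box of the tracked object
            cv2.polylines(frame, [box], True, (0, 255, 0), 2)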

  15. A fully automated method for segmentation and classification of local field potential recordings. Preliminary results.

    Science.gov (United States)

    Diaz-Parra, Antonio; Canals, Santiago; Moratal, David

    2017-07-01

    Identification of brain states measured with electrophysiological methods such as electroencephalography and local field potential (LFP) recordings is of great importance in numerous neuroscientific applications, for instance in brain-computer interfaces, in the diagnosis of neurological disorders, and in investigating how brain rhythms stem from synchronized physiological mechanisms (e.g., memory and learning). In this work, we propose a fully automated method with the aim of partitioning LFP signals into stationary segments and classifying each detected segment into one of three classes (delta, regular theta, or irregular theta rhythms). Our approach is computationally efficient, since the detection and partition of signals into stationary segments is based on only two features (the variance and the so-called spectral error measure) and allows classification at the same time. We developed the algorithm upon analyzing six anesthetized rats, resulting in true positive rates of 97.5%, 91.8% and 79.1% in detecting delta, irregular theta and regular theta rhythms, respectively. This preliminary quantitative evaluation offers encouraging results for further research.

  16. The method of segmentation of leukocytes in information-measuring systems on the basis of light microscopy

    Science.gov (United States)

    Nikitaev, V. G.; Pronichev, A. N.; Polyakov, E. V.; Zaharenko, Yu V.

    2018-01-01

    The paper considers the problem of leukocyte segmentation in microscopic images of bone marrow smears for automated diagnosis of diseases of the blood system. A method is proposed to solve the problem of segmenting contacting leukocytes in images of bone marrow smears. The method is based on the analysis of the structure of the objects with a separation and distance filter, in combination with the watershed method and the distance transformation method.
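
    The combination of a distance transform with marker-controlled watershed that underlies this kind of touching-cell separation can be sketched with SciPy and scikit-image. The Otsu threshold, the bright-foreground assumption, and the peak spacing below are illustrative choices, not the paper's tuned values.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching_cells(gray):
            """Threshold (assumes bright cells on dark background), distance
            transform, markers at distance peaks, then marker-controlled
            watershed to cut touching blobs apart."""
            binary = gray > threshold_otsu(gray)
            dist = ndimage.distance_transform_edt(binary)
            labels, _ = ndimage.label(binary)
            peaks = peak_local_max(dist, min_distance=7, labels=labels)
            markers = np.zeros(gray.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-dist, markers, mask=binary)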

  17. Implementation of Segmentation Methods for the Diagnosis and Prognosis of Mild Cognitive Impairment and Alzheimer Disease

    Science.gov (United States)

    Matoug, S.; Abdel-Dayem, A.

    2012-02-01

    Alzheimer's disease (AD) is the most common form of dementia, affecting seniors aged 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging is a good tool to predict conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease. Since volumetric MRI can detect changes in the size of brain regions, measuring the regions that atrophy during the progress of Alzheimer's disease can help the neurologist in the diagnosis. In the present investigation, we present an automatic tool that reads volumetric MRI and performs 2-dimensional (volume slices) and volumetric segmentation in order to segment gray matter, white matter and cerebrospinal fluid (CSF). We used the MRI data sets from the Open Access Series of Imaging Studies (OASIS) database.

  18. Diffusion-weighted magnetic resonance imaging during radiotherapy of locally advanced cervical cancer - treatment response assessment using different segmentation methods

    DEFF Research Database (Denmark)

    Haack, Søren; Tanderup, Kari; Kallehauge, Jesper Folsted

    2015-01-01

    .01), and the volumes changed significantly during treatment (p ... clustering (mean ± sd: 0 ... into external beam RT (WK2RT) and one week prior to brachytherapy (PREBT). Volumes on DW-MRI were segmented using three semi-automatic segmentation methods: "cluster analysis", "relative signal intensity (SD4)" and "region growing". Segmented volumes were compared to the gross tumor volume (GTV) identified on T ... 0.52 ± 0.3). There was no significant difference in mean ADC value compared at the same treatment time. Mean tumor ADC value increased significantly (p ... with treatment time. CONCLUSION: Among the three semi-automatic segmentations of hyper-intense intensities on DW-MR images ...

  19. Comparison of pattern detection methods in microarray time series of the segmentation clock.

    Directory of Open Access Journals (Sweden)

    Mary-Lee Dequéant

    2008-08-01

    Full Text Available While genome-wide gene expression data are generated at an increasing rate, the repertoire of approaches for pattern discovery in these data is still limited. Identifying subtle patterns of interest in large amounts of data (tens of thousands of profiles associated with a certain level of noise remains a challenge. A microarray time series was recently generated to study the transcriptional program of the mouse segmentation clock, a biological oscillator associated with the periodic formation of the segments of the body axis. A method related to Fourier analysis, the Lomb-Scargle periodogram, was used to detect periodic profiles in the dataset, leading to the identification of a novel set of cyclic genes associated with the segmentation clock. Here, we applied to the same microarray time series dataset four distinct mathematical methods to identify significant patterns in gene expression profiles. These methods are called: Phase consistency, Address reduction, Cyclohedron test and Stable persistence, and are based on different conceptual frameworks that are either hypothesis- or data-driven. Some of the methods, unlike Fourier transforms, are not dependent on the assumption of periodicity of the pattern of interest. Remarkably, these methods identified blindly the expression profiles of known cyclic genes as the most significant patterns in the dataset. Many candidate genes predicted by more than one approach appeared to be true positive cyclic genes and will be of particular interest for future research. In addition, these methods predicted novel candidate cyclic genes that were consistent with previous biological knowledge and experimental validation in mouse embryos. Our results demonstrate the utility of these novel pattern detection strategies, notably for detection of periodic profiles, and suggest that combining several distinct mathematical approaches to analyze microarray datasets is a valuable strategy for identifying genes that

  20. Comparison of Pattern Detection Methods in Microarray Time Series of the Segmentation Clock

    Science.gov (United States)

    Dequéant, Mary-Lee; Ahnert, Sebastian; Edelsbrunner, Herbert; Fink, Thomas M. A.; Glynn, Earl F.; Hattem, Gaye; Kudlicki, Andrzej; Mileyko, Yuriy; Morton, Jason; Mushegian, Arcady R.; Pachter, Lior; Rowicka, Maga; Shiu, Anne; Sturmfels, Bernd; Pourquié, Olivier

    2008-01-01

    While genome-wide gene expression data are generated at an increasing rate, the repertoire of approaches for pattern discovery in these data is still limited. Identifying subtle patterns of interest in large amounts of data (tens of thousands of profiles) associated with a certain level of noise remains a challenge. A microarray time series was recently generated to study the transcriptional program of the mouse segmentation clock, a biological oscillator associated with the periodic formation of the segments of the body axis. A method related to Fourier analysis, the Lomb-Scargle periodogram, was used to detect periodic profiles in the dataset, leading to the identification of a novel set of cyclic genes associated with the segmentation clock. Here, we applied to the same microarray time series dataset four distinct mathematical methods to identify significant patterns in gene expression profiles. These methods are called: Phase consistency, Address reduction, Cyclohedron test and Stable persistence, and are based on different conceptual frameworks that are either hypothesis- or data-driven. Some of the methods, unlike Fourier transforms, are not dependent on the assumption of periodicity of the pattern of interest. Remarkably, these methods identified blindly the expression profiles of known cyclic genes as the most significant patterns in the dataset. Many candidate genes predicted by more than one approach appeared to be true positive cyclic genes and will be of particular interest for future research. In addition, these methods predicted novel candidate cyclic genes that were consistent with previous biological knowledge and experimental validation in mouse embryos. Our results demonstrate the utility of these novel pattern detection strategies, notably for detection of periodic profiles, and suggest that combining several distinct mathematical approaches to analyze microarray datasets is a valuable strategy for identifying genes that exhibit novel
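
    The Lomb-Scargle periodogram used in both of the records above is available in SciPy. A toy sketch on a synthetic profile with roughly the 2 h mouse segmentation-clock period; the time base, noise level, and period grid are invented for illustration.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 20, 60))            # irregular sampling times (h)
        x = np.sin(2 * np.pi * t / 2.0) + 0.3 * rng.standard_normal(60)
        periods = np.linspace(0.5, 10, 500)            # candidate periods (h)
        omega = 2 * np.pi / periods                    # angular frequencies
        power = lombscargle(t, x - x.mean(), omega, normalize=True)
        print("best period = %.2f h" % periods[np.argmax(power)])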

  1. Nondestructive Damage Assessment of Composite Structures Based on Wavelet Analysis of Modal Curvatures: State-of-the-Art Review and Description of Wavelet-Based Damage Assessment Benchmark

    Directory of Open Access Journals (Sweden)

    Andrzej Katunin

    2015-01-01

    Full Text Available The application of composite structures as elements of machines and vehicles working under various operational conditions causes degradation and the occurrence of damage. Considering that composites are often used for critical elements, for example parts of aircraft and other vehicles, it is extremely important to maintain them properly and to detect, localize, and identify damage occurring during operation at the earliest possible stage of its development. Among the great variety of nondestructive testing methods developed to date, the vibration-based methods are among the least expensive and, with appropriate processing of the measurement data, simultaneously effective. Over the last decades, wavelet analysis has gained great popularity in vibration-based structural testing due to its high sensitivity to damage. This paper presents an overview of results of numerous researchers working in the area of vibration-based damage assessment supported by wavelet analysis, and a detailed description of the Wavelet-based Structural Damage Assessment (WavStructDamAs) Benchmark, which summarizes the author's 5-year research in this area. The benchmark covers example problems of damage identification in various composite structures with various damage types using numerous wavelet transforms and supporting tools. The benchmark is openly available and allows performing the analysis on the example problems as well as on one's own problems using the available analysis tools.
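
    A minimal illustration of why wavelet analysis is sensitive to damage: a small local perturbation of a smooth mode-shape curvature produces a localized ridge in the continuous wavelet transform. The beam, mode shape, and damage signature below are synthetic toy data, not drawn from the benchmark.

        import numpy as np
        import pywt

        x = np.linspace(0, 1, 400)
        curvature = np.sin(np.pi * x)          # first bending mode shape (toy)
        curvature[120:130] += 0.02             # hypothetical local damage signature
        # small scales respond to the local discontinuity, not the smooth mode
        coeffs, _ = pywt.cwt(curvature, np.arange(1, 9), "gaus2")
        damage_index = np.abs(coeffs).max(axis=0)
        interior = damage_index[20:-20]        # ignore wavelet edge effects
        print("suspected damage near x =", x[20 + int(np.argmax(interior))])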

  2. Automatic morphological characterization of nanobubbles with a novel image segmentation method and its application in the study of nanobubble coalescence

    Directory of Open Access Journals (Sweden)

    Yuliang Wang

    2015-04-01

    Full Text Available Nanobubbles (NBs) on hydrophobic surfaces in aqueous solvents have shown great potential in numerous applications. In this study, the morphological characterization of NBs in AFM images was carried out with the assistance of a novel image segmentation method. The method combines the classical threshold method and a modified active contour method to achieve optimized image segmentation. The image segmentation results obtained with the classical threshold method and the proposed modified method were compared. With the modified method, the diameter, contact angle, and radius of curvature were automatically measured for all NBs in the AFM images. The influence of the selection of the threshold value on the segmentation result is discussed. Moreover, the morphological changes in the NBs during coalescence under external disturbance were studied in terms of density, covered area, and volume.

  3. Automatic lung segmentation method for MRI-based lung perfusion studies of patients with chronic obstructive pulmonary disease.

    Science.gov (United States)

    Kohlmann, Peter; Strehlow, Jan; Jobst, Betram; Krass, Stefan; Kuhnigk, Jan-Martin; Anjorin, Angela; Sedlaczek, Oliver; Ley, Sebastian; Kauczor, Hans-Ulrich; Wielpütz, Mark Oliver

    2015-04-01

    A novel fully automatic lung segmentation method for magnetic resonance (MR) images of patients with chronic obstructive pulmonary disease (COPD) is presented. The main goal of this work was to ease the tedious and time-consuming task of manual lung segmentation, which is required for region-based volumetric analysis of four-dimensional MR perfusion studies that goes beyond the analysis of small regions of interest. The first step in the automatic algorithm is the segmentation of the lungs in morphological MR images with higher spatial resolution than the corresponding perfusion MR images. Subsequently, the segmentation mask of the lungs is transferred to the perfusion images via nonlinear registration. Finally, the masks for the left and right lungs are subdivided into a user-defined number of partitions. Fourteen patients with two time points, resulting in 28 perfusion data sets, were available for the preliminary evaluation of the developed methods. Resulting lung segmentation masks are compared with reference segmentations from experienced chest radiologists, as well as with total lung capacity (TLC) acquired by full-body plethysmography; TLC results were available for thirteen patients. The relevance of the presented method is indicated by an evaluation which shows high correlation between automatically generated lung masks and corresponding ground-truth estimates. The evaluation of the developed methods indicates good accuracy and shows that automatically generated lung masks differ from expert segmentations about as much as segmentations from different experts differ from each other.

  4. Developing suitable methods for effective characterization of electrical properties of root segments

    Science.gov (United States)

    Ehosioke, Solomon; Phalempin, Maxime; Garré, Sarah; Kemna, Andreas; Huisman, Sander; Javaux, Mathieu; Nguyen, Frédéric

    2017-04-01

    The root system represents the hidden half of the plant, which plays a key role in food production and therefore needs to be well understood. Root system characterization has been a great challenge because the roots are buried in the soil. This, coupled with subsurface heterogeneity and the transient nature of the biogeochemical processes that occur in the root zone, makes it difficult to access and monitor the root system over time. Traditional point-sampling methods for root investigation (root excavation, monoliths, minirhizotrons, etc.) do not account for the transient nature and spatial variability of the root zone, and they often disturb the natural system under investigation. The quest to overcome these challenges has led to an increase in the application of geophysical methods. Recent studies have shown a correlation between bulk electrical resistivity and root mass density, but an understanding of the contribution of the individual segments of the root system to that bulk signal is still missing. This study is an attempt to understand the electrical properties of roots at the segment scale (1-5 cm) for more effective characterization of the electrical signal of the full root architecture. The target plants were grown in three different media (pot soil, hydroponics, and a mixture of sand, perlite and vermiculite). Resistance measurements were carried out on single segments of each study plant using a voltmeter, while the diameter was measured using a digital calliper. The axial resistance was calculated using the measured resistance and the geometric parameters. This procedure was repeated for each plant replica over a period of 75 days, which enabled us to study the effects of age, growth medium, diameter, and length on the electrical response of the root segments of the selected plants. The growth medium was found to have a significant effect on the root electrical response, while the effect of root diameter on the electrical response was found to vary.
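
    Converting a measured segment resistance into an intrinsic material quantity is a one-line normalization via rho = R * A / L, treating the segment as a uniform cylinder. The uniform-cylinder assumption and the sample values below are illustrative simplifications, not the study's data.

        import math

        def axial_resistivity(resistance_ohm, diameter_m, length_m):
            """Resistivity of a root segment modeled as a uniform cylinder:
            rho = R * A / L (a simplifying assumption for illustration)."""
            area = math.pi * (diameter_m / 2.0) ** 2
            return resistance_ohm * area / length_m

        # e.g. 1 Mohm over a 3 cm segment of 1 mm diameter (hypothetical values)
        print(axial_resistivity(1e6, 1e-3, 0.03), "ohm*m")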

  5. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    Energy Technology Data Exchange (ETDEWEB)

    Berthon, B., E-mail: BerthonB@cardiff.ac.uk; Marshall, C. [Wales Research and Diagnostic Positron Emission Tomography Imaging Centre, Cardiff CF14 4XN (United Kingdom); Evans, M. [Velindre Cancer Centre, Cardiff CF14 2TL (United Kingdom); Spezi, E. [Department of Medical Physics, Velindre Cancer Centre, Cardiff CF14 2TL (United Kingdom)

    2014-02-15

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick plastic wall inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts, for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids derived with different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres of volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the dice similarity coefficient (DSC), and error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly lower DSC for tori (DSC < 0.5), caused by a failure to recover the object geometry. The region-growing method reached high levels of accuracy for most inserts (DSC > 0.76 except for tori) but showed the largest errors in the recovery of pear and drop dimensions (higher than 10% and 30% of the true length, respectively).
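
    The dice similarity coefficient (DSC) used for evaluation here reduces to a few lines over boolean masks:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient of two boolean masks (1.0 = perfect):
            DSC = 2 |A intersect B| / (|A| + |B|)."""
            a = np.asarray(a, dtype=bool)
            b = np.asarray(b, dtype=bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())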

  6. A novel colonic polyp volume segmentation method for computer tomographic colonography

    Science.gov (United States)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Song, Bowen; Peng, Hao; Wang, Yunhong; Wang, Lihua; Liang, Zhengrong

    2014-03-01

    Colorectal cancer is the third most common type of cancer. However, this disease can be prevented by the detection and removal of precursor adenomatous polyps after diagnosis by experts on computer tomographic colonography (CTC). During CTC diagnosis, the radiologist looks for colon polyps and measures not only their size but also their malignancy. Segmenting polyp volumes from their complicated growing environment is of much significance for accomplishing the CTC-based early diagnosis task. Previously, polyp volumes were mainly obtained from manual or semi-automatic drawing by radiologists. As a result, some deviations cannot be avoided, since the polyps are usually small (6~9 mm) and radiologists' experience and knowledge vary from one to another. In order to achieve automatic polyp segmentation by machine, we propose a new method based on a colon decomposition strategy. We evaluated our algorithm on both phantom and patient data. Experimental results demonstrate that our approach is capable of segmenting small polyps from their complicated growing background.

  7. A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain.

    Science.gov (United States)

    Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel

    2016-04-01

    Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.

  8. Brain volumetric measures in alcoholics: a comparison of two segmentation methods

    Directory of Open Access Journals (Sweden)

    Marlene Oscar-Berman

    2011-02-01

    Full Text Available Measures of regional brain volumes, which can be derived from magnetic resonance imaging (MRI) images by dividing a brain into its constituent parts, can be used as structural indicators of many different neuroanatomical diseases and disorders, including alcoholism. Reducing the time and cost required for brain segmentation would greatly facilitate both clinical and research endeavors. In the present study, we compared two segmentation methods to measure brain volumes in alcoholic and nonalcoholic control subjects: 1) an automated system (FreeSurfer) and 2) a semi-automated, supervised system (Cardviews), developed by the Center for Morphometric Analysis (CMA) at Massachusetts General Hospital, which requires extensive staff and oversight. The participants included 32 abstinent alcoholics (19 women) and 37 demographically matched, nonalcoholic controls (17 women). Brain scans were acquired in a 3 Tesla MRI scanner. The FreeSurfer and CMA methods showed good agreement for the lateral ventricles, cerebral white matter, caudate, and thalamus. In general, the larger the brain structure, the closer the agreement between the methods, except for the cerebral cortex, which showed large between-method differences. However, several other discrepancies existed between the FreeSurfer and CMA volume measures of alcoholics' brains. The CMA volumes, but not FreeSurfer, demonstrated that the thalamus, caudate, and putamen were significantly smaller in male alcoholics as compared with male controls. Additionally, the hippocampus was significantly smaller in alcoholic

  9. Validation of volumetric and single-slice MRI adipose analysis using a novel fully automated segmentation method.

    Science.gov (United States)

    Addeman, Bryan T; Kutty, Shelby; Perkins, Thomas G; Soliman, Abraam S; Wiens, Curtis N; McCurdy, Colin M; Beaton, Melanie D; Hegele, Robert A; McKenzie, Charles A

    2015-01-01

    To validate a fully automated adipose segmentation method with magnetic resonance imaging (MRI) fat fraction abdominal imaging. We hypothesized that this method is suitable for segmentation of subcutaneous adipose tissue (SAT) and intra-abdominal adipose tissue (IAAT) in a wide population range, easy to use, works with a variety of hardware setups, and is highly repeatable. Analysis was performed comparing precision and analysis time of manual and automated segmentation of single-slice imaging, and volumetric imaging (78-88 slices). Volumetric and single-slice data were acquired in a variety of cohorts (body mass index [BMI] 15.6-41.76) including healthy adult volunteers, adolescent volunteers, and subjects with nonalcoholic fatty liver disease and lipodystrophies. A subset of healthy volunteers was analyzed for repeatability in the measurements. The fully automated segmentation was found to have excellent agreement with manual segmentation with no substantial bias across all study cohorts. Repeatability tests showed a mean coefficient of variation of 1.2 ± 0.6% for SAT, and 2.7 ± 2.2% for IAAT. Analysis with automated segmentation was rapid, requiring 2 seconds per slice compared with 8 minutes per slice with manual segmentation. We demonstrate the ability to accurately and rapidly segment regional adipose tissue using fat fraction maps across a wide population range, with varying hardware setups and acquisition methods. © 2014 Wiley Periodicals, Inc.

  10. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts.

    Science.gov (United States)

    Berthon, B; Marshall, C; Evans, M; Spezi, E

    2014-02-01

    The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick plastic wall inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts, for the evaluation and comparison of a set of eight promising PET-AS approaches. Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids derived with different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres of volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the dice similarity coefficient (DSC), and error in dimensions. The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly lower DSC for tori (DSC < 0.5), caused by a failure to recover the object geometry. The region-growing method reached high levels of accuracy for most inserts (DSC > 0.76 except for tori) but showed the largest errors in the recovery of pear and drop dimensions (higher than 10% and 30% of the true length, respectively). Large errors were visible for one of the gradient-based methods

  11. The evolution of spillover effects between oil and stock markets across multi-scales using a wavelet-based GARCH-BEKK model

    Science.gov (United States)

    Liu, Xueyong; An, Haizhong; Huang, Shupei; Wen, Shaobo

    2017-01-01

    Aiming to investigate the evolution of mean and volatility spillovers between oil and stock markets in the time and frequency dimensions, we employed WTI crude oil prices, the S&P 500 index (USA) and the MICEX index (Russia) for the period Jan. 2003-Dec. 2014 as sample data. We first applied a wavelet-based GARCH-BEKK method to examine the spillover features in the frequency dimension. To consider the evolution of spillover effects in the time dimension at multiple scales, we then divided the full sample period into three sub-periods: the pre-crisis period, the crisis period, and the post-crisis period. The results indicate that spillover effects vary across wavelet scales in terms of strength and direction. By analyzing the time-varying linkage, we found different evolution features of the spillover effects between the oil-US stock market pair and the oil-Russia stock market pair. The spillover relationship between oil and the US stock market is shifting to the short term, while the spillover relationship between oil and the Russian stock market is extending to all time scales. This result implies that the linkage between oil and the US stock market is weakening in the long term, while the linkage between oil and the Russian stock market is becoming closer at all time scales. This may explain why the US stock index and the Russian stock index showed opposite trends as the oil price fell in the post-crisis period.
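
    A full wavelet-based GARCH-BEKK estimation is beyond a short snippet, but the multiscale decomposition that underlies it is easy to show. The sketch below correlates two return series scale by scale with a stationary wavelet transform; it captures horizon-dependent co-movement only, not volatility spillover, and all names are illustrative.

        import numpy as np
        import pywt

        def scale_correlations(r1, r2, wavelet="db4", level=5):
            """Per-scale correlation of two return series via the stationary
            wavelet transform; a horizon-by-horizon co-movement profile, not
            a GARCH-BEKK spillover estimate."""
            n = 2 ** level * (min(len(r1), len(r2)) // 2 ** level)  # SWT length rule
            c1 = pywt.swt(np.asarray(r1[:n], float) - np.mean(r1[:n]), wavelet, level)
            c2 = pywt.swt(np.asarray(r2[:n], float) - np.mean(r2[:n]), wavelet, level)
            # correlate the detail coefficients of each scale
            return [float(np.corrcoef(d1, d2)[0, 1])
                    for (_, d1), (_, d2) in zip(c1, c2)]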

  12. A new intelligent method for minerals segmentation in thin sections based on a novel incremental color clustering

    Science.gov (United States)

    Izadi, Hossein; Sadri, Javad; Mehran, Nosrat-Agha

    2015-08-01

    Mineral segmentation in thin sections is a challenging, popular, and important research topic in computational geology, mineralogy, and mining engineering. Mineral segmentation in thin sections containing altered minerals, in which there are no evident, closed boundaries, is a rather complex process. Most thin sections produced in industry include altered minerals. However, intelligent mineral segmentation in thin sections containing altered minerals has not been widely investigated in the literature, and the current state-of-the-art algorithms are not able to accurately segment minerals in such thin sections. In this paper, a novel method based on incremental learning for clustering pixels is proposed in order to segment index minerals in thin sections both with and without altered minerals. Our algorithm uses 12 color features extracted from thin section images: red, green, blue, hue, saturation and intensity, under plane- and cross-polarized light in the maximum intensity situation. The proposed method has been tested on 155 igneous samples, and overall accuracies of 92.15% and 85.24% have been obtained for thin sections without altered minerals and thin sections containing altered minerals, respectively. Experimental results indicate that the proposed method outperforms other similar methods in the literature, especially for segmenting thin sections containing altered minerals. The proposed algorithm could be applied in applications which require real-time segmentation or an efficient identification map, such as petroleum geology, petrography and NASA Mars exploration.

  13. A perceptually oriented method for contrast enhancement and segmentation of dermoscopy images.

    Science.gov (United States)

    Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar; Mushtaq, Qaisar

    2013-02-01

    Dermoscopy images often suffer from low contrast caused by different lighting conditions, which reduces the accuracy of lesion border detection. Accordingly, for lesion recognition, automatic melanoma border detection (MBD) is an initial as well as crucial task. In this article, a novel perceptually oriented approach for MBD is presented by combining region- and edge-based segmentation techniques. The MBD system for color contrast and segmentation improvement consists of four main steps: first, the RGB dermoscopy image is transformed to the CIE L*a*b* color space; lesion contrast is then enhanced by adjusting and mapping the intensity values of the lesion pixels in the specified range using the three channels of CIE L*a*b*; a hill-climbing algorithm is later used to detect a region-of-interest (ROI) map in a perceptually oriented color space using the color channels (L*, a*, b*); and finally, adaptive thresholding is applied to determine the optimal lesion border. Manually drawn borders obtained from an experienced dermatologist are utilized as ground truth for performance evaluation. The proposed MBD method is tested on a total of 100 dermoscopy images. A comparative study with three state-of-the-art color- and texture-based segmentation techniques (JSeg, dermatologist-like tumor area extraction: DTEA, and region-based active contours: RAC) is also conducted to show the effectiveness of our MBD method using measures of true positive rate (TPR), false positive rate (FPR), and error probability (EP). Among the different algorithms, our MBD algorithm achieved a TPR of 94.25%, an FPR of 3.56%, and an EP of 4%. The proposed MBD approach is highly accurate in detecting the lesion border area. The MBD software and sample dermoscopy images can be downloaded at http://cs.ntu.edu.pk/research.php. © 2012 John Wiley & Sons A/S.

  14. A novel physics-inspired method for image region segmentation by imitating the carrier immigration in semiconductor materials

    Directory of Open Access Journals (Sweden)

    Zhuang Xiaodong

    2017-01-01

    Full Text Available A novel method for image region segmentation is proposed, inspired by the carrier immigration mechanism in semiconductor materials. Carrier diffusion and drift are simulated in the proposed model, and the sign distribution of the net carrier at the model's balance state is exploited for region segmentation. Experiments have been carried out on test images and real-world images, and they prove the effectiveness of the proposed method.

  15. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for the enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications and tumours and to reduce false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method is evaluated in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture-based, k-means, and FCM clustering, as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time than the standard FCM and the other segmentation methods considered.
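
    The first two stages (CLAHE preprocessing, then sharpening the DWT approximation band) can be sketched with OpenCV and PyWavelets. Gaussian unsharp masking stands in for the paper's nonlinear complex diffusion, the file name and weights are placeholders, and the modified FCM step is omitted.

        import cv2
        import numpy as np
        import pywt

        img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img)

        # one-level 2-D DWT; sharpening acts on the approximation band only
        cA, (cH, cV, cD) = pywt.dwt2(enhanced.astype(float), "db2")
        # Gaussian unsharp masking as a simplified stand-in for complex diffusion
        cA_sharp = cA + 0.8 * (cA - cv2.GaussianBlur(cA, (0, 0), 2))
        restored = pywt.idwt2((cA_sharp, (cH, cV, cD)), "db2")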

  16. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is used to determine the anatomical structure, to identify abnormalities, and to detect the various tissues, which helps in treatment planning prior to radiation therapy. The proposed technique is a multilevel thresholding (MT) method based on the phenomenon of electromagnetism, and it segments the image into three tissues: White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering using an anisotropic diffusion filter in the preprocessing stage. The thresholding method uses the forces of attraction and repulsion between charged particles to increase the population. It is the combination of the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained by the proposed method were compared with the ground-truth images and gave the best values for the measures of sensitivity, specificity, and segmentation accuracy. The results on 10 MR brain images proved that the proposed method segments the three brain tissues more accurately than existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), the Bacterial Foraging Algorithm (BFA), the Genetic Algorithm (GA), and the Fuzzy Local Gaussian Mixture Model (FLGMM).
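
    As a deterministic baseline for the electromagnetism-optimized multilevel thresholding, scikit-image ships a multi-Otsu implementation that splits a slice into three intensity classes (on T1-weighted images, roughly CSF, GM, and WM by increasing intensity; a simplifying assumption for illustration):

        import numpy as np
        from skimage.filters import threshold_multiotsu

        def segment_brain_tissues(mri_slice):
            """Three-class multilevel Otsu thresholding; returns a label map
            with values 0, 1, 2 for the three intensity classes."""
            thresholds = threshold_multiotsu(mri_slice, classes=3)
            return np.digitize(mri_slice, bins=thresholds)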

  17. Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis

    Science.gov (United States)

    Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.

    2015-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm which identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used for the enhancement of vessel features. Vessel labeling methods are then used to successfully distinguish vessel and background features. Statistical and clinical evaluations of segmentation accuracy in DE CE-DBT images are ongoing.
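
    The weighted logarithmic subtraction that produces the DE images is a one-liner over the HE/LE pair. The weight w is protocol dependent and is chosen to cancel non-iodinated tissue contrast; the epsilon guard is an implementation detail added here.

        import numpy as np

        def dual_energy_subtraction(high, low, w):
            """Weighted logarithmic subtraction of a high/low-energy image
            pair: DE = ln(HE) - w * ln(LE)."""
            eps = 1e-6                      # guard against log(0)
            return np.log(high + eps) - w * np.log(low + eps)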

  18. Predictive efficiency of distinct color image segmentation methods for measuring intramuscular fat in beef

    Directory of Open Access Journals (Sweden)

    Renius Mello

    2015-10-01

    Full Text Available Intramuscular fat (IMF) influences important quality characteristics of meat, such as flavor, juiciness, palatability, odor and tenderness. Thus, the objective of this study was to apply the following image processing techniques to quantify the IMF in beef: palette; sampling by interval of coordinates; black and white threshold; and discriminant function of colors. Thirty-five samples of beef, with a wide range of IMF content, were used. Color images were taken of the meat samples from different muscles, with variability in the IMF content. The IMF of a thin meat cross-section was determined by chemical lipid extraction and was predicted by image analysis, and the chemical method was compared with the image analysis. The segmentation procedures were validated by fitting a linear regression equation to the series of observed and predicted values, with the regression parameters evaluated by the F-test. The predictive power of these approaches was also compared by residual analysis and by decomposition of the mean square deviations. The results showed that the discriminant function was the best color segmentation method for measuring intramuscular fat in digital images, but it required adjustments in the prediction pattern.

  19. Evaluation of segmental body composition by gender in obese children using bioelectric impedance analysis method

    Directory of Open Access Journals (Sweden)

    İhsan Çetin

    2015-12-01

    Full Text Available Objective: In this study, it was aimed to evaluate the segmental body composition of children diagnosed with obesity using the bioelectrical impedance analysis method, by gender. Methods: 48 children aged 6-15 years, 21 of whom were boys and 27 girls, diagnosed with obesity at the Erciyes University Medical Faculty Department of Pediatric Endocrinology Outpatient Clinic, were included in our study from April to June 2011. Those over the 95th percentile were defined as the obese group. A Tanita BC-418 device was used to analyze body composition. Results: As a result of the bioelectrical impedance analysis, lean body mass and body muscle mass were found to be statistically significantly higher in obese girls compared with obese boys. However, lean mass of the left arm, left leg muscle mass, and basal metabolic rate were found to be statistically significantly lower in obese girls compared with obese boys. Conclusion: Consequently, it may be suggested that segmental analysis, in which gender differences are taken into account, can provide a proper exercise pattern and a healthy way of weight loss in children for the prevention of obesity and associated diseases, including type 2 diabetes and cardiovascular diseases.

  20. A new method for image segmentation based on Fuzzy C-means algorithm on pixonal images formed by bilateral filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara

    2013-01-01

    In this paper, a new pixon-based method is presented for image segmentation. In the proposed algorithm, bilateral filtering is used as a kernel function to form a pixonal image. Using this filter reduces the noise and smoothes the image slightly. By using this pixon-based method, image over-segmentation can be avoided. Indeed, the bilateral filtering, as a preprocessing step, eliminates the unnecessary details of the image and results in a small number of pixons, faster performance and more robustness against unwanted environmental noise. Then, the obtained pixonal image is segmented using the hierarchical clustering method (Fuzzy C-means algorithm). The experimental results show that the proposed pixon-based approach has a reduced computational load and a better accuracy compared to the other existing pixon-based image segmentation techniques.

  1. Locally-Constrained Region-Based Methods for DW-MRI Segmentation

    National Research Council Canada - National Science Library

    Melonakos, John; Kubicki, Marek; Niethammer, Marc; Miller, James V; Mohan, Vandana; Tannenbaum, Allen

    2007-01-01

    .... In this work, we show results for segmenting the cingulum bundle. Finally, we explain how this approach and extensions thereto overcome a major problem that typical region-based flows experience when attempting to segment neural fiber bundles.

  2. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    Science.gov (United States)

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.

  3. Kidney Segmentation in CT Data Using Hybrid Level-Set Method with Ellipsoidal Shape Constraints

    Directory of Open Access Journals (Sweden)

    Skalski Andrzej

    2017-03-01

    Full Text Available With the development of medical diagnostic and imaging techniques, organ-sparing surgeries are increasingly feasible; renal cancer is one example. In order to minimize the amount of healthy kidney removed during the treatment procedure, it is essential to design a system that provides three-dimensional visualization prior to the surgery. The information about the location of crucial structures (e.g. kidney, renal ureter and arteries) and their mutual spatial arrangement should be delivered to the operator. The introduction of such a system meets both the requirements and expectations of oncological surgeons. In this paper, we present one of the most important steps towards building such a system: a new approach to kidney segmentation from Computed Tomography data. The segmentation is based on the Active Contour Method using the Level Set (LS) framework. During the segmentation process, an energy functional describing the image is minimized. The functional proposed in this paper consists of four terms. In contrast to the original approach containing solely the region and boundary terms, an ellipsoidal shape constraint is also introduced. This additional limitation imposed on the evolution of the function prevents leakage to undesired regions. The proposed methodology was tested on 10 Computed Tomography scans from patients diagnosed with renal cancer. The database contained the results of studies performed in several medical centers and on different devices. The average effectiveness of the proposed solution, in terms of the Dice coefficient and average Hausdorff distance, was equal to 0.862 and 2.37 mm, respectively. Both the qualitative and quantitative evaluations confirm the effectiveness of the proposed solution.
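
    For readers unfamiliar with shape-constrained level sets, a plausible form of a four-term functional of the kind described above is written below: H is the Heaviside function selecting the contour interior, c1 and c2 are the mean intensities inside and outside, g is an edge-stopping function, and φE is the signed distance to a best-fitting ellipsoid. This is only an illustration of the structure; the paper's exact terms and weights may differ.

    ```latex
    % region terms + boundary term + ellipsoidal shape prior (illustrative form)
    E(\phi) =
        \lambda_1 \int_{\Omega} |I - c_1|^2 \, H(\phi)\, d\mathbf{x}
      + \lambda_2 \int_{\Omega} |I - c_2|^2 \, \big(1 - H(\phi)\big)\, d\mathbf{x}
      + \mu \int_{\Omega} g(|\nabla I|)\, |\nabla H(\phi)|\, d\mathbf{x}
      + \gamma \int_{\Omega} \big(\phi - \phi_E\big)^2\, d\mathbf{x}
    ```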

  4. Method, Software and Aparatus for Segmenting a Series of 2D or 3D Images

    NARCIS (Netherlands)

    Noble, Nicholas M.I.; Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2005-01-01

    The invention relates to an apparatus having means for segmenting a series of 2D or 3D images obtained by monitoring a patient's organ or other body part, wherein a first segmentation is carried out on a first image of the series of images and wherein the first segmentation is used for the subsequent

  5. Method, Software and Aparatus for Segmenting a Series of 2D or 3D Images

    NARCIS (Netherlands)

    Noble, Nicholas Michael Ian; Spreeuwers, Lieuwe Jan; Breeuwer, Marcel

    2010-01-01

    The invention relates to an apparatus having means for segmenting a series of 2D or 3D images obtained by monitoring a patient's organ or other body part, wherein a first segmentation is carried out on a first image of the series of images and wherein the first segmentation is used for the subsequent

  6. A Multi-Atlas Based Method for Automated Anatomical Rat Brain MRI Segmentation and Extraction of PET Activity

    Science.gov (United States)

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Introduction: Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time consuming and operator dependent. Automated segmentation as usually performed via single atlas registration fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. Methods: High-resolution 7T 2DT2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7/12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: 1/ registering individual MRIs to the template and using a single atlas (SA), 2/ using the maximum probability atlas (MP), and 3/ registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Results: Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Conclusions: Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid reproducible VOI analyses. PMID:25330005
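
    A minimal sketch of the fusion step of the PF variant (majority voting over propagated label volumes, followed by VOI-wise PET extraction) might look like the following in NumPy; the toy volumes stand in for registered data, and the registrations themselves are assumed done upstream.

    ```python
    import numpy as np

    def fuse_labels(propagated):
        """Majority-vote fusion of label volumes propagated from several atlases.
        `propagated` is a list of integer label arrays already registered to the
        target MRI space."""
        stack = np.stack(propagated)                 # (n_atlases, *volume_shape)
        n_labels = int(stack.max()) + 1
        votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
        for lab in range(n_labels):                  # count votes per label
            votes[lab] = (stack == lab).sum(axis=0)
        return votes.argmax(axis=0)                  # most frequent label per voxel

    # toy demonstration with 3 "atlases" of a 4x4x4 volume and 3 labels
    rng = np.random.default_rng(1)
    atlases = [rng.integers(0, 3, (4, 4, 4)) for _ in range(3)]
    fused = fuse_labels(atlases)

    # VOI-wise extraction of PET activity, given a co-registered PET volume
    pet = rng.random((4, 4, 4))
    mean_uptake = {int(v): float(pet[fused == v].mean()) for v in np.unique(fused)}
    ```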

  7. Mandibular canine intrusion with the segmented arch technique: A finite element method study.

    Science.gov (United States)

    Caballero, Giselle Milagros; Carvalho Filho, Osvaldo Abadia de; Hargreaves, Bernardo Oliveira; Brito, Hélio Henrique de Araújo; Magalhães Júnior, Pedro Américo Almeida; Oliveira, Dauro Douglas

    2015-06-01

    Mandibular canines are anatomically extruded in approximately half of the patients with a deep bite. Although simultaneous orthodontic intrusion of the 6 mandibular anterior teeth is not recommended, a few studies have evaluated individual canine intrusion. Our objectives were to use the finite element method to simulate the segmented intrusion of mandibular canines with a cantilever and to evaluate the effects of different compensatory buccolingual activations. A finite element study of the right quadrant of the mandibular dental arch together with periodontal structures was modeled using SolidWorks software (Dassault Systèmes Americas, Waltham, Mass). After all bony, dental, and periodontal ligament structures from the second molar to the canine were graphically represented, brackets and molar tubes were modeled. Subsequently, a 0.021 × 0.025-in base wire was modeled with stainless steel properties and inserted into the brackets and tubes of the 4 posterior teeth to simulate an anchorage unit. Finally, a 0.017 × 0.025-in cantilever was modeled with titanium-molybdenum alloy properties and inserted into the first molar auxiliary tube. Discretization and boundary conditions of all anatomic structures tested were determined with HyperMesh software (Altair Engineering, Milwaukee, Wis), and compensatory toe-ins of 0°, 4°, 6°, and 8° were simulated with Abaqus software (Dassault Systèmes Americas). The 6° toe-in produced pure intrusion of the canine. The highest amounts of periodontal ligament stress in the anchor segment were observed around the first molar roots. This tooth showed a slight tendency for extrusion and distal crown tipping. Moreover, the different compensatory toe-ins tested did not significantly affect the other posterior teeth. The segmented mechanics simulated in this study may achieve pure mandibular canine intrusion when an adequate amount of compensatory toe-in (6°) is incorporated into the cantilever to prevent buccal and lingual crown tipping.

  8. Application of non-linear and wavelet based features for the automated identification of epileptic EEG signals.

    Science.gov (United States)

    Acharya, U Rajendra; Sree, S Vinitha; Alvin, Ang Peng Chuan; Yanti, Ratna; Suri, Jasjit S

    2012-04-01

    Epilepsy, a neurological disorder, is characterized by the recurrence of seizures. Electroencephalogram (EEG) signals, which are used to detect the presence of seizures, are non-linear and dynamic in nature. Visual inspection of the EEG signals for detection of normal, interictal, and ictal activities is a strenuous and time-consuming task due to the huge volumes of EEG segments that have to be studied. Therefore, non-linear methods are being widely used to study EEG signals for the automatic monitoring of epileptic activities. The aim of our work is to develop a Computer Aided Diagnostic (CAD) technique with minimal pre-processing steps that can classify all three classes of EEG segments, namely normal, interictal, and ictal, using a small number of highly discriminating non-linear features in simple classifiers. To evaluate the technique, segments of normal, interictal, and ictal EEG segments (100 segments in each class) were used. Non-linear features based on the Higher Order Spectra (HOS), two entropies, namely the Approximate Entropy (ApEn) and the Sample Entropy (SampEn), and the Fractal Dimension and Hurst Exponent were extracted from the segments. Significant features were selected using the ANOVA test. After evaluating the performance of six classifiers (Decision Tree, Fuzzy Sugeno Classifier, Gaussian Mixture Model, K-Nearest Neighbor, Support Vector Machine, and Radial Basis Probabilistic Neural Network) using a combination of the selected features, we found that using a set of all six selected features in the Fuzzy classifier resulted in 99.7% classification accuracy. We have demonstrated that our technique is capable of achieving high accuracy using a small number of features that accurately capture the subtle differences in the three different types of EEG (normal, interictal, and ictal) segments. The technique can be easily written as a software application and used by medical professionals without any extensive training and cost. Such software
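
    A hedged sketch of the feature-selection and classification stage described here, using scikit-learn's ANOVA F-test: the feature matrix is a random placeholder, and k-NN stands in for the classifier bank since the Fuzzy Sugeno classifier used for the best result is not available in scikit-learn.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # X would hold the six non-linear features (HOS-based features, ApEn,
    # SampEn, fractal dimension, Hurst exponent) per EEG segment; y the class
    # labels 0/1/2 for normal/interictal/ictal. Random placeholders used here.
    rng = np.random.default_rng(0)
    X = rng.random((300, 6))
    y = np.repeat([0, 1, 2], 100)

    selector = SelectKBest(f_classif, k=4).fit(X, y)   # ANOVA F-test ranking
    X_sel = selector.transform(X)

    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X_sel, y, cv=5)
    print("cross-validated accuracy: %.3f" % scores.mean())
    ```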

  9. Investigation of a novel image segmentation method dedicated to forest fire applications

    Science.gov (United States)

    Rudz, S.; Chetehouna, K.; Hafiane, A.; Laurent, H.; Séro-Guillaume, O.

    2013-07-01

    To fight forest fires effectively, it is crucial to understand fire behaviour in order to make the best use of firefighting means. To achieve this task, the development of a metrological tool is necessary for estimating both geometrical and physical parameters involved in forest fire modelling. A key requirement is to estimate fire positions accurately. In this paper an image processing tool especially dedicated to the accurate extraction of fire from an image is presented. In this work, clustering on several colour spaces is investigated, and it appears that the blue chrominance Cb from the YCbCr colour space is the most appropriate. As a consequence, a new segmentation algorithm dedicated to forest fire applications has been built, using first an optimized k-means clustering in the Cb channel and then some properties of fire pixels in the RGB colour space. Next, the performance of the proposed method is evaluated using three supervised evaluation criteria and compared to other existing segmentation algorithms in the literature. Finally a conclusion is drawn, assessing the good behaviour of the developed algorithm. This paper is dedicated to the memory of Dr Olivier Séro-Guillaume (1950-2013), CNRS Research Director.
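
    The core of the described approach (k-means clustering on the Cb channel refined by RGB fire-pixel properties) can be sketched as below; the synthetic image, the two-cluster choice and the R > G > B refinement rule are illustrative assumptions rather than the authors' exact algorithm.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    # A real application would load a photograph, e.g. img = cv2.imread("fire.jpg");
    # a random BGR image is used here so the sketch runs standalone.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)

    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)     # OpenCV orders channels Y, Cr, Cb
    cb = ycrcb[:, :, 2].reshape(-1, 1).astype(np.float64)

    # k-means on the blue chrominance: flame pixels cluster at low Cb values
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cb)
    fire_cluster = int(np.argmin(km.cluster_centers_.ravel()))
    mask = (km.labels_ == fire_cluster).reshape(img.shape[:2])

    # refine with a simple RGB-space property of fire pixels (R > G > B);
    # the exact rule used in the paper is an assumption here
    b, g, r = cv2.split(img)
    mask &= (r > g) & (g > b)
    ```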

  10. Scan Profiles Based Method for Segmentation and Extraction of Planar Objects in Mobile Laser Scanning Point Clouds

    Science.gov (United States)

    Nguyen, Hoang Long; Belton, David; Helmholz, Petra

    2016-06-01

    The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point densities of the captured features can vary: they can be sparse and heterogeneous or they can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s). The MLS point cloud data needs to be processed to get meaningful information, e.g. segmentation can be used to find meaningful features (planes, corners, etc.) that can be used as the inputs for many processing steps (e.g. registration, modelling) that are more difficult when just using the point cloud. Planar features dominate in manmade environments and they are widely used in point cloud registration and calibration processes. There are several approaches for segmentation and extraction of planar objects available; however, the proposed methods do not focus on properly segmenting MLS point clouds automatically while considering the different point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
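
    A basic building block of planarity-based segmentation is a least-squares plane fit; a minimal SVD-based version is sketched below, with the residual serving as a planarity score that a region-growing loop could threshold. The tolerance and usage are assumptions, not the paper's method.

    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through an (N, 3) array of points via SVD.
        Returns the unit normal, the centroid, and an RMS planarity residual
        that can drive a grow/accept decision during segmentation."""
        centroid = points.mean(axis=0)
        _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
        normal = vt[-1]                              # direction of least variance
        rms = s[-1] / np.sqrt(len(points))           # RMS out-of-plane distance
        return normal, centroid, rms

    # toy usage: a noisy horizontal plane patch
    rng = np.random.default_rng(2)
    pts = np.c_[rng.uniform(0, 1, (200, 2)), 0.001 * rng.standard_normal(200)]
    n, c, r = fit_plane(pts)
    print("normal:", np.round(n, 3), " rms:", round(float(r), 5))
    ```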

  11. SCAN PROFILES BASED METHOD FOR SEGMENTATION AND EXTRACTION OF PLANAR OBJECTS IN MOBILE LASER SCANNING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. L. Nguyen

    2016-06-01

    Full Text Available The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point densities of the captured features can vary: they can be sparse and heterogeneous or they can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s). The MLS point cloud data needs to be processed to get meaningful information, e.g. segmentation can be used to find meaningful features (planes, corners, etc.) that can be used as the inputs for many processing steps (e.g. registration, modelling) that are more difficult when just using the point cloud. Planar features dominate in manmade environments and they are widely used in point cloud registration and calibration processes. There are several approaches for segmentation and extraction of planar objects available; however, the proposed methods do not focus on properly segmenting MLS point clouds automatically while considering the different point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.

  12. Breast MRI segmentation for density estimation: Do different methods give the same results and how much do differences matter?

    Science.gov (United States)

    Doran, Simon J; Hipwell, John H; Denholm, Rachel; Eiben, Björn; Busana, Marta; Hawkes, David J; Leach, Martin O; Silva, Isabel Dos Santos

    2017-09-01

    To compare two methods of automatic breast segmentation with each other and with manual segmentation in a large subject cohort. To discuss the factors involved in selecting the most appropriate algorithm for automatic segmentation and, in particular, to investigate the appropriateness of overlap measures (e.g., Dice and Jaccard coefficients) as the primary determinant in algorithm selection. Two methods of breast segmentation were applied to the task of calculating MRI breast density in 200 subjects drawn from the Avon Longitudinal Study of Parents and Children, a large cohort study with an MRI component. A semiautomated, bias-corrected, fuzzy C-means (BC-FCM) method was combined with morphological operations to segment the overall breast volume from in-phase Dixon images. The method makes use of novel, problem-specific insights. The resulting segmentation mask was then applied to the corresponding Dixon water and fat images, which were combined to give Dixon MRI density values. Contemporaneously acquired T1- and T2-weighted image datasets were analyzed using a novel and fully automated algorithm involving image filtering, landmark identification, and explicit location of the pectoral muscle boundary. Within the region found, fat-water discrimination was performed using an Expectation Maximization-Markov Random Field technique, yielding a second independent estimate of MRI density. Images are presented for two individual women, demonstrating how the difficulty of the problem is highly subject-specific. Dice and Jaccard coefficients comparing the semiautomated BC-FCM method, operating on Dixon source data, with expert manual segmentation are presented. The corresponding results for the method based on T1- and T2-weighted data are slightly lower in the individual cases shown, but scatter plots and interclass correlations for the cohort as a whole show that both methods do an excellent job in segmenting and classifying breast tissue. Epidemiological results
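
    Since the discussion above centres on overlap measures, here is a minimal NumPy implementation of the Dice and Jaccard coefficients. Note they are monotonically related (J = D / (2 - D)), so on a single case they rank algorithms identically.

    ```python
    import numpy as np

    def dice(a, b):
        """Dice coefficient of two boolean masks."""
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def jaccard(a, b):
        """Jaccard index (intersection over union) of two boolean masks."""
        return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

    # toy masks: two overlapping squares
    a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
    d, j = dice(a, b), jaccard(a, b)
    print(d, j, d / (2 - d))   # the last value equals j: J = D / (2 - D)
    ```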

  13. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W; D'Souza, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve segmentation accuracy. Conversely, a correct localization of the object boundaries helps to estimate the blur kernel, and thus assists the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the Dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods, including the Graph Cut and the Mumford-Shah (MS) method, have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to a high performance for tumor segmentation in PET. This work was

  14. a Comparison of Tree Segmentation Methods Using Very High Density Airborne Laser Scanner Data

    Science.gov (United States)

    Pirotti, F.; Kobal, M.; Roussel, J. R.

    2017-09-01

    Developments in LiDAR technology are decreasing the unit cost per single point (e.g. single-photon counting). This opens the possibility of future LiDAR datasets having very dense point clouds. In this work, we process a very dense point cloud (~200 points per square meter) using three different methods for segmenting single trees and extracting tree positions and other metrics of interest in forestry, such as tree height distribution and canopy area distribution. The three algorithms are tested at decreasing densities, down to a lowest density of 5 points per square meter. Accuracy assessment is done using Kappa, recall, precision and F-score metrics, comparing results with tree positions from ground-truth measurements in six ground plots where tree positions and heights were surveyed manually. Results show that one method provides better Kappa and recall accuracy for all cases, and that different point densities, in the range used in this study, do not affect accuracy significantly. Processing time is also considered; the method with better accuracy is several times slower than the other two methods, and its processing time increases exponentially with point density. The best performer gave Kappa = 0.7. The implications of these metrics for determining the accuracy of point position detection are reported. Reasons for the different performance of the three methods are discussed and further research directions are proposed.
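
    The accuracy figures quoted here can be reproduced from matched detections with scikit-learn; the sketch below assumes the pairing of detected and surveyed tree positions has already been done, and the example labels are placeholders.

    ```python
    from sklearn.metrics import cohen_kappa_score, precision_score, recall_score, f1_score

    # y_true / y_pred: per-tree indicators after matching detected positions to
    # surveyed positions (1 = tree present/detected, 0 = absent/missed).
    y_true = [1, 1, 1, 0, 1, 0, 1, 1]
    y_pred = [1, 0, 1, 1, 1, 0, 1, 1]

    print("Kappa:    ", cohen_kappa_score(y_true, y_pred))
    print("Precision:", precision_score(y_true, y_pred))
    print("Recall:   ", recall_score(y_true, y_pred))
    print("F-score:  ", f1_score(y_true, y_pred))
    ```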

  15. Different methods of image segmentation in the process of meat marbling evaluation

    Science.gov (United States)

    Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.

    2015-07-01

    Assessment of the level of marbling in meat based on digital images is very popular, as computer vision tools are becoming more and more advanced. However, considering muscle cross-sections as the data source for marbling level evaluation, there are still a few problems to cope with. There is a need for an accurate method that would facilitate this evaluation procedure and increase its accuracy. The presented research was conducted in order to compare the effects of different image segmentation tools with regard to their usefulness in meat marbling evaluation on anatomical muscle cross-sections. This study is considered an initial trial in the presented field of research and an introduction to ultrasonic image processing and analysis.

  16. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm is used on edges to protect the edge information, and the L2 norm is used to avoid the staircase effect in no-edge areas. The blur kernel is constrained to a Gaussian model parameterized by its variance, and we assume that the variances in the X-Y and Z directions are different. The energy functional is iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method (the second best) has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  17. Magnetic field analysis of Lorentz motors using a novel segmented magnetic equivalent circuit method.

    Science.gov (United States)

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-28

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results.
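
    The final quadratic approximation of the air-gap MFD profile is straightforward to sketch; the sample positions and flux values below are placeholders, not data from the paper.

    ```python
    import numpy as np

    # x: sample positions across the air gap; B: flux density values obtained
    # from the decoupled sub-loop MECs (placeholder numbers, not the paper's)
    x = np.linspace(-10e-3, 10e-3, 9)
    B = 0.62 - 1.8e3 * x**2

    coeffs = np.polyfit(x, B, deg=2)          # quadratic approximation of the MFD profile
    x_peak = -coeffs[1] / (2 * coeffs[0])     # vertex of the fitted parabola
    print("peak flux density %.3f T at x = %.2e m" % (np.polyval(coeffs, x_peak), x_peak))
    ```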

  18. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Li

    2013-01-01

    Full Text Available A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results.

  19. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    Science.gov (United States)

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368

  20. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
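
    Because the regressor formulation makes the equations of motion linear in the inertial parameters (τ = Y(q, q̇, q̈)φ), a simple column-norm sensitivity index can be sketched as below. The matrices are random placeholders and the 0.01 cut-off is illustrative; the paper defines its own indices.

    ```python
    import numpy as np

    # Y: regressor matrix stacked over all time frames of a gait trial,
    # shape (n_frames * n_equations, n_parameters); phi: the 150 segment
    # inertial parameters. Both are placeholders here.
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((5000, 150))
    phi = rng.standard_normal(150)

    tau = Y @ phi                              # predicted moments/forces
    # relative contribution of each parameter: its regressor column norm,
    # scaled by the parameter value and normalized by the output norm
    sensitivity = np.abs(phi) * np.linalg.norm(Y, axis=0) / np.linalg.norm(tau)
    influential = sensitivity > 0.01           # illustrative cut-off
    print("influential parameters:", int(influential.sum()))
    ```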

  1. A fully automated level-set based segmentation method of thoracic and lumbar vertebral bodies in Computed Tomography images.

    Science.gov (United States)

    Ruiz-España, Silvia; Díaz-Parra, Antonio; Arana, Estanislao; Moratal, David

    2015-01-01

    The spine is a structure commonly involved in several diseases. Identification and segmentation of the vertebral structures are of relevance to many medical applications related to the spine, such as diagnosis, therapy or surgical intervention. However, the development of automatic and reliable methods is an unmet need. This work presents a fully automatic segmentation method for thoracic and lumbar vertebral bodies from Computed Tomography images. The procedure can be divided into four main stages: first, seed points are detected in the spinal canal in order to generate initial contours in the segmentation process, automating the whole procedure. Second, a processing step is performed to improve image quality. The third step is to carry out the segmentation using the Selective Binary Gaussian Filtering Regularized Level Set method and, finally, two morphological operations are applied in order to refine the segmentation result. The method was tested on clinical data from 10 trauma patients. To evaluate the result, the average value of the Dice coefficient was calculated, obtaining 90.86 ± 1.87% in the whole spine (thoracic and lumbar regions), 86.08 ± 1.73% in the thoracic region and 95.61 ± 2.25% in the lumbar region. The results are highly competitive when compared to the results obtained by previous methods, especially for the lumbar region.

  2. Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images.

    Science.gov (United States)

    Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel

    2017-01-01

    The World Health Organization estimates that 300 million people have asthma and 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and that COPD will become the third leading cause of death worldwide by 2030. Computational vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique named 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented that is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: the automatic 3D Region Growing, the level-set algorithm based on coherent propagation, and semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest, the 3D ACACM had an average F-measure of 99.22%, demonstrating its superiority and competence in segmenting lungs in CT images. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Segmentation of Gait Sequences in Sensor-Based Movement Analysis: A Comparison of Methods in Parkinson's Disease.

    Science.gov (United States)

    Haji Ghassemi, Nooshin; Hannink, Julius; Martindale, Christine F; Gaßner, Heiko; Müller, Meinard; Klucken, Jochen; Eskofier, Björn M

    2018-01-06

    Robust gait segmentation is the basis for mobile gait analysis. A range of methods have been applied and evaluated for gait segmentation of healthy and pathological gait bouts. However, a unified evaluation of gait segmentation methods in Parkinson's disease (PD) is missing. In this paper, we compare four prevalent gait segmentation methods in order to reveal their strengths and drawbacks in gait processing. We considered peak detection from event-based methods, two variations of dynamic time warping from template matching methods, and hierarchical hidden Markov models (hHMMs) from machine learning methods. To evaluate the methods, we included two supervised and instrumented gait tests that are widely used in the examination of Parkinsonian gait. In the first experiment, a sequence of strides from instructed straight walks was measured from 10 PD patients. In the second experiment, a more heterogeneous assessment paradigm was used from an additional 34 PD patients, including straight walks and turning strides as well as non-stride movements. The goal of the latter experiment was to evaluate the methods in challenging situations including turning strides and non-stride movements. Results showed no significant difference between the methods for the first scenario, in which all methods achieved an almost 100% accuracy in terms of F-score. Hence, we concluded that in the case of a predefined and homogeneous sequence of strides, all methods can be applied equally. However, in the second experiment the difference between methods became evident, with the hHMM obtaining a 96% F-score and significantly outperforming the other methods. The hHMM also proved promising in distinguishing between strides and non-stride movements, which is critical for clinical gait analysis. Our results indicate that both the instrumented test procedure and the required stride segmentation algorithm have to be selected adequately in order to support and complement classical clinical examination

  4. Segmentation of Gait Sequences in Sensor-Based Movement Analysis: A Comparison of Methods in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Nooshin Haji Ghassemi

    2018-01-01

    Full Text Available Robust gait segmentation is the basis for mobile gait analysis. A range of methods have been applied and evaluated for gait segmentation of healthy and pathological gait bouts. However, a unified evaluation of gait segmentation methods in Parkinson’s disease (PD) is missing. In this paper, we compare four prevalent gait segmentation methods in order to reveal their strengths and drawbacks in gait processing. We considered peak detection from event-based methods, two variations of dynamic time warping from template matching methods, and hierarchical hidden Markov models (hHMMs) from machine learning methods. To evaluate the methods, we included two supervised and instrumented gait tests that are widely used in the examination of Parkinsonian gait. In the first experiment, a sequence of strides from instructed straight walks was measured from 10 PD patients. In the second experiment, a more heterogeneous assessment paradigm was used from an additional 34 PD patients, including straight walks and turning strides as well as non-stride movements. The goal of the latter experiment was to evaluate the methods in challenging situations including turning strides and non-stride movements. Results showed no significant difference between the methods for the first scenario, in which all methods achieved an almost 100% accuracy in terms of F-score. Hence, we concluded that in the case of a predefined and homogeneous sequence of strides, all methods can be applied equally. However, in the second experiment the difference between methods became evident, with the hHMM obtaining a 96% F-score and significantly outperforming the other methods. The hHMM also proved promising in distinguishing between strides and non-stride movements, which is critical for clinical gait analysis. Our results indicate that both the instrumented test procedure and the required stride segmentation algorithm have to be selected adequately in order to support and complement classical

  5. New Fully Automated Method for Segmentation of Breast Lesions on Ultrasound Based on Texture Analysis.

    Science.gov (United States)

    Gómez-Flores, Wilfrido; Ruiz-Ortega, Bedert Abel

    2016-07-01

    The study described here explored a fully automatic segmentation approach based on texture analysis for breast lesions on ultrasound images. The proposed method involves two main stages: (i) In lesion region detection, the original gray-scale image is transformed into a texture domain based on log-Gabor filters. Local texture patterns are then extracted from overlapping lattices that are further classified by a linear discriminant analysis classifier to distinguish between the "normal tissue" and "breast lesion" classes. Next, an incremental method based on the average radial derivative function reveals the region with the highest probability of being a lesion. (ii) In lesion delineation, using the detected region and the pre-processed ultrasound image, an iterative thresholding procedure based on the average radial derivative function is performed to determine the final lesion contour. The experiments are carried out on a data set of 544 breast ultrasound images (including cysts, benign solid masses and malignant lesions) acquired with three distinct ultrasound machines. In terms of the area under the receiver operating characteristic curve, the one-way analysis of variance test (α=0.05) indicates that the proposed approach significantly outperforms two published fully automatic methods (p < 0.05) in segmenting breast lesions. In addition, the proposed approach can potentially be used for automated computer diagnosis purposes to assist physicians in detection and classification of breast masses. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  6. Fingerprint segmentation: an investigation of various techniques and a parameter study of a variance-based method

    CSIR Research Space (South Africa)

    Msiza, IS

    2011-09-01

    Full Text Available fingerprint segmentation approaches, this manuscript focuses on a block-wise method that is based on the gray-level variance of the image. Because the method of interest is subjected to a number of variable parameters, this document then presents a formal...

  7. Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database

    DEFF Research Database (Denmark)

    van Ginneken, Bram; Stegmann, Mikkel Bille; Loog, Marco

    2006-01-01

    classification method that employs a multi-scale filter bank of Gaussian derivatives and a k-nearest-neighbors classifier. The methods have been tested on a publicly available database of 247 chest radiographs, in which all objects have been manually segmented by two human observers. A parameter optimization...

  8. An improved parallel fuzzy connected image segmentation method based on CUDA.

    Science.gov (United States)

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method applies an iterative manner: in the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction; in the second iteration, the voxels mis-labeled because of asynchronism are updated again. Three different CT sequences of hepatic vasculature with different sizes were used in the experiments, with three different seeds. An NVIDIA Tesla C2075 was used to evaluate our improved method over these three data sets. Experimental results show that the improved algorithm achieves a faster segmentation compared to the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that it corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In future work, we will focus on automatic acquisition and automatic processing.

  9. Threshold-free method for three-dimensional segmentation of organelles

    Science.gov (United States)

    Chan, Yee-Hung M.; Marshall, Wallace F.

    2012-03-01

    An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated way of measuring the size and morphology of organelles, and can be generalized to measure cells and other objects on biologically relevant length scales.
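
    A condensed sketch of the ray-casting step described above, using SciPy interpolation to sample intensity profiles and taking each profile's maximum as a membrane crossing. The ray count, radii and the synthetic spherical-shell volume are assumptions, and the subsequent basis-function surface fit is omitted.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def surface_points(volume, center, n_rays=500, r_max=30.0, n_samples=120, seed=0):
        """Cast random rays from a chosen center and return, per ray, the point
        of maximum intensity, i.e. where the ray crosses the labeled membrane."""
        rng = np.random.default_rng(seed)
        dirs = rng.standard_normal((n_rays, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)       # unit directions
        radii = np.linspace(0.5, r_max, n_samples)
        pts = center + dirs[:, None, :] * radii[None, :, None]    # (rays, samples, 3)
        profiles = map_coordinates(volume, pts.reshape(-1, 3).T, order=1)
        idx = profiles.reshape(n_rays, n_samples).argmax(axis=1)  # peak = membrane
        return pts[np.arange(n_rays), idx]

    # synthetic test volume: a bright spherical shell of radius ~12 voxels
    zz, yy, xx = np.mgrid[:64, :64, :64]
    r = np.sqrt((zz - 32.0) ** 2 + (yy - 32.0) ** 2 + (xx - 32.0) ** 2)
    vol = np.exp(-((r - 12.0) ** 2) / 4.0)
    pts = surface_points(vol, np.array([32.0, 32.0, 32.0]))
    print("mean recovered radius:", np.linalg.norm(pts - 32.0, axis=1).mean())
    ```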

  10. An automated method for segmenting white matter lesions through multi-level morphometric feature classification with application to lupus

    Directory of Open Access Journals (Sweden)

    Mark Scully

    2010-04-01

    Full Text Available We demonstrate an automated, multi-level method to segment white matter brain lesions and apply it to lupus. The method makes use of local morphometric features based on multiple MR sequences, including T1-weighted, T2-weighted, and Fluid Attenuated Inversion Recovery. After preprocessing, including co-registration, brain extraction, bias correction, and intensity standardization, 49 features are calculated for each brain voxel based on local morphometry. At each level of segmentation a supervised classifier takes advantage of a different subset of the features to conservatively segment lesion voxels, passing on more difficult voxels to the next classifier. This multi-level approach allows for a fast lesion classification method with tunable trade-offs between sensitivity and specificity producing accuracy comparable to a human rater.

  11. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    NARCIS (Netherlands)

    Weijers, G.; Starke, A.; Haudum, A.; Thijssen, J.M.; Rehage, J.; Korte, C.L. de

    2010-01-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease.

  12. An intelligent interactive segmentation method for the joint space in osteoarthritic ankles

    NARCIS (Netherlands)

    Olabarriaga, S.D.; Smeulders, A.W.M.; Marijnissen, A.C.A.; Vincken, K.L.; Kuba, A.; Šámal, M.; Todd-Pokropek, A.

    1999-01-01

    Clinical reality is full of complex images that cannot be segmented automatically with current computer vision technology, requiring intensive user intervention. In [1] and [2] we proposed a framework for the systematic development of intelligent interactive segmentation techniques that aim at

  13. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Directory of Open Access Journals (Sweden)

    Yaser Afshar

    Full Text Available Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  14. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    Science.gov (United States)

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  15. Surface-driven registration method for the structure-informed segmentation of diffusion MR images.

    Science.gov (United States)

    Esteban, Oscar; Zosso, Dominique; Daducci, Alessandro; Bach-Cuadra, Meritxell; Ledesma-Carbayo, María J; Thiran, Jean-Philippe; Santos, Andres

    2016-10-01

    Current methods for processing diffusion MRI (dMRI) to map the connectivity of the human brain require precise delineations of anatomical structures. This requirement has been approached by either segmenting the data in native dMRI space or mapping the structural information from T1-weighted (T1w) images. The characteristic features of diffusion data in terms of signal-to-noise ratio and resolution, as well as the geometrical distortions caused by the inhomogeneity of magnetic susceptibility across tissues, hinder both solutions. Unifying the two approaches, we propose regseg, a surface-to-volume nonlinear registration method that segments homogeneous regions within multivariate images by mapping a set of nested reference surfaces. Accurate surfaces are extracted from a T1w image of the subject, using as the target image the bivariate volume comprising the fractional anisotropy (FA) and apparent diffusion coefficient (ADC) maps derived from the dMRI dataset. We first verify the accuracy of regseg in a general context using digital phantoms distorted with synthetic and random deformations. Then we establish an evaluation framework using undistorted dMRI data from the Human Connectome Project (HCP) and realistic deformations derived from the inhomogeneity fieldmap corresponding to each subject. We analyze the performance of regseg by computing the misregistration error of the surfaces estimated after being mapped with regseg onto 16 datasets from the HCP. The distribution of errors shows a 95% CI of 0.56-0.66 mm, which is below the dMRI resolution (1.25 mm, isotropic). Finally, we cross-compare the proposed tool against a nonlinear b0-to-T2w registration method, obtaining a significantly lower misregistration error with regseg. The accurate mapping of structural information into dMRI space is fundamental to increase the reliability of network building in connectivity analyses, and to improve the performance of the emerging structure-informed techniques for dMRI data

  16. Wavelet Based Demodulation of Vibration Signals Generated by Defects in Rolling Element Bearings

    National Research Council Canada - National Science Library

    Yiakopoulos, C.T; Antoniadis, I.A

    2002-01-01

    .... The envelope detection or demodulation methods have been established as the dominant analysis methods for this purpose, since they can separate the useful part of the signal from its redundant contents...

  17. An optimization-based method for prediction of lumbar spine segmental kinematics from the measurements of thorax and pelvic kinematics.

    Science.gov (United States)

    Shojaei, I; Arjmand, N; Bazrgari, B

    2015-12-01

    Given measurement difficulties, earlier modeling studies have often used some constant ratios to predict lumbar segmental kinematics from measurements of total lumbar kinematics. Recent imaging studies suggested distribution of lumbar kinematics across its vertebrae changes with trunk rotation, lumbar posture, and presence of load. An optimization-based method is presented and validated in this study to predict segmental kinematics from measured total lumbar kinematics. Specifically, a kinematics-driven biomechanical model of the spine is used in a heuristic optimization procedure to obtain a set of segmental kinematics that, when prescribed to the model, were associated with the minimum value for the sum of squared predicted muscle stresses across all the lower back muscles. Furthermore, spinal loads estimated using the predicted kinematics by the present method were compared with those estimated using constant ratios. Predicted segmental kinematics were in good agreement with those obtained by imaging with an average error of ~10%. Compared with those obtained using constant ratios, predicted spinal loads using segmental kinematics obtained here were in general smaller. In conclusion, the proposed method offers an alternative tool for improving model-based estimates of spinal loads where image-based measurement of lumbar kinematics is not feasible. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Wavelet-based artifact identification and separation technique for EEG signals during galvanic vestibular stimulation.

    Science.gov (United States)

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal to artifact ratio of -1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters.
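
    A simplified, single-channel version of the band-wise regression idea might look like the following with PyWavelets: decompose the EEG channel and the GVS reference, estimate a per-band gain by least squares, subtract, and reconstruct. The wavelet, level and synthetic signals are assumptions; the estimator described in the record is more elaborate.

    ```python
    import numpy as np
    import pywt

    def remove_gvs_artifact(eeg, gvs, wavelet="db4", level=5):
        """Band-wise regression of the stimulation current out of one EEG
        channel: decompose both signals, estimate the current's per-band gain
        by least squares, subtract its contribution, and reconstruct."""
        ce = pywt.wavedec(eeg, wavelet, level=level)
        cg = pywt.wavedec(gvs, wavelet, level=level)
        cleaned = []
        for band_e, band_g in zip(ce, cg):
            beta = np.dot(band_g, band_e) / np.dot(band_g, band_g)  # per-band gain
            cleaned.append(band_e - beta * band_g)                  # remove artifact share
        return pywt.waverec(cleaned, wavelet)[: len(eeg)]

    # synthetic demonstration: 10 Hz brain rhythm plus a 1 Hz GVS leak
    t = np.arange(0, 4, 1 / 250.0)
    gvs = np.sin(2 * np.pi * 1.0 * t)
    eeg = 0.3 * np.sin(2 * np.pi * 10.0 * t) + 0.8 * gvs
    clean = remove_gvs_artifact(eeg, gvs)
    ```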

  19. Influence of inverse dynamics methods on the calculation of inter-segmental moments in vertical jumping and weightlifting

    Directory of Open Access Journals (Sweden)

    Cleather Daniel J

    2010-11-01

    Full Text Available Abstract Background: A vast number of biomechanical studies have employed inverse dynamics methods to calculate inter-segmental moments during movement. Although all inverse dynamics methods are rooted in classical mechanics and thus theoretically the same, there exist a number of distinct computational methods. Recent research has demonstrated a key influence of the dynamics computation of the inverse dynamics method on the calculated moments, despite the theoretical equivalence of the methods. The purpose of this study was therefore to explore the influence of the choice of inverse dynamics on the calculation of inter-segmental moments. Methods: An inverse dynamics analysis was performed to analyse vertical jumping and weightlifting movements using two distinct methods. The first method was the traditional inverse dynamics approach, in this study characterized as the 3 step method, where inter-segmental moments were calculated in the local coordinate system of each segment, thus requiring multiple coordinate system transformations. The second method (the 1 step method) was the recently proposed approach based on wrench notation that allows all calculations to be performed in the global coordinate system. In order to best compare the effect of the inverse dynamics computation a number of the key assumptions and methods were harmonized; in particular unit quaternions were used to parameterize rotation in both methods in order to standardize the kinematics. Results: Mean peak inter-segmental moments calculated by the two methods were found to agree to 2 decimal places in all cases and were not significantly different (p > 0.05). Equally the normalized dispersions of the two methods were small. Conclusions: In contrast to previously documented research the difference between the two methods was found to be negligible. This study demonstrates that the 1 and 3 step methods are computationally equivalent and can thus be used interchangeably in

  20. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    Science.gov (United States)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An Omega-3 chicken egg is a chicken egg produced through food engineering technology: it is laid by hens fed a diet high in omega-3 fatty acids, so its omega-3 content is about fifteen times that of a Leghorn egg. Visually, its shell has the same shape and colour as a Leghorn's. The two can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but that approach is neither effective nor efficient. Observing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built with the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken in an egg candling box; we used 60 omega-3 chicken and Leghorn eggs as samples, and acquired the egg images with an Android smartphone. We then applied several image processing steps: GrabCut, conversion of the RGB image to 8-bit grayscale, median filtering, P-tile segmentation, and morphological operations. The next step was feature extraction, computing the mean, variance, skewness, and kurtosis of each image. Finally, the chicken egg images were classified using these digital image measurements. The results showed that omega-3 chicken eggs and Leghorn eggs have different feature values, and the system provides an accuracy of around 91%.
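
    The two numerical steps named above (P-tile segmentation and the moment features) are compact enough to sketch. The snippet below is a generic NumPy/SciPy illustration on an already-grayscaled, median-filtered image; the foreground fraction is a placeholder, not the paper's tuned value.

```python
# Illustrative P-tile threshold plus first-to-fourth moment features.
import numpy as np
from scipy.stats import skew, kurtosis

def p_tile_threshold(gray, foreground_fraction=0.3):
    # Choose the threshold so a fixed fraction of pixels becomes foreground.
    t = np.percentile(gray, 100 * (1.0 - foreground_fraction))
    return (gray >= t).astype(np.uint8)

def moment_features(gray, mask):
    vals = gray[mask > 0].astype(float)
    return {
        "mean": vals.mean(),
        "variance": vals.var(),
        "skewness": skew(vals),
        "kurtosis": kurtosis(vals),
    }
```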

  1. Influence of inverse dynamics methods on the calculation of inter-segmental moments in vertical jumping and weightlifting.

    Science.gov (United States)

    Cleather, Daniel J; Bull, Anthony M J

    2010-11-17

    A vast number of biomechanical studies have employed inverse dynamics methods to calculate inter-segmental moments during movement. Although all inverse dynamics methods are rooted in classical mechanics and thus theoretically the same, there exist a number of distinct computational methods. Recent research has demonstrated a key influence of the dynamics computation of the inverse dynamics method on the calculated moments, despite the theoretical equivalence of the methods. The purpose of this study was therefore to explore the influence of the choice of inverse dynamics on the calculation of inter-segmental moments. An inverse dynamics analysis was performed to analyse vertical jumping and weightlifting movements using two distinct methods. The first method was the traditional inverse dynamics approach, in this study characterized as the 3 step method, where inter-segmental moments were calculated in the local coordinate system of each segment, thus requiring multiple coordinate system transformations. The second method (the 1 step method) was the recently proposed approach based on wrench notation that allows all calculations to be performed in the global coordinate system. In order to best compare the effect of the inverse dynamics computation a number of the key assumptions and methods were harmonized, in particular unit quaternions were used to parameterize rotation in both methods in order to standardize the kinematics. Mean peak inter-segmental moments calculated by the two methods were found to agree to 2 decimal places in all cases and were not significantly different (p > 0.05). Equally the normalized dispersions of the two methods were small. In contrast to previously documented research the difference between the two methods was found to be negligible. This study demonstrates that the 1 and 3 step method are computationally equivalent and can thus be used interchangeably in musculoskeletal modelling technology. It is important that future work
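
    To make the computational step concrete, here is a textbook single-segment Newton-Euler calculation in the spirit of the classical 3 step (segment-by-segment) approach, reduced to 2D for brevity; all symbols and the function name are illustrative, not taken from the paper.

```python
# Single-segment Newton-Euler inverse dynamics step in 2D (textbook form).
import numpy as np

def segment_proximal_load(m, I, a_com, alpha, g, r_prox, r_dist, F_dist, M_dist):
    """Force and moment at the proximal joint of one segment.

    a_com  : linear acceleration of the segment centre of mass (2-vector)
    alpha  : angular acceleration (scalar in 2D)
    g      : gravitational acceleration vector, e.g. np.array([0.0, -9.81])
    r_prox, r_dist : vectors from the segment COM to the joints
    F_dist, M_dist : reaction force/moment applied at the distal joint
    """
    # Newton: sum of forces = m * a_com
    F_prox = m * a_com - F_dist - m * g
    # 2D cross product r x F reduces to a scalar moment about the COM.
    cross = lambda r, F: r[0] * F[1] - r[1] * F[0]
    # Euler: sum of moments about the COM = I * alpha
    M_prox = I * alpha - M_dist - cross(r_prox, F_prox) - cross(r_dist, F_dist)
    return F_prox, M_prox
```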

  2. Research on simulation calculation method of biomechanical characteristics of C1-3 motion segment damage mechanism

    Directory of Open Access Journals (Sweden)

    HUANG Ju-ying

    2013-11-01

    Full Text Available Objective To develop a finite element model (FEM) of the cervical spinal C1-3 motion segment, to perform biomechanical finite element analysis (FEA) on the C1-3 motion segment, and thus to simulate the biomechanical characteristics of the C1-3 motion segment under distraction violence, compression violence, hyperextension violence and hyperflexion violence. Methods According to the CT radiological data of a healthy adult, the vertebrae and intervertebral discs of the cervical spinal C1-3 motion segment were reconstructed with Mimics 10.01 software and Geomagic 10.0 software, respectively. The FEM of the C1-3 motion segment was then completed by attaching the corresponding material properties of the cervical spine in Ansys software. The biomechanical characteristics of the cervical spinal C1-3 motion segment model were simulated under the 4 loadings of distraction violence, compression violence, hyperextension violence and hyperflexion violence by the finite element method. Results Under longitudinal stretch loading, the stress was relatively concentrated in the anterior arch of the atlas, the atlantoaxial joint, and the C3 lamina and spinous process. Under longitudinal compressive loads, the maximum stress of the upper cervical spine was located in the anterior arch of the atlas. Under hyperextension moment loading, the stress was larger in the massa lateralis atlantis, the lateral and posterior arch junction of the atlas, the posterior arch nodules of the atlas, the superior articular surface of the axis and the C2 isthmus. Under hyperflexion moment loading, the stress was relatively concentrated in the odontoid process of the axis, the posterior arch of the atlas, the posterior arch nodules of the atlas, the C2 isthmus and the C2 inferior articular process. Conclusion Finite element biomechanical testing of the C1-3 motion segment can predict the biomechanical mechanism of upper cervical spine injury.

  3. Wavelet Based Demodulation of Vibration Signals Generated by Defects in Rolling Element Bearings

    OpenAIRE

    Yiakopoulos, C.T.; Antoniadis, I.A.

    2002-01-01

    Vibration signals resulting from roller bearing defects present a rich content of physical information, the appropriate analysis of which can lead to the clear identification of the nature of the fault. The envelope detection or demodulation methods have been established as the dominant analysis methods for this purpose, since they can separate the useful part of the signal from its redundant contents. The paper proposes a new effective demodulation method, based on the wavelet transform. Th...

  4. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    Science.gov (United States)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time-invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare the performance of different wavelets, such as Daubechies, biorthogonal, and reverse biorthogonal wavelets of different degrees and orders, for pre-processing of the images.
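
    A generic wavelet shrinkage routine of the kind described is sketched below with PyWavelets; the wavelet name and the universal-threshold rule are illustrative choices standing in for the paper's denoising step, not its exact parameters.

```python
# Generic 2D wavelet soft-thresholding denoiser (illustrative settings).
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="bior3.5", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise level estimated from the finest diagonal detail band,
    # then the universal threshold sigma * sqrt(2 log N).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    out = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)
```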

  5. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    Science.gov (United States)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

    The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to the artifacts caused by slide preparation and nuclei heterogeneity, such as poor contrast, inconsistent stain color, cell variation, and overlapping cells. In this paper, we propose a watershed-based method that is capable of segmenting the nuclei of a variety of cells in cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and equalizing the intensity using histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding, and the undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap) stained pleural fluid images and achieved an accuracy of 92%. The method is relatively simple, and the results are very promising.
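
    The pipeline described above maps closely onto standard library calls; a condensed scikit-image/SciPy sketch follows. It is a generic stand-in, not the authors' implementation, and the peak spacing and object-size parameters are placeholders.

```python
# Grayscale -> equalize -> Otsu -> morphology -> distance-transform watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage import color, exposure, feature, filters, morphology, segmentation

def segment_nuclei(rgb):
    gray = exposure.equalize_hist(color.rgb2gray(rgb))
    # Nuclei assumed brighter than background after equalization;
    # invert the comparison if the stain polarity differs.
    binary = gray > filters.threshold_otsu(gray)
    binary = morphology.remove_small_objects(binary, min_size=64)
    binary = morphology.binary_opening(binary, morphology.disk(2))
    dist = ndi.distance_transform_edt(binary)
    # Peaks of the distance map seed the watershed, splitting touching nuclei.
    coords = feature.peak_local_max(dist, min_distance=7, labels=binary)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return segmentation.watershed(-dist, markers, mask=binary)
```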

  6. Proposing Wavelet-Based Low-Pass Filter and Input Filter to Improve Transient Response of Grid-Connected Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Bijan Rahmani

    2016-08-01

    Full Text Available Available photovoltaic (PV) systems show a prolonged transient response when integrated into the power grid via active filters. On one hand, the conventional low-pass filter employed within the integrated PV system works with a large delay, particularly in the presence of the system's low-order harmonics. On the other hand, the switching of the DC-DC (direct current) converters within PV units also prolongs the transient response of an integrated system, injecting harmonics and distortion through the PV-end current. This paper initially develops a wavelet-based low-pass filter to improve the transient response of PV systems interconnected to grid lines. Further, a damped input filter is proposed within the PV system to address the converter-switching issue. Finally, Matlab/Simulink simulations validate the effectiveness of the proposed wavelet-based low-pass filter and damped input filter within an integrated PV system.
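
    In its simplest form, a wavelet-based low-pass filter decomposes the signal, zeroes the detail (high-frequency) coefficients, and reconstructs. The sketch below assumes PyWavelets; the wavelet and decomposition depth are placeholders, not the paper's design values.

```python
# Minimal wavelet low-pass: keep only the approximation band.
import numpy as np
import pywt

def wavelet_lowpass(signal, wavelet="db8", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```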

  7. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from The Cancer Imaging Archive. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used by two independent observers to segment contrast enhancement, necrosis and edema regions. A total of 21 imaging features, consisting of area and edge groups, were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, for the deformable model, and from 0.799 to 0.976 and 3.5% to 26.6%, respectively, for the grow cut method. Coefficients of variation for features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% and 25.7%, respectively, for the proportion of necrosis; and 2.1% and 4.4%, respectively, for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.

  8. Hippocampal subfields at ultra high field MRI: An overview of segmentation and measurement methods

    Science.gov (United States)

    Giuliano, Alessia; Donatelli, Graziella; Cosottini, Mirco; Tosetti, Michela; Fantacci, Maria Evelina

    2017-01-01

    ABSTRACT The hippocampus is one of the most interesting and studied brain regions because of its involvement in memory functions and its vulnerability in pathological conditions, such as neurodegenerative processes. In recent years, the increasing availability of Magnetic Resonance Imaging (MRI) scanners that operate at ultra-high field (UHF), that is, with static magnetic field strength ≥7T, has opened new research perspectives. Compared to conventional high-field scanners, these systems can provide new contrasts, increased signal-to-noise ratio and higher spatial resolution, and thus may improve the visualization of very small structures of the brain, such as the hippocampal subfields. Studying the morphometry of the hippocampus is crucial in neuroimaging research because changes in volume and thickness of hippocampal subregions may be relevant in the early assessment of pathological cognitive decline and Alzheimer's Disease (AD). The present review provides an overview of the manual, semi-automated and fully automated methods that allow the assessment of hippocampal subfield morphometry at UHF MRI, focusing on the different hippocampal segmentations produced. © 2017 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:28188659

  9. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use the variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme proceeds in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy term is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.

  10. Wavelet Based Demodulation of Vibration Signals Generated by Defects in Rolling Element Bearings

    Directory of Open Access Journals (Sweden)

    C.T. Yiakopoulos

    2002-01-01

    Full Text Available Vibration signals resulting from roller bearing defects present a rich content of physical information, the appropriate analysis of which can lead to the clear identification of the nature of the fault. The envelope detection or demodulation methods have been established as the dominant analysis methods for this purpose, since they can separate the useful part of the signal from its redundant contents. The paper proposes a new effective demodulation method based on the wavelet transform. The method fully exploits the underlying physical concepts of the modulation mechanism present in the vibration response of faulty bearings, using the excellent time-frequency localization properties of wavelet analysis. The choice of the specific wavelet family has only a marginal effect on the overall result, while the necessary number of wavelet levels is quite limited. Experimental results and industrial measurements for three different types of bearing faults confirm the validity of the overall approach.
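
    For orientation, a common wavelet-assisted envelope analysis isolates one detail band, takes its Hilbert envelope, and inspects the envelope spectrum for the bearing fault frequency. The sketch below (PyWavelets/SciPy) is a generic version of that idea; the band choice is illustrative, not the paper's selection rule.

```python
# Wavelet band selection + Hilbert envelope + envelope spectrum.
import numpy as np
import pywt
from scipy.signal import hilbert

def envelope_spectrum(x, fs, wavelet="db8", level=3):
    # Reconstruct the detail band at the deepest level only.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1] = coeffs[1]            # detail band at the deepest level
    band = pywt.waverec(kept, wavelet)[: len(x)]
    env = np.abs(hilbert(band))    # demodulated envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec             # fault frequency shows as a spectral peak
```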

  11. Geomorphic Segmentation, Hydraulic Geometry, and Hydraulic Microhabitats of the Niobrara River, Nebraska - Methods and Initial Results

    Science.gov (United States)

    Alexander, Jason S.; Zelt, Ronald B.; Schaepe, Nathaniel J.

    2009-01-01

    The Niobrara River of Nebraska is a geologically, ecologically, and economically significant resource. The State of Nebraska has recognized the need to better manage the surface- and ground-water resources of the Niobrara River so they are sustainable in the long term. In cooperation with the Nebraska Game and Parks Commission, the U.S. Geological Survey is investigating the hydrogeomorphic settings and hydraulic geometry of the Niobrara River to assist in characterizing the types of broad-scale physical habitat attributes that may be of importance to the ecological resources of the river system. This report includes an inventory of surface-water and ground-water hydrology data, surface water-quality data, a longitudinal geomorphic segmentation and characterization of the main channel and its valley, and hydraulic geometry relations for the 330-mile section of the Niobrara River from Dunlap Diversion Dam in western Nebraska to the Missouri River confluence. Hydraulic microhabitats also were analyzed using available data from discharge measurements to demonstrate the potential application of these data and analysis methods. The main channel of the Niobrara was partitioned into three distinct fluvial geomorphic provinces: an upper province characterized by open valleys and a sinuous, equiwidth channel; a central province characterized by mixed valley and channel settings, including several entrenched canyon reaches; and a lower province where the valley is wide, yet restricted, but the river also is wide and persistently braided. Within the three fluvial geomorphic provinces, 36 geomorphic segments were identified using a customized, process-orientated classification scheme, which described the basic physical characteristics of the Niobrara River and its valley. Analysis of the longitudinal slope characteristics indicated that the Niobrara River longitudinal profile may be largely bedrock-controlled, with slope inflections co-located at changes in bedrock type at

  12. Spatial dependence of predictions from image segmentation: a method to determine appropriate scales for producing land-management information

    Science.gov (United States)

    A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...

  13. Health Assessment of Cooling Fan Bearings Using Wavelet-Based Filtering

    Directory of Open Access Journals (Sweden)

    Qiang Miao

    2012-12-01

    Full Text Available As commonly used forced-convection air cooling devices in electronics, cooling fans are crucial for guaranteeing the reliability of electronic systems. In a cooling fan assembly, fan bearing failure is a major failure mode that causes excessive vibration, noise, reduction in rotation speed, locked rotor, failure to start, and other problems; therefore, it is necessary to conduct research on the health assessment of cooling fan bearings. This paper presents a vibration-based fan bearing health evaluation method using comblet filtering and an exponentially weighted moving average. A new health condition indicator (HCI) for fan bearing degradation assessment is proposed. In order to collect vibration data for validation of the proposed method, a cooling fan accelerated life test was conducted to simulate the lubricant starvation of fan bearings. A comparison between the proposed method and methods in previous studies (i.e., root mean square, kurtosis, and fault growth parameter) was carried out to assess the performance of the HCI. The analysis results suggest that the HCI can identify incipient fan bearing failures and describe the bearing degradation process. Overall, the work presented in this paper provides a promising method for fan bearing health evaluation and prognosis.
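
    The exponentially weighted moving average used in such health indicators is a one-line recursion; a toy NumPy version is shown below, applied to any per-record condition feature (for example, band-limited vibration energy). The smoothing factor is a placeholder.

```python
# EWMA smoothing of a sequence of condition-feature values.
import numpy as np

def ewma(values, alpha=0.1):
    out = np.empty(len(values), dtype=float)
    out[0] = values[0]
    for i in range(1, len(out)):
        # New value weighted by alpha; history weighted by (1 - alpha).
        out[i] = alpha * values[i] + (1.0 - alpha) * out[i - 1]
    return out
```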

  14. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase); we can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by a higher-order statistics (HOS) fourth-order cumulant matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.
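
    One common three-parameter, constant-phase wavelet form is a Gaussian envelope of scale sigma modulating a cosine of dominant frequency f and phase phi; this is an assumed parameterization for illustration, not necessarily the exact formula of the paper.

```python
# Assumed constant-phase wavelet model with scale, frequency, phase.
import numpy as np

def constant_phase_wavelet(t, sigma, f, phi):
    envelope = np.exp(-t**2 / (2.0 * sigma**2))   # scale controls duration
    return envelope * np.cos(2.0 * np.pi * f * t + phi)
```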

  15. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image. Such interesting structures often correspond to the discontinuities (edges). In this paper, we present a new algorithm for image noise reduction based on the combination of a complex diffusion process and wavelet thresholding. In the existing wavelet thresholding methods, the noise reduction is limited, because the approximate coefficients containing the main information of the image are kept unchanged. Since noise affects both the approximate and detail coefficients, the proposed algorithm for noise reduction applies the complex diffusion process on the approximation band in order to alleviate the deficiency of the existing wavelet thresholding methods. The algorithm has been examined...

  16. A Blind High-Capacity Wavelet-Based Steganography Technique for Hiding Images into other Images

    Directory of Open Access Journals (Sweden)

    HAMAD, S.

    2014-05-01

    Full Text Available The flourishing field of steganography is providing effective techniques to hide data into different types of digital media. In this paper, a novel technique is proposed to hide large amounts of image data into true-colored images. The proposed method employs wavelet transforms to decompose images in a way similar to the Human Visual System (HVS) for more secure and effective data hiding. The designed model can blindly extract the embedded message without the need to refer to the original cover image. Experimental results showed that the proposed method outperformed all of the existing techniques not only in imperceptibility but also in terms of capacity. In fact, the proposed technique showed an outstanding performance on hiding a secret image whose size equals 100% of the cover image while maintaining excellent visual quality of the resultant stego-images.
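
    To illustrate how blind extraction from wavelet coefficients can work in principle, the sketch below uses quantization index modulation (QIM) on one detail sub-band; this is a generic stand-in, not the paper's algorithm, and the quantization step is illustrative. Blind recovery works because the decoder only needs the quantization step, not the cover image.

```python
# Simplified blind embedding/extraction via QIM on DWT detail coefficients.
import numpy as np
import pywt

DELTA = 8.0  # quantization step (illustrative)

def embed_bits(cover, bits, wavelet="haar"):
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    flat = cD.ravel()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / DELTA) * DELTA
        flat[i] = q + (DELTA / 4.0 if b else -DELTA / 4.0)
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), wavelet)

def extract_bits(stego, n_bits, wavelet="haar"):
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), wavelet)
    flat = cD.ravel()[:n_bits]
    # Blind decision: the offset from the nearest quantizer point encodes the bit.
    return (np.mod(flat, DELTA) < DELTA / 2.0).astype(int)
```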

  17. A wavelet based algorithm for the identification of oscillatory event-related potential components.

    Science.gov (United States)

    Aniyan, Arun Kumar; Philip, Ninan Sajeeth; Samar, Vincent J; Desjardins, James A; Segalowitz, Sidney J

    2014-08-15

    Event related potentials (ERPs) are very feeble alterations in the ongoing electroencephalogram (EEG), and their detection is a challenging problem. Based on unique time-based parameters derived from wavelet coefficients and the asymmetry property of wavelets, a novel algorithm to separate ERP components in single-trial EEG data is described. Though illustrated as a specific application to N170 ERP detection, the algorithm is a generalized approach that can be easily adapted to isolate different kinds of ERP components. The algorithm detected the N170 ERP component with a high level of accuracy. We demonstrate that the asymmetry method is more accurate than the matching wavelet algorithm and the t-CWT method by 48.67 and 8.03 percent, respectively. This paper provides an off-line demonstration of the algorithm and considers issues related to its extension to real-time applications. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    Energy Technology Data Exchange (ETDEWEB)

    Mirkin, B. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory.

  19. Airway Segmentation and Centerline Extraction from Thoracic CT – Comparison of a New Method to State of the Art Commercialized Methods

    Science.gov (United States)

    Reynisson, Pall Jens; Scali, Marta; Smistad, Erik; Hofstad, Erlend Fagertun; Leira, Håkon Olav; Lindseth, Frank; Nagelhus Hernes, Toril Anita; Amundsen, Tore; Sorger, Hanne; Langø, Thomas

    2015-01-01

    Introduction Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. Visualization during navigated bronchoscopy, as well as the segmentation time and methods, differs. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtain airway centerlines and surfaces. Method CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland), and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against reference segmentations. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centerlines in all datasets. We compared the different procedures by user interactions, such as the number of clicks needed to process the data, and by quantitative measures concerning the quality of the segmentation and centerlines, such as total length of the branches, number of branches, number of generations, and volume of the 3D model. Results The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM is the software which demands the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results. However, the TSF method provided results fastest regarding the number of clicks. The MPM was able to find the highest number of branches and generations. On the other hand, the TSF is fully automatic and it provides the user with both segmentation of the

  20. Intelligent Sensing in Inverter-fed Induction Motors: Wavelet-based Symbolic Dynamic Analysis

    Directory of Open Access Journals (Sweden)

    Rohan SAMSI

    2008-07-01

    Full Text Available Wavelet transform allows adaptive usage of windows to extract pertinent information from sensor signals, and symbolic dynamic analysis provides coarse graining of the underlying information for enhanced computational speed and robustness of sensor-data-driven decision-making. These two concepts are synergistically combined for real-time intelligent sensing of faults whose signatures are small compared to coefficients of dominant frequencies in the signal. Feasibility of the proposed intelligent sensing method is demonstrated on an experimental apparatus for early detection of rotor bar breakage in an inverter-fed induction motor.

  1. NI-50: SEGMENTATION OF METASTATIC LESIONS IN LARGE-SCALE REGISTRIES: COMPARISON OF EXPERT MANUAL SEGMENTATION VS. SEMI-AUTOMATED METHODS

    OpenAIRE

    LaMontagne, Pamela; Milchencko, Mikhail; Vélez, Maria; Abraham, Christopher; Marcus, Daniel; Robinson, Cliff; Fouke, Sarah

    2014-01-01

    To better understand the outcomes after stereotactic radiosurgery (SRS) for brain metastases, we have created a registry that archives MRI studies alongside clinical data in this population. To consider outcomes quantitatively, each metastatic lesion must be segmented to define a 3D volume. In large populations, manually segmenting each lesion slice by slice is time consuming (and expensive when it requires an experienced Radiation Oncologist or Neurosurgeon). We sought ...

  2. Wavelet-based feature extraction for classification of epileptic seizure EEG signal.

    Science.gov (United States)

    Sharmila, A; Mahalakshmi, P

    2017-11-01

    Electroencephalogram (EEG) signal-processing techniques play a prominent role in the detection and prediction of epileptic seizures. The detection of epileptic activity is cumbersome and needs a detailed analysis of the EEG data; therefore, an efficient method for classifying EEG data is required. In this work, a constructive pattern recognition strategy for analysing EEG data as normal or epileptic seizure has been proposed. With this strategy, the signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT). Principal component analysis (PCA) and linear discriminant analysis (LDA) were applied to reduce the dimensionality of the EEG data. These reduced features were used as input to Naïve Bayes and k-Nearest Neighbour classifiers to classify each signal as normal or epileptic seizure. The performance of each classifier was evaluated in terms of accuracy, sensitivity and specificity. The experimental results show that PCA with the Naïve Bayes classifier provides 98.6% accuracy, and LDA with the Naïve Bayes classifier attains an improved result of 99.8% accuracy. The results also show that PCA and LDA with k-NN achieve 98.5% and 100% accuracy, respectively. This evaluation is used to propose a reliable, practical epilepsy detection method to enhance the patient's care and quality of life.
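
    A compressed sketch of this kind of pipeline (DWT sub-band statistics, PCA reduction, Naive Bayes classification) is given below using PyWavelets and scikit-learn; the segment length, wavelet, sub-band statistics, and component count are illustrative, not the study's settings.

```python
# DWT features -> PCA -> Gaussian Naive Bayes (illustrative pipeline).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def dwt_features(segment, wavelet="db4", level=4):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:  # one small statistic set per sub-band
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return feats

# X_raw: (n_segments, n_samples) EEG segments; y: 0 = normal, 1 = seizure
def train(X_raw, y):
    X = np.array([dwt_features(s) for s in X_raw])
    model = make_pipeline(PCA(n_components=8), GaussianNB())
    return model.fit(X, y)
```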

  3. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D-DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.

  4. Evaluation of PET texture features with heterogeneous phantoms: complementarity and effect of motion and segmentation method

    Science.gov (United States)

    Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.

    2017-01-01

    A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. Regarding the variability of PET texture features (TF), the impact of respiratory motion has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability of TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC above 0.75), despite a significant volume underestimation. Independence of lesion movement (LC in 100% of the combined pairs of movements, above 0.9) was observed for TF derived from the PET/CT images. Despite inaccurate volume delineation, TF derived from 40% and COA contours could be reliable for their prognostic use. The TF that exhibited simultaneous added value and independence of lesion movement were ENG and ENT computed from the G-image. Their use is therefore recommended for heterogeneity quantification of lesions affected by respiratory motion.

  5. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2014-02-01

    Full Text Available In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but can also provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) metrics to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.

  6. Wavelet-based EEG processing for computer-aided seizure detection and epilepsy diagnosis.

    Science.gov (United States)

    Faust, Oliver; Acharya, U Rajendra; Adeli, Hojjat; Adeli, Amir

    2015-03-01

    Electroencephalography (EEG) is an important tool for studying human brain activity, and epileptic processes in particular. EEG signals provide important information about epileptogenic networks that must be analyzed and understood before the initiation of therapeutic procedures. Very small variations in EEG signals depict a definite type of brain abnormality. The challenge is to design and develop signal processing algorithms which extract this subtle information and use it for the diagnosis, monitoring and treatment of patients with epilepsy. This paper presents a review of wavelet techniques for computer-aided seizure detection and epilepsy diagnosis, with an emphasis on research reported during the past decade. A multiparadigm approach based on the integration of wavelets, nonlinear dynamics and chaos theory, and neural networks, advanced by Adeli and associates, is the most effective method for automated EEG-based diagnosis of epilepsy. Copyright © 2015 British Epilepsy Association. All rights reserved.

  7. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.

    Science.gov (United States)

    Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. As the resultant EEG data matrix had high dimensionality, dimension reduction was performed using a rank-based feature selection method according to the receiver operating characteristic (ROC) criterion. The most significant features were identified and further utilized during the training and testing of a classification model, the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as STFT and EMD, the WT analysis showed the highest classification accuracy: accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
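
    The rank-by-ROC feature selection plus cross-validated logistic regression described above can be sketched compactly with scikit-learn; the number of retained features and solver settings below are illustrative, not the study's exact configuration.

```python
# Rank features by single-feature ROC AUC, keep the top ones, then 10-CV LR.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

def select_and_score(X, y, n_keep=20):
    # Single-feature AUC; max(auc, 1 - auc) also ranks anti-correlated features.
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    top = np.argsort(np.maximum(aucs, 1.0 - aucs))[::-1][:n_keep]
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X[:, top], y, cv=10)
    return top, scores.mean()
```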

  8. Online Semiparametric Identification of Lithium-Ion Batteries Using the Wavelet-Based Partially Linear Battery Model

    Directory of Open Access Journals (Sweden)

    Caiping Zhang

    2013-05-01

    Full Text Available Battery model identification is very important for reliable battery management as well as for the battery system design process. The common problem in identifying battery models is how to determine the most appropriate mathematical model structure and parameterized coefficients based on the measured terminal voltage and current. This paper proposes a novel semiparametric approach using the wavelet-based partially linear battery model (PLBM) and a recursive penalized wavelet estimator for online battery model identification. Three main contributions are presented. First, the semiparametric PLBM is proposed to simulate the battery dynamics. Compared with conventional electrical models of a battery, the proposed PLBM is equipped with a semiparametric partially linear structure, which includes a parametric part (involving the linear equivalent-circuit parameters) and a nonparametric part (involving the open-circuit voltage, OCV). Thus, even with little prior knowledge about the OCV, the PLBM can be identified using a semiparametric identification framework. Second, we model the nonparametric part of the PLBM using a truncated wavelet multiresolution analysis (MRA) expansion, which leads to a parsimonious model structure that is highly desirable for model identification; using this model, the PLBM can be represented in a linear-in-parameter manner. Finally, to exploit the sparsity of the wavelet MRA representation and allow for online implementation, a penalized wavelet estimator that uses a modified online cyclic coordinate descent algorithm is proposed to identify the PLBM in a recursive fashion. The simulation and experimental results demonstrate that the proposed PLBM with the corresponding identification algorithm can accurately simulate the dynamic behavior of a lithium-ion battery in Federal Urban Driving Schedule tests.

  9. Alzheimer's Disease Diagnostic Performance of a Multi-Atlas Hippocampal Segmentation Method using the Harmonized Hippocampal Protocol

    DEFF Research Database (Denmark)

    Anker, Cecilie Benedicte; Sørensen, Lauge; Pai, Akshay

    registration, patch-based segmentation method (MRP) using 40 HHP segmentations in the atlas (12 NC, 11 MCI, 17 AD) was applied to segment the hippocampi. Static and longitudinal FS (v5.1.0, default parameters) were also applied to segment the hippocampi. Atrophy rate, calculated as percent volume change from baseline to month 12, was estimated for the three methods, and diagnostic performance was evaluated using the area under the receiver operating characteristic curve (AUC) of pairwise diagnostic group comparisons. RESULTS Mean (SD) atrophy rates were as follows (MRP / static FS / longitudinal FS): NC -0.86 (2.46) / -1.39 (5.41) / -1.63 (2.54), MCI -2.38 (3.28) / -3.69 (5.48) / -3.25 (3.53), AD -4.23 (3.07) / -4.29 (5.32) / -4.83 (3.74). Diagnostic performances were as follows (AUC; MRP / static FS / longitudinal FS): NC vs. MCI 0.65 / 0.67 / 0.64, NC vs. AD 0.80 / 0.69 / 0.76, MCI vs. AD 0.66 / 0...

  10. Automated extraction and assessment of functional features of areal measured microstructures using a segmentation-based evaluation method

    Science.gov (United States)

    Hartmann, Wito; Loderer, Andreas

    2014-10-01

    In addition to currently available surface parameters according to ISO 4287:2010 and ISO 25178-2:2012 (which are defined particularly for stochastic surfaces), a universal evaluation procedure is provided for geometrical, well-defined, microstructured surfaces. Since several million features (such as diameters, depths, etc.) are present on microstructured surfaces, segmentation techniques are used to automate the feature-based dimensional evaluation. By applying an additional extended 3D evaluation after the segmentation and classification procedure, the accuracy of the evaluation is improved compared to the direct evaluation of segments, and additional functional parameters can be derived. Advantages of the extended segmentation-based evaluation method include not only the ability to evaluate the manufacturing process statistically (e.g. by capability indices, according to ISO 21747:2007 and ISO 3534-2:2013) and to derive statistically reliable values for the correction of microstructuring processes, but also the direct re-use of the evaluated parameters (including their statistical distribution) in simulations for calculating probabilities with respect to the functionality of the microstructured surface. The practical suitability of this method is demonstrated using examples of microstructures for the improvement of sliding and ink transfer for printing machines.

  11. Wavelet Based Analysis of Airborne Gravity Data For Interpretation of Geological Boundaries

    Science.gov (United States)

    Leblanc, George E.; Ferguson, Stephen

    Airborne gravimeters have only very recently been developed with the sensitivity necessary for useful exploration geophysics. In this study, an airborne gravimeter - an inertially-stabilized platform which converts accelerometer readings into gravity values - has been installed aboard the NRC's Convair 580 research aircraft and a survey performed over the Geological Survey of Canada's gravity test area. These data are used in a new wavelet transform methodology that quickly analyses and locates geological boundaries of various spatial extents within real aerogravity data. The raw aerogravity data were GPS-corrected and then noise-minimised - to reduce high-frequency random noise - with a separate wavelet transform denoising algorithm. The multi-resolution nature of the wavelet transform was then used to investigate the presence of boundaries at various scales. Examination of each wavelet detail scale shows that there is a coherent and localizable signal that conforms to geological boundaries over the entire range of scales. However, the boundaries are more apparent in the lower wavelet scales (corresponding to higher frequencies). The location of the local maximum values of the wavelet coefficients on each wavelet level provides a means to quickly determine and evaluate regional and/or local boundaries. The boundaries that are determined as a function of wavelet scale are well localized by the wavelet transform, which provides a method to locate, in ground coordinates, the edges of the boundary. In this study it is clear that wavelet methodologies are very well suited to being used effectively with aerogravity data due to the non-stationary nature of these data. Using these same methods on the horizontal and vertical derivatives of the data can provide visually clearer boundary definition; however, thus far there have not been any new boundaries identified in the derivative data. It is also possible to draw potential structural information, such as general

  12. Wavelet-Based Methodology for Evolutionary Spectra Estimation of Nonstationary Typhoon Processes

    Directory of Open Access Journals (Sweden)

    Guang-Dong Zhou

    2015-01-01

    Full Text Available Closed-form expressions are proposed to estimate the evolutionary power spectral density (EPSD) of nonstationary typhoon processes by employing the wavelet transform. Relying on the definition of the EPSD and the concept of the wavelet transform, wavelet coefficients of a nonstationary typhoon process at a certain time instant are interpreted as the Fourier transform of a new nonstationary oscillatory process, whose modulating function is equal to the modulating function of the nonstationary typhoon process multiplied by the wavelet function in the time domain. The EPSD of nonstationary typhoon processes is then deduced in a closed form and is formulated as a weighted sum of the squared moduli of time-dependent wavelet functions. The weighting coefficients are frequency-dependent functions defined by the wavelet coefficients of the nonstationary typhoon process and the overlapping area of two shifted wavelets. Compared with the EPSD defined in the literature by a sum of the squared moduli of the wavelets in the frequency domain, this paper provides an EPSD estimation method in the time domain. The theoretical results are verified with uniformly modulated and non-uniformly modulated nonstationary typhoon processes.
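
    For reference, the EPSD concept used here goes back to Priestley's evolutionary spectral representation; the standard form is reproduced below in generic notation, which may differ from the paper's.

```latex
% Priestley's evolutionary spectral representation (standard form):
\[
  X(t) = \int_{-\infty}^{\infty} A(t,\omega)\, e^{\mathrm{i}\omega t}\, \mathrm{d}Z(\omega),
  \qquad
  \mathrm{E}\bigl[\lvert \mathrm{d}Z(\omega)\rvert^{2}\bigr] = S(\omega)\,\mathrm{d}\omega,
\]
% so that the evolutionary power spectral density is
\[
  S_X(t,\omega) = \lvert A(t,\omega)\rvert^{2}\, S(\omega),
\]
% with A(t, omega) the time-dependent modulating function.
```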

  13. Multidimensional Wavelet-based Regularized Reconstruction for Parallel Acquisition in Neuroimaging

    CERN Document Server

    Chaari, Lotfi; Badillo, Solveig; Pesquet, Jean-Christophe; Ciuciu, Philippe

    2012-01-01

    Parallel MRI is a fast imaging technique that enables the acquisition of highly resolved images in space and/or in time. The performance of parallel imaging strongly depends on the reconstruction algorithm, which can proceed either in the original k-space (GRAPPA, SMASH) or in the image domain (SENSE-like methods). To improve the performance of the widely used SENSE algorithm, 2D or slice-specific regularization in the wavelet domain has been deeply investigated. In this paper, we extend this approach using 3D wavelet representations in order to handle all slices together and address reconstruction artifacts which propagate across adjacent slices. The gain induced by such an extension (3D-Unconstrained Wavelet Regularized-SENSE: 3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal acquisition is considered. Another important extension accounts for temporal correlations that exist between successive scans in functional MRI (fMRI). In addition to the case of 2D+t acquisition schemes ad...

  14. A Discrete Wavelet Based Feature Extraction and Hybrid Classification Technique for Microarray Data Analysis

    Directory of Open Access Journals (Sweden)

    Jaison Bennet

    2014-01-01

    Full Text Available In the past, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which has stimulated progress in cancer classification. In this paper, we have proposed a hybrid approach for microarray data classification based on the k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique which combines the discrete wavelet transform (DWT) and a moving window technique (MWT) is used. The performance of the proposed method is compared with that of the conventional classifiers (support vector machine, nearest neighbor, and naive Bayes) alone. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. This work serves as an automated system for the classification of cancer that can be applied by doctors in real cases, and it further reduces the misclassification of cancers, which is unacceptable in cancer detection.

  15. Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.

    Science.gov (United States)

    Garnavi, Rahil; Aldeen, Mohammad; Bailey, James

    2012-11-01

    This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.

  16. Atlas-based method for segmentation of cerebral vascular trees from phase-contrast magnetic resonance angiography

    Science.gov (United States)

    Passat, Nicolas; Ronse, Christian; Baruthio, Joseph; Armspach, Jean-Paul; Maillot, Claude; Jahn, Christine

    2004-05-01

    Phase-contrast magnetic resonance angiography (PC-MRA) can produce phase images, which are 3-dimensional pictures of vascular structures. However, it also provides magnitude images, containing anatomical - but no vascular - data. Classically, algorithms dedicated to PC-MRA segmentation detect the cerebral vascular tree by working only on phase images. We propose here a new approach for the segmentation of cerebral blood vessels in PC-MRA using both types of images. This approach is based on the hypothesis that a magnitude image contains anatomical information useful for vascular structure detection. That information can then be transposed from a normal case to any patient image by image registration. An atlas of the whole head has been developed in order to store such anatomical knowledge. It divides a magnitude image into several "vascular areas", each one having specific vessel properties. The atlas can be applied to any magnitude image of an entire or nearly entire head by deformable matching, thus helping to segment blood vessels from the associated phase image. The segmentation method used afterwards is composed of a topology-conserving region growing algorithm using adaptive threshold values depending on the current region of the atlas. This algorithm builds the arterial and venous trees by iteratively adding voxels which are selected according to their greyscale value and the variation of values in their neighborhood. Topology conservation is guaranteed by only selecting simple points during the growing process. The method has been performed on 15 PC-MRAs of the brain. The results have been validated using MIP and 3D surface rendering visualization; a comparison to other results obtained without an atlas proves that atlas-based methods are an effective way to optimize vascular segmentation strategies.
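
    A greatly simplified version of region growing with a region-dependent threshold is sketched below in plain NumPy; the atlas lookup is reduced to a label volume, and the paper's simple-point (topology-preserving) test and neighborhood-variation criterion are omitted. All names are illustrative.

```python
# Greyscale region growing with an atlas-dependent threshold (simplified).
from collections import deque
import numpy as np

def region_grow(vol, seeds, atlas_labels, thresholds):
    """vol: 3D intensity array; seeds: list of (z, y, x) voxels;
    atlas_labels: 3D int array of 'vascular areas';
    thresholds: mapping from atlas label to intensity threshold."""
    grown = np.zeros(vol.shape, dtype=bool)
    queue = deque(seeds)
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        v = queue.popleft()
        if grown[v]:
            continue
        # Threshold depends on the atlas "vascular area" at this voxel.
        if vol[v] < thresholds[atlas_labels[v]]:
            continue
        grown[v] = True
        for d in offsets:
            n = tuple(np.add(v, d))
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not grown[n]:
                queue.append(n)
    return grown
```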

  17. A robust wavelet-based multi-lead Electrocardiogram delineation algorithm.

    Science.gov (United States)

    Ghaffari, A; Homaeinezhad, M R; Akraminia, M; Atarod, M; Daevaeiha, M

    2009-12-01

    A robust multi-lead ECG wave detection-delineation algorithm is developed in this study on the basis of the discrete wavelet transform (DWT). By applying a new simple approach to a selected scale obtained from the DWT, this method is capable of detecting the QRS complex, P-wave and T-wave, as well as determining parameters such as start time, end time, and wave sign (upward or downward). First, a window of a specific length is slid sample by sample along the selected scale, and the curve length in each window is multiplied by the area under the absolute value of the curve. In the next step, a variable thresholding criterion is designed for the resulting signal. The presented algorithm was applied to various databases, including the MIT-BIH arrhythmia database, the European ST-T Database, the QT Database, the CinC Challenge 2008 Database, as well as high-resolution Holter data of DAY Hospital. As a result, average values of sensitivity and positive predictivity of Se = 99.84% and P+ = 99.80% were obtained for the detection of QRS complexes, with average maximum delineation errors of 13.7 ms, 11.3 ms and 14.0 ms for the P-wave, QRS complex and T-wave, respectively. The presented algorithm performs well in cases of low signal-to-noise ratio, high baseline wander, and abnormal morphologies. In particular, the high capability of the algorithm in detecting the critical points of the ECG signal, i.e. the beginning and end of the T-wave and the end of the QRS complex, was validated by cardiologists at DAY Hospital, with maximum absolute localization errors of 16.4 ms and 15.9 ms, respectively.
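
    The sliding-window measure described above (curve length multiplied by the area under the absolute value of the curve) is easy to express directly; the NumPy sketch below follows that description, with the window length as a placeholder.

```python
# Curve-length-times-area measure on a selected DWT scale.
import numpy as np

def wave_measure(scale_signal, win=32):
    x = np.asarray(scale_signal, dtype=float)
    seg_len = np.sqrt(1.0 + np.diff(x) ** 2)       # per-sample curve length
    area = np.abs(x)                               # per-sample |curve| area
    kernel = np.ones(win)
    curve_len = np.convolve(seg_len, kernel, mode="same")
    abs_area = np.convolve(area, kernel, mode="same")[: len(curve_len)]
    return curve_len * abs_area                    # large near wave events
```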

  18. Wavelet based automated postural event detection and activity classification with single IMU - Biomed 2013.

    Science.gov (United States)

    Lockhart, Thurmon E; Soangra, Rahul; Zhang, Jian; Wu, Xuefan

    2013-01-01

    ...and classification algorithm using denoised signals from a single wireless IMU placed at the sternum. The algorithm was further validated and verified against a motion capture system in a laboratory environment. Wavelet denoising highlighted postural events and transition durations that further provided clinical information on postural control and motor coordination. The presented method can be applied in real-life ambulatory monitoring approaches for assessing the condition of the elderly.
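
    A generic wavelet-denoising step of the kind mentioned above can be sketched with PyWavelets; the wavelet, decomposition level, and universal soft threshold are common defaults assumed here, not the study's exact settings.

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            """Denoise a 1-D IMU signal by soft-thresholding detail coefficients."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))    # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]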

  19. Performance of five research-domain automated WM lesion segmentation methods in a multi-center MS study.

    Science.gov (United States)

    de Sitter, Alexandra; Steenwijk, Martijn D; Ruet, Aurélie; Versteeg, Adriaan; Liu, Yaou; van Schijndel, Ronald A; Pouwels, Petra J W; Kilsdonk, Iris D; Cover, Keith S; van Dijk, Bob W; Ropele, Stefan; Rocca, Maria A; Yiannakas, Marios; Wattjes, Mike P; Damangir, Soheil; Frisoni, Giovanni B; Sastre-Garriga, Jaume; Rovira, Alex; Enzinger, Christian; Filippi, Massimo; Frederiksen, Jette; Ciccarelli, Olga; Kappos, Ludwig; Barkhof, Frederik; Vrenken, Hugo

    2017-12-01

    In vivo identification of white matter lesions plays a key role in the evaluation of patients with multiple sclerosis (MS). Automated lesion segmentation methods have been developed to substitute manual outlining, but evidence of their performance in multi-center investigations is lacking. In this work, five research-domain automated segmentation methods were evaluated using a multi-center MS dataset. 70 MS patients (median EDSS of 2.0 [range 0.0-6.5]) were included from a six-center dataset of the MAGNIMS Study Group (www.magnims.eu), which included 2D FLAIR and 3D T1 images with manual lesion segmentation as a reference. Automated lesion segmentations were produced using five algorithms: Cascade; the Lesion Segmentation Toolbox (LST) with both the Lesion growth algorithm (LGA) and the Lesion prediction algorithm (LPA); Lesion-Topology preserving Anatomical Segmentation (Lesion-TOADS); and k-Nearest Neighbor with Tissue Type Priors (kNN-TTP). Main software parameters were optimized using a training set (N = 18), and formal testing was performed on the remaining patients (N = 52). To evaluate volumetric agreement with the reference segmentations, the intraclass correlation coefficient (ICC) as well as the mean difference in lesion volumes between the automated and reference segmentations were calculated. The Similarity Index (SI), False Positive (FP) volumes and False Negative (FN) volumes were used to examine spatial agreement. All analyses were repeated using a leave-one-center-out design, excluding the center of interest from the training phase, to evaluate the performance of each method on an 'unseen' center. Compared to the reference mean lesion volume (4.85 ± 7.29 mL), the methods displayed a mean difference of 1.60 ± 4.83 (Cascade), 2.31 ± 7.66 (LGA), 0.44 ± 4.68 (LPA), 1.76 ± 4.17 (Lesion-TOADS) and -1.39 ± 4.10 mL (kNN-TTP). The ICCs were 0.755, 0.713, 0.851, 0.806 and 0.723, respectively. Spatial agreement with reference segmentations was higher for LPA
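
    The agreement metrics named in this record can be computed directly from binary masks; a hedged numpy sketch follows, with voxel_ml the voxel volume in milliliters.

        import numpy as np

        def lesion_agreement(ref, auto, voxel_ml):
            """ref, auto: boolean lesion masks; returns SI, FP/FN volumes, bias."""
            tp = np.logical_and(ref, auto).sum()
            si = 2.0 * tp / (ref.sum() + auto.sum())              # Similarity Index (Dice)
            fp_vol = np.logical_and(auto, ~ref).sum() * voxel_ml  # False Positive volume
            fn_vol = np.logical_and(ref, ~auto).sum() * voxel_ml  # False Negative volume
            vol_diff = (auto.sum() - ref.sum()) * voxel_ml        # volumetric difference
            return si, fp_vol, fn_vol, vol_diff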

  20. An Automated Image Analysis Method for Segmenting Fluorescent Bacteria in Three Dimensions.

    Science.gov (United States)

    Reyer, Matthew A; McLean, Eric L; Chennakesavalu, Shriram; Fei, Jingyi

    2018-01-16

    Single-cell fluorescence imaging is a powerful technique for studying inherently heterogeneous biological processes. To correlate a genotype or phenotype to a specific cell, images containing a population of cells must first be properly segmented. However, a proper segmentation with minimal user input becomes challenging when cells are clustered or overlapping in three dimensions. We introduce a new analysis package, Seg-3D, for the segmentation of bacterial cells in three-dimensional (3D) images, based on local thresholding, shape analysis, concavity-based cluster splitting, and morphology-based 3D reconstruction. The reconstructed cell volumes allow us to directly quantify the fluorescent signals from biomolecules of interest within individual cells. We demonstrate the application of this analysis package in 3D segmentation of individual bacterial pathogens invading host cells. We believe Seg-3D can be an efficient and simple program that can be used to analyze a wide variety of single-cell images, especially for biological systems involving random 3D orientation and clustering behavior, such as bacterial infection or colonization.
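
    The first two stages of such a pipeline (local thresholding and 3D connected-component labeling) can be approximated with scipy; this is a simplified sketch, and Seg-3D's concavity-based cluster splitting and morphology-based reconstruction steps are not reproduced.

        import numpy as np
        from scipy import ndimage as ndi

        def segment_cells_3d(stack, block=15, offset=0.0):
            """stack: 3-D fluorescence volume; returns labels and per-cell volumes."""
            local_mean = ndi.uniform_filter(stack.astype(float), size=block)
            binary = stack > (local_mean + offset)     # local (adaptive) threshold
            labels, n_cells = ndi.label(binary)        # 3-D connected components
            volumes = ndi.sum(binary, labels, index=np.arange(1, n_cells + 1))
            return labels, volumes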

  1. CUDA Accelerated Multi-domain Volumetric Image Segmentation Using a Higher Order Level Set Method

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François; Zhang, Qin

    2009-01-01

    ...demanding in terms of computation and memory space, we employ CUDA-based fast GPU segmentation and provide accuracy measures compared with an equivalent CPU implementation. Our resulting surfaces are C2-smooth, resulting from a tri-cubic spline interpolation algorithm. We also provide error bounds...

  2. Method of image segmentation using a neural network. Application to MR imaging of brain tumors. Une methode de segmentation par un reseau neuro-mimetique. Application a l'IRM de tumeurs cerebrales

    Energy Technology Data Exchange (ETDEWEB)

    Engler, E.; Gautherie, M. (Strasbourg-1 Univ., 67 (France))

    1992-01-01

    An original method for the segmentation of digital images has been developed. This method is based on pixel clustering using a formal neural network configured by supervised learning from pre-classified examples. The method has been applied to series of MR images of brain tumors (gliomas) with a view to 3D extraction of the tumor volume. This study is part of a project on cancer thermotherapy including the development of a scan-focused ultrasound system for tumor heating and a 3D numerical thermal model.

  3. Clinical feasibility of a myocardial signal intensity threshold-based semi-automated cardiac magnetic resonance segmentation method

    Energy Technology Data Exchange (ETDEWEB)

    Varga-Szemes, Akos; Schoepf, U.J.; Suranyi, Pal; De Cecco, Carlo N.; Fox, Mary A. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ' ' Sapienza' ' , Department of Medical-Surgical Sciences and Translational Medicine, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Renker, Matthias [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Kerckhoff Heart and Thorax Center, Bad Nauheim (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Ruzsics, Balazs [Royal Liverpool and Broadgreen University Hospitals, Department of Cardiology, Liverpool (United Kingdom)

    2016-05-15

    To assess the accuracy and efficiency of a threshold-based, semi-automated cardiac MRI segmentation algorithm in comparison with conventional contour-based segmentation and aortic flow measurements. Short-axis cine images of 148 patients (55 ± 18 years, 81 men) were used to evaluate left ventricular (LV) volumes and mass (LVM) using conventional and threshold-based segmentations. Phase-contrast images were used to independently measure stroke volume (SV). LV parameters were evaluated by two independent readers. Evaluation times using the conventional and threshold-based methods were 8.4 ± 1.9 and 4.2 ± 1.3 min, respectively (P < 0.0001). LV parameters measured by the conventional and threshold-based methods, respectively, were end-diastolic volume (EDV) 146 ± 59 and 134 ± 53 ml; end-systolic volume (ESV) 64 ± 47 and 59 ± 46 ml; SV 82 ± 29 and 74 ± 28 ml (flow-based 74 ± 30 ml); ejection fraction (EF) 59 ± 16 and 58 ± 17 %; and LVM 141 ± 55 and 159 ± 58 g. Significant differences between the conventional and threshold-based methods were observed in EDV, ESV, and LVM measurements; SV from threshold-based and flow-based measurements were in agreement (P > 0.05) but were significantly different from conventional analysis (P < 0.05). Excellent inter-observer agreement was observed. Threshold-based LV segmentation provides improved accuracy and faster assessment compared to conventional contour-based methods. (orig.)

  4. A sparsity-based simplification method for segmentation of spectral-domain optical coherence tomography images

    Science.gov (United States)

    Meiniel, William; Gan, Yu; Olivo-Marin, Jean-Christophe; Angelini, Elsa

    2017-08-01

    Optical coherence tomography (OCT) has emerged as a promising imaging modality for characterizing biological tissues. With axio-lateral resolutions at the micron level, OCT images provide detailed morphological information and enable applications such as optical biopsy and virtual histology for clinical needs. Image enhancement is typically required for morphological segmentation, to improve boundary localization rather than enrich detailed tissue information. We propose to formulate image enhancement as an image simplification task such that tissue layers are smoothed while contours are enhanced. For this purpose, we exploit a Total Variation sparsity-based image reconstruction, inspired by Compressed Sensing (CS) theory but specialized for images with structures arranged in layers. We demonstrate the potential of our approach on OCT human heart and retinal images for layer segmentation. We also compare our image enhancement capabilities to state-of-the-art denoising techniques.
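
    A generic Total Variation smoothing of an OCT B-scan illustrates the simplification idea; scikit-image's Chambolle TV denoiser is used here as a stand-in for the paper's CS-inspired, layer-specialized reconstruction.

        from skimage.restoration import denoise_tv_chambolle

        def simplify_oct(bscan, weight=0.12):
            # Larger weights flatten intra-layer speckle while preserving
            # the strong intensity edges at layer boundaries.
            return denoise_tv_chambolle(bscan, weight=weight)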

  5. Method of joining a vane cavity insert to a nozzle segment of a gas turbine

    Science.gov (United States)

    Burdgick, Steven Sebastian

    2002-01-01

    An insert containing apertures for impingement cooling a nozzle vane of a nozzle segment in a gas turbine is inserted into one end of the vane. The leading end of the insert is positioned slightly past a rib adjacent the opposite end of the vane through which the insert is inserted. The end of the insert is formed or swaged into conformance with the inner margin of the rib. The insert is then brazed or welded to the rib.

  6. Diffuse Interface Methods for Multiclass Segmentation of High-Dimensional Data

    Science.gov (United States)

    2014-03-04

    normalized sum (82.66%), naive Bayes (83.52%) and SVM (linear kernel) (85.82%). For these methods, two thirds of the data was used for training, and one... contains a nonlinear curvature term. The diffuse interface description has been used successfully in image inpainting [5,6] and image segmentation [7... an undirected graph with vertices V and edges E. For each dataset, the vertices represent its building blocks; for example, in the case of an image

  7. An Improved Method for Detecting Auxin-induced Hydrogen Ion Efflux from Corn Coleoptile Segments.

    Science.gov (United States)

    Evans, M L; Vesper, M J

    1980-10-01

    Conditions necessary to detect maximal auxin-induced H(+) secretion using a macroelectrode have been investigated using corn coleoptile segments. Auxin-induced H(+) secretion is strongly dependent upon oxygenation or aeration when the tissue to volume ratio is high. Cuticle disruption or removal is also necessary to detect substantial auxin-induced H(+) secretion. The auxin-induced decrease in pH of the external medium is stronger when the hormone is applied to tissue in which the cuticle has been disrupted with an abrasive than when the hormone is applied to tissue from which the cuticle and epidermis have been removed by peeling. The lower detectable acidification of the external medium when using peeled segments appears to be due in part to the leakage of buffers into the medium and in part to the removal of the auxin-sensitive epidermal cells.The sensitivity of corn coleoptile segments to auxin, as measured by H(+) secretion, increases about 2-fold during the first 2 hours after excision. This change in apparent sensitivity to auxin as reflected by H(+) secretion is paralleled by a time-dependent change in the growth response to auxin. Under optimal conditions for detecting H(+) efflux (oxygenation, abrasion, hormone application 2 hours after excision), the latent period in auxin-induced H(+) efflux (about 7 or 8 minutes) is only half as great as the latent period in auxin-induced growth (about 18 to 20 minutes). These observations are consistent with the acid growth hypothesis of auxin action.

  8. Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd.

    Science.gov (United States)

    Irshad, H; Montaser-Kouhsari, L; Waltz, G; Bucur, O; Nowak, J A; Dong, F; Knoblauch, N W; Beck, A H

    2015-01-01

    The development of tools in computational pathology to assist physicians and biomedical scientists in the diagnosis of disease requires access to high-quality annotated images for algorithm learning and evaluation. Generating high-quality expert-derived annotations is time-consuming and expensive. We explore the use of crowdsourcing for rapidly obtaining annotations for two core tasks in computational pathology: nucleus detection and nucleus segmentation. We designed and implemented crowdsourcing experiments using the CrowdFlower platform, which provides access to a large set of labor channel partners that accesses and manages millions of contributors worldwide. We obtained annotations from four types of annotators and compared concordance across these groups. We obtained: crowdsourced annotations for nucleus detection and segmentation on a total of 810 images; annotations using automated methods on 810 images; annotations from research fellows for detection and segmentation on 477 and 455 images, respectively; and expert pathologist-derived annotations for detection and segmentation on 80 and 63 images, respectively. For the crowdsourced annotations, we evaluated performance across a range of contributor skill levels (1, 2, or 3). The crowdsourced annotations (4,860 images in total) were completed in only a fraction of the time and cost required for obtaining annotations using traditional methods. For the nucleus detection task, the research fellow-derived annotations showed the strongest concordance with the expert pathologist-derived annotations (F-M = 93.68%), followed by the crowdsourced contributor levels 1, 2, and 3 and the automated method, which showed relatively similar performance (F-M = 87.84%, 88.49%, 87.26%, and 86.99%, respectively). For the nucleus segmentation task, the crowdsourced contributor level 3-derived annotations, research fellow-derived annotations, and automated method showed the strongest concordance with the expert pathologist

  9. LV wall segmentation using the variational level set method (LSM) with additional shape constraint for oedema quantification

    Science.gov (United States)

    Kadir, K.; Gao, H.; Payne, A.; Soraghan, J.; Berry, C.

    2012-10-01

    In this paper an automatic algorithm for left ventricle (LV) wall segmentation and oedema quantification from T2-weighted cardiac magnetic resonance (CMR) images is presented. The extent of myocardial oedema delineates the ischaemic area-at-risk (AAR) after myocardial infarction (MI). Since the AAR can be used to estimate the amount of salvageable myocardium post-MI, oedema imaging has potential clinical utility in the management of acute MI patients. This paper presents a new scheme based on the variational level set method (LSM) with an additional shape constraint for the segmentation of T2-weighted CMR images. In our approach, shape information of the myocardial wall is utilized to introduce a shape feature of the myocardial wall into the variational level set formulation. The performance of the method is tested using real CMR images (12 patients) and the results of the automatic system are compared to manual segmentation. The mean perpendicular distances between the automatic and manual LV wall boundaries are in the range of 1-2 mm. Bland-Altman analysis on LV wall area indicates there is no consistent bias as a function of LV wall area, with a mean bias of -121 mm2 between individual investigator one (IV1) and the LSM, and -122 mm2 between individual investigator two (IV2) and the LSM. Furthermore, the oedema quantification demonstrates good correlation when compared to an expert, with an average error of 9.3% for 69 short-axis CMR slices from 12 patients.
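
    For orientation, an off-the-shelf variational segmentation of a short-axis slice can be run with scikit-image's morphological Chan-Vese level set; note that the paper's additional myocardial shape constraint is not part of this generic routine.

        from skimage.segmentation import morphological_chan_vese

        def segment_lv(t2_slice, n_iter=100):
            # The second argument is the iteration count, passed positionally to
            # stay compatible across scikit-image versions.
            return morphological_chan_vese(t2_slice, n_iter, smoothing=2)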

  10. Comparison of two different segmentation methods on planar lung perfusion scan with reference to quantitative value on SPECT/CT

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Min Seok; Kang, Yeon Koo; Ha, Seung Gyun [Dept. of Nuclear Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); and others

    2017-06-15

    Until now, there has been no single standardized regional segmentation method for planar lung perfusion scans. We compared two planar-scan-based segmentation methods, both frequently used in the nuclear medicine community, with reference to values derived from lung perfusion single photon emission computed tomography (SPECT)/computed tomography (CT) in lung cancer patients. Fifty-five lung cancer patients (male:female, 37:18; age, 67.8 ± 10.7 years) were evaluated. The patients underwent planar scanning and SPECT/CT after injection of technetium-99m macroaggregated albumin (Tc-99m-MAA). The % uptake and predicted postoperative percentage forced expiratory volume in 1 s (ppoFEV1%) derived from both the posterior oblique (PO) and anterior posterior (AP) methods were compared with SPECT/CT-derived parameters. Concordance analysis, paired comparison, reproducibility analysis and Spearman correlation analysis were conducted. The % uptake derived from the PO method showed higher concordance with the SPECT/CT-derived % uptake in every lobe compared to the AP method. Both methods showed significantly different lobar distributions of % uptake compared to SPECT/CT. For the target region, ppoFEV1% measured from the PO method showed higher concordance with SPECT/CT, but lower reproducibility, compared to the AP method. Preliminary data revealed that every method significantly correlated with actual postoperative FEV1%, with SPECT/CT showing the best correlation. The PO-method-derived values showed better concordance with SPECT/CT than the AP method. Both PO and AP methods showed significantly different lobar distributions compared to SPECT/CT. In clinical practice, such differences between methods and lobes should be considered for more accurate postoperative lung function prediction.

  11. An Improved Method for Detecting Auxin-induced Hydrogen Ion Efflux from Corn Coleoptile Segments 1

    Science.gov (United States)

    Evans, Michael L.; Vesper, Mary Jo

    1980-01-01

    Conditions necessary to detect maximal auxin-induced H+ secretion using a macroelectrode have been investigated using corn coleoptile segments. Auxin-induced H+ secretion is strongly dependent upon oxygenation or aeration when the tissue to volume ratio is high. Cuticle disruption or removal is also necessary to detect substantial auxin-induced H+ secretion. The auxin-induced decrease in pH of the external medium is stronger when the hormone is applied to tissue in which the cuticle has been disrupted with an abrasive than when the hormone is applied to tissue from which the cuticle and epidermis have been removed by peeling. The lower detectable acidification of the external medium when using peeled segments appears to be due in part to the leakage of buffers into the medium and in part to the removal of the auxin-sensitive epidermal cells. The sensitivity of corn coleoptile segments to auxin, as measured by H+ secretion, increases about 2-fold during the first 2 hours after excision. This change in apparent sensitivity to auxin as reflected by H+ secretion is paralleled by a time-dependent change in the growth response to auxin. Under optimal conditions for detecting H+ efflux (oxygenation, abrasion, hormone application 2 hours after excision), the latent period in auxin-induced H+ efflux (about 7 or 8 minutes) is only half as great as the latent period in auxin-induced growth (about 18 to 20 minutes). These observations are consistent with the acid growth hypothesis of auxin action. PMID:16661477

  12. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    Science.gov (United States)

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lungs, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n = 38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The proposed segmentation method with sparse shape composition achieved a mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, a mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and a mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that the proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in compromised lung segmentation. Published by Elsevier Ltd.

  13. Cerebral blood volume analysis in glioblastomas using dynamic susceptibility contrast-enhanced perfusion MRI: a comparison of manual and semiautomatic segmentation methods.

    Directory of Open Access Journals (Sweden)

    Seung Chai Jung

    Full Text Available PURPOSE: To compare the reproducibility of manual and semiautomatic segmentation methods for the measurement of normalized cerebral blood volume (nCBV) using dynamic susceptibility contrast-enhanced (DSC) perfusion MR imaging in glioblastomas. MATERIALS AND METHODS: Twenty-two patients (11 male, 11 female; 27 tumors) with histologically confirmed glioblastoma (WHO grade IV) were examined with conventional MR imaging and DSC imaging at 3T before surgery or biopsy. The nCBV (mean and standard deviation) in each mass was then measured using two DSC MR perfusion analysis methods, manual and semiautomatic segmentation, in which contrast-enhanced T1-weighted images (CE-T1WI) and T2WI were used as structural imaging. Intraobserver and interobserver reproducibility were assessed for each perfusion analysis method and each structural imaging. The intraclass correlation coefficient (ICC), Bland-Altman plots, and the coefficient of variation (CV) were used to evaluate reproducibility. RESULTS: Intraobserver reproducibility on CE-T1WI and T2WI was ICC of 0.74-0.89 and CV of 20.39-36.83% for the manual segmentation method, and ICC of 0.95-0.99 and CV of 8.53-16.19% for the semiautomatic segmentation method, respectively. Interobserver reproducibility on CE-T1WI and T2WI was ICC of 0.86-0.94 and CV of 19.67-35.15% for the manual segmentation method, and ICC of 0.74-1.0 and CV of 5.48-49.38% for the semiautomatic segmentation method, respectively. Bland-Altman plots showed good agreement with the ICC and CV for each method. The semiautomatic segmentation method based on CE-T1WI showed higher intraobserver and interobserver reproducibility than the other methods. CONCLUSION: The best reproducibility was found using the semiautomatic segmentation method based on CE-T1WI as the structural imaging in the measurement of the nCBV of glioblastomas.

  14. University and student segmentation: multilevel latent-class analysis of students' attitudes towards research methods and statistics.

    Science.gov (United States)

    Mutz, Rüdiger; Daniel, Hans-Dieter

    2013-06-01

    It is often claimed that psychology students' attitudes towards research methods and statistics affect course enrollment, persistence, achievement, and course climate. However, inter-institutional variability has been widely neglected in research on students' attitudes towards research methods and statistics, although it is important for didactic purposes (heterogeneity of the student population). The paper presents a scale based on findings of the social psychology of attitudes (a polar and emotion-based concept) in conjunction with a method for capturing beginning university students' attitudes towards research methods and statistics and identifying the proportion of students having positive attitudes at the institutional level. The study is based on a re-analysis of a nationwide survey, conducted in Germany in August 2000, of all psychology students who enrolled in fall 1999/2000 (N = 1,490 students at N = 44 universities). Using multilevel latent-class analysis (MLLCA), the aim was to group students into different attitude types and at the same time to obtain university segments based on the incidence of the different student attitude types. Four latent student clusters were found that can be ranked on a bipolar attitude dimension. Membership in a cluster was predicted by age, grade point average (GPA) on the school-leaving exam, and personality traits. In addition, two university segments were found: universities with an average proportion of students with positive attitudes and universities with a high proportion of students with positive attitudes (excellent segment). As psychology students make up a very heterogeneous group, the use of multiple learning activities, as opposed to the classical lecture course, is required. © 2011 The British Psychological Society.

  15. A novel method for measuring anterior segment area of the eye on ultrasound biomicroscopic images using photoshop.

    Directory of Open Access Journals (Sweden)

    Zhonghao Wang

    Full Text Available To describe a novel method for quantitative measurement of area parameters in ocular anterior segment ultrasound biomicroscopy (UBM) images using Photoshop software and to assess its intraobserver and interobserver reproducibility. Twenty healthy volunteers with wide angles and twenty patients with narrow or closed angles were consecutively recruited. UBM images were obtained and analyzed using Photoshop software by two physicians with different levels of training on two occasions. The borders of anterior segment structures, including the cornea, iris, lens, and zonules, in the UBM image were semi-automatically defined by the Magnetic Lasso Tool in Photoshop according to pixel contrast and modified by the observers. The anterior chamber area (ACA), posterior chamber area (PCA), iris cross-section area (ICA) and angle recess area (ARA) were drawn and measured. The intraobserver and interobserver reproducibility of the anterior segment area parameters and of the scleral spur location were assessed by limits of agreement, the coefficient of variation (CV), and the intraclass correlation coefficient (ICC). All of the parameters were successfully measured with Photoshop. The intraobserver and interobserver reproducibility of ACA, PCA, and ICA was good, with CVs of no more than 5% and ICCs above 0.95, while the CVs of ARA were within 20%. The intraobserver and interobserver ICCs for defining the spur location were above 0.97. Although the operating times for both observers were less than 3 minutes per image, there was a significant difference in measuring time between the two observers with different levels of training (p<0.001). Measurements of ocular anterior segment areas on UBM images with Photoshop showed good intraobserver and interobserver reproducibility. The methodology was easy to adopt and effective for measurement.

  16. New methods to cope with temperature elevations in heated segments of flat plates cooled by boundary layer flow

    Directory of Open Access Journals (Sweden)

    Hajmohammadi Mohammad R.

    2016-01-01

    Full Text Available This paper documents two reliable methods to cope with rising temperatures in an array of heated segments with a known overall heat load and exposed to forced convective boundary layer flow. Minimization of the hot spots (peak temperatures) in the array of heated segments constitutes the primary goal that sets the platform for developing the methods. The two proposed methods consist of: 1) designing an array of unequal heaters so that each heater has a different size and generates heat at a different rate, and 2) distancing the unequal heaters from each other using an insulated spacing. Multi-scale design based on constructal theory is applied to estimate the optimal insulated spacing, heater sizes and heat generation rates such that the minimum hot spot temperature is achieved subject to a space constraint and a fixed overall heat load. It is demonstrated that the two methods can considerably reduce the hot spot temperatures; consequently, both can be utilized with confidence in industry to achieve optimized heat transfer.

  17. Diffusion-weighted magnetic resonance imaging during radiotherapy of locally advanced cervical cancer--treatment response assessment using different segmentation methods.

    Science.gov (United States)

    Haack, Søren; Tanderup, Kari; Kallehauge, Jesper Folsted; Mohamed, Sandy Mohamed Ismail; Lindegaard, Jacob Christian; Pedersen, Erik Morre; Jespersen, Sune Nørhøj

    2015-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) and the derived apparent diffusion coefficient (ADC) value have potential for monitoring tumor response to radiotherapy (RT). The method used to segment volumes with reduced diffusion influences both the volume size and the observed distribution of ADC values. This study evaluates: 1) different segmentation methods; and 2) how they affect the assessment of tumor ADC values during RT. Eleven patients with locally advanced cervical cancer underwent MRI three times during their RT: prior to the start of RT (PRERT), two weeks into external beam RT (WK2RT) and one week prior to brachytherapy (PREBT). Volumes on DW-MRI were segmented using three semi-automatic segmentation methods: "cluster analysis", "relative signal intensity (SD4)" and "region growing". Segmented volumes were compared to the gross tumor volume (GTV) identified on T2-weighted MR images using the Jaccard similarity index (JSI). ADC values from the segmented volumes were compared, and changes of ADC values during therapy were evaluated. A significant difference between the four volumes (GTV, DWIcluster, DWISD4 and DWIregion) was found (p < 0.01), and the volumes changed significantly during treatment (p < 0.01). There was a significant difference in JSI among segmentation methods at the time of PRERT (p < 0.016), with region growing having the lowest JSI against the GTV (mean ± SD: 0.35 ± 0.1), followed by the SD4 method (0.50 ± 0.1) and clustering (0.52 ± 0.3). There was no significant difference in mean ADC value compared at the same treatment time. The mean tumor ADC value increased significantly (p < 0.01) for all methods across treatment time. Among the three semi-automatic segmentations of hyper-intense regions on DW-MR images, cluster analysis and relative signal thresholding had the greatest similarity to the clinical tumor volume. Evaluation of the mean ADC value did not depend on the segmentation method.
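
    The "relative signal intensity (SD4)" idea can be sketched as a mean-plus-four-standard-deviations threshold over a reference region; the reference mask and the factor of four follow the method's name, while the rest is an assumption.

        import numpy as np

        def sd4_volume(dwi, reference_mask):
            """Keep voxels brighter than mean + 4*SD of a reference region."""
            mu = dwi[reference_mask].mean()
            sd = dwi[reference_mask].std()
            return dwi > (mu + 4.0 * sd)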

  18. Supraciliary contraction segments: a new method for the treatment of presbyopia.

    Science.gov (United States)

    Tunc, Zeki; Helvacioglu, Firat; Ercalik, Yesim; Baikoff, George; Sencan, Sadik

    2014-02-01

    To evaluate the safety and effectiveness of supraciliary contraction segment implants (SCSIs) for the treatment of presbyopia. This prospective, non-comparative study comprised 10 eyes from five phakic and emmetropic 50-year-old subjects. Preoperative and postoperative near and distance visual acuity, topography, axial length, pachymetry, and intraocular pressure were analyzed. A 5.32-mm long, 0.85-mm thick piece of polymethyl methacrylate (PMMA) and a 5.32-mm long, 0.55-mm thick dried hydrophilic SCSI were placed within scleral tunnels created 2 mm away from the limbus. The 500-550 µm deep tunnels were parallel to the limbus, and four segments were implanted per eye. The SCSIs were placed entirely within the sclera, at a depth of approximately 85%. The uncorrected distance visual acuity was similar before and after surgery (0.00 logMAR). The monocular mean uncorrected near visual acuity (UNVA) was 0.5 ± 0.0 logMAR before surgery, 0.12 ± 0.10 logMAR at 1 month after surgery, 0.16 ± 0.18 logMAR at 3 months after surgery, and 0.29 ± 0.16 logMAR at the 18-month follow-up. Despite satisfactory results at 6 months after surgery, the 18-month follow-up of the SCSI intervention revealed a regression of the early postoperative UNVA improvement, caused by progressive outward movement of the SCSIs.

  19. A new method for sperm characterization for infertility treatment: hypothesis testing by using combination of watershed segmentation and graph theory.

    Science.gov (United States)

    Shojaedini, Seyed Vahab; Heydari, Masoud

    2014-10-01

    The shape and movement features of sperms are important parameters for infertility study and treatment. In this article, a new method is introduced for characterizing sperms in microscopic videos. In this method, a hypothesis framework is first defined to distinguish sperms from other particles in the captured video. A decision about each hypothesis is then made in the following steps: selecting primary regions as sperm candidates by watershed-based segmentation, pruning false candidates across successive frames using graph theory concepts, and finally confirming correct sperms by their movement trajectories. The performance of the proposed method is evaluated on real captured images of semen with a high density of sperms. The obtained results show the proposed method can detect 97% of sperms in the presence of 5% false detections and track 91% of moving sperms. Furthermore, it can be shown that the better characterization of sperms in the proposed algorithm does not lead to extracting more false sperms compared to some existing approaches.
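
    The candidate-selection step can be illustrated with the standard distance-transform watershed recipe from scikit-image; the graph-based pruning across frames and the trajectory confirmation stage are not shown.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def sperm_candidates(binary_frame):
            """binary_frame: boolean foreground mask of one video frame."""
            distance = ndi.distance_transform_edt(binary_frame)
            coords = peak_local_max(distance, footprint=np.ones((3, 3)),
                                    labels=binary_frame)
            markers = np.zeros(distance.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            return watershed(-distance, markers, mask=binary_frame)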

  20. Detection of microcalcification clusters using Hessian matrix and foveal segmentation method on multiscale analysis in digital mammograms.

    Science.gov (United States)

    Thangaraju, Balakumaran; Vennila, Ila; Chinnasamy, Gowrishankar

    2012-10-01

    Mammography is the most efficient technique for detecting and diagnosing breast cancer. Clusters of microcalcifications have been targeted as a reliable early sign of breast cancer, and their earliest possible detection is essential to reduce the mortality rate. Since microcalcifications are very small and may be overlooked by the observing radiologist, we have developed a Computer Aided Diagnosis system for automatic and accurate cluster detection. A three-phased novel approach is presented in this paper. First, regions of interest that correspond to microcalcifications are identified by analyzing the bandpass coefficients of the mammogram image. The suspicious regions are passed to the second phase, in which nodular structured microcalcifications are detected based on the eigenvalues of the second-order partial derivatives of the image, and microcalcification pixels are segmented out by exploiting foveal segmentation in a multiscale analysis. Finally, by combining the responses coming from the second-order partial derivatives and the foveal method, potential microcalcifications are detected. The detection performance of the proposed method has been evaluated using 370 mammograms. The detection method has a TP ratio of 97.76% with 0.68 false positives per image. We examined the performance of our computerized scheme using the free-response operating characteristic curve.
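
    The second phase can be illustrated with Hessian eigenvalues from scikit-image: bright nodular structures have two strongly negative eigenvalues. The scale and the response used here are illustrative placeholders, and the foveal segmentation stage is omitted.

        import numpy as np
        from skimage.feature import hessian_matrix, hessian_matrix_eigvals

        def nodular_response(image, sigma=2.0):
            H = hessian_matrix(image, sigma=sigma, order="rc")
            l1, l2 = hessian_matrix_eigvals(H)      # l1 >= l2 at every pixel
            # Bright blob-like pixels: both eigenvalues negative; use their
            # product as a simple nodularity score.
            return np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)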

  1. Estimation of ion-site association constants in ion-selective electrode membranes by modified segmented sandwich membrane method

    Energy Technology Data Exchange (ETDEWEB)

    Peshkova, Maria A.; Korobeynikov, Anton I.; Mikhelson, Konstantin N. [St. Petersburg State University, St. Petersburg (Russian Federation)

    2008-08-01

    A method aimed at the potentiometric estimation of the association of ions with ion-exchanger sites and charged ionophores in ion-selective electrode membranes is proposed. The method relies on measurements of segmented sandwich membrane potentials. It is shown theoretically that the quantification of ion association requires the use of a weakly associated ionic additive whose concentration in the working segment of the sandwich must be varied. This is in contrast with the well-established technique for measuring ion complexation with neutral ionophores. The advantages and limitations of the novel method are critically evaluated. The association of ions in plasticized poly(vinylchloride) membranes is studied experimentally. Experimental results are provided on the association of K{sup +}, Na{sup +}, Cs{sup +}, NH{sub 4}{sup +}, and also Ca{sup 2+} with commonly used sites: the tetra(p-Cl-phenyl)borate anion and the calcium-selective lipophilic ion-exchanger bis[4-(1,1,3,3-tetramethylbutyl)phenyl]phosphate. (author)

  2. Method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images' segmentation and mosaicking.

    Science.gov (United States)

    Su, Jian; Yuan, Quan; Huang, Yingqing; Jiang, Xiaoyu; Yan, Xingpeng

    2017-09-18

    Based on the principle of ray-tracing and the reversibility of light propagation, a new method of single-step full parallax synthetic holographic stereogram printing based on effective perspective images' segmentation and mosaicking (EPISM) is proposed. The perspective images of the scene are first sampled by a virtual camera, and the exposing images, called synthetic effective perspective images, are obtained using the algorithm of effective perspective images' segmentation and mosaicking, according to the propagation law of light and the viewing frustum effect of human eyes. The hogels are exposed using the synthetic effective perspective images in sequence to form the whole holographic stereogram. The influence of modeling parameters on the reconstructed images is also analyzed, and experimental results demonstrate that full parallax holographic stereogram printing with the proposed method can provide good reconstructed images with single-step printing. Moreover, detailed experiments with different holographic element sizes, different scene reconstruction distances, and different imaging planes are also analyzed and implemented.

  3. A multi-segment foot model based on anatomically registered technical coordinate systems: method repeatability in pediatric feet.

    Science.gov (United States)

    Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B

    2012-04-01

    Several multi-segment foot models to measure the motion of the intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to a lack of rigorous validation, including inter-clinician and inter-lab variability measures. A model with thoroughly quantified variability may significantly improve confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is performed during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model is comprised of the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: n = 10 subjects by two clinicians; Site 2: n = 5 subjects by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparisons of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability was less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability was less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. The results indicate the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple-organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning a label to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs), which consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple-organ segmentation in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (the lumen and content inside the stomach). Some of these structures had never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% of voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging
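
    The majority-voting step over multi-view 2D predictions reduces to one-hot accumulation; a minimal numpy sketch follows, with the FCN inference itself assumed to happen upstream.

        import numpy as np

        def vote_labels(view_predictions, n_classes):
            """view_predictions: list of 3-D integer label volumes, one per viewpoint."""
            votes = np.zeros(view_predictions[0].shape + (n_classes,), dtype=np.int32)
            for pred in view_predictions:
                votes += np.eye(n_classes, dtype=np.int32)[pred]  # one vote per voxel
            return votes.argmax(axis=-1)                          # majority label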

  5. A Wavelet-Based Unified Power Quality Conditioner to Eliminate Wind Turbine Non-Ideality Consequences on Grid-Connected Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Bijan Rahmani

    2016-05-01

    Full Text Available The integration of renewable power sources with power grids presents many challenges, such as synchronization with the grid, power quality problems and so on. The shunt active power filter (SAPF) can be a solution to address these issues while suppressing grid-end current harmonics and distortions. Nonetheless, available SAPFs work somewhat unpredictably in practice. This is attributed to the dependency of the SAPF controller on nonlinear, complicated equations and on two distorted variables, the load current and voltage, to produce the current reference. This condition worsens when the plant includes wind turbines, which inherently produce 3rd, 5th, 7th and 11th voltage harmonics. Moreover, the inability of the typical phase-locked loop (PLL), used to synchronize the SAPF reference with the power grid, also disrupts SAPF operation. This paper proposes an improved synchronous reference frame (SRF) which is equipped with a wavelet-based PLL to control the SAPF using one variable, the load current. First, the fundamental positive sequence of the source voltage, obtained using a wavelet, is used as the input signal of the PLL through an orthogonal signal generation process. Then, the generated orthogonal signals are applied through the SRF-based compensation algorithm to synchronize the SAPF's reference with the power grid. To further force the remaining uncompensated grid current harmonics to pass through the SAPF, an improved series filter (SF) equipped with a current harmonic suppression loop is proposed. Concurrent operation of the improved SAPF and SF is coordinated through a unified power quality conditioner (UPQC). The DC-link capacitor of the proposed UPQC, used to interconnect a photovoltaic (PV) system to the power grid, is regulated by an adaptive controller. Matlab/Simulink results confirm that the proposed wavelet-based UPQC results in purely sinusoidal grid-end currents with total harmonic distortion (THD) = 1.29%, which leads to high...

  6. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  7. A novel method of protein secondary structure prediction with high segment overlap measure: support vector machine approach.

    Science.gov (United States)

    Hua, S; Sun, Z

    2001-04-27

    We have introduced a new method of protein secondary structure prediction which is based on the theory of support vector machines (SVM). SVM represents a new approach to supervised pattern classification which has been successfully applied to a wide range of pattern recognition problems, including object recognition, speaker identification, gene function prediction from microarray expression profiles, etc. In these cases, the performance of SVM either matches or is significantly better than that of traditional machine learning approaches, including neural networks. The first use of the SVM approach to predict protein secondary structure is described here. Unlike previous studies, we first constructed several binary classifiers, then assembled a tertiary classifier for the three secondary structure states (helix, sheet and coil) based on these binary classifiers. The SVM method achieved a good segment overlap accuracy of SOV = 76.2% through sevenfold cross-validation on a database of 513 non-homologous protein chains with multiple sequence alignments, which outperforms existing methods. Meanwhile, the three-state overall per-residue accuracy Q3 reached 73.5%, which is at least comparable to existing single prediction methods. Furthermore, a useful "reliability index" for the predictions was developed. In addition, SVM has many attractive features, including effective avoidance of overfitting, the ability to handle large feature spaces, information condensing of the given data set, etc. The SVM method is conveniently applicable to many other pattern classification tasks in biology. Copyright 2001 Academic Press.
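
    Assembling a three-state predictor from binary machines, as described above, can be sketched with scikit-learn (the authors used their own SVM setup; here the features X would be window-encoded sequence profiles and y the per-residue states, both assumed to exist).

        import numpy as np
        from sklearn.svm import SVC

        def train_tertiary_classifier(X, y, states=("H", "E", "C")):
            # One binary "state vs rest" SVM per secondary-structure state.
            machines = {s: SVC(kernel="rbf").fit(X, (y == s).astype(int))
                        for s in states}

            def predict(X_new):
                # Assign each residue the state whose binary SVM is most confident.
                scores = np.stack([machines[s].decision_function(X_new)
                                   for s in states])
                return np.array(states)[scores.argmax(axis=0)]

            return predict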

  8. Digital images segmentation: a state of art of the different methods ...

    African Journals Online (AJOL)

    An image is a planar representation of a scene or a 3D object. The primary information associated with each point of the image is transcribed as a grey level or a colour. Image analysis is the set of methods which permits the extraction of pertinent information from the image according to the concerned application, to treat them ...

  9. 77 FR 21574 - Prospective Grant of Exclusive License: Method for Segmenting Medical Images and Detecting...

    Science.gov (United States)

    2012-04-10

    ... HUMAN SERVICES National Institutes of Health Prospective Grant of Exclusive License: Method for...), Department of Health and Human Services, is contemplating the grant of an exclusive license to practice the... curvature characteristics of anatomy to curvature characteristics anomalies. The anomalies in the image can...

  10. A density based segmentation method to determine the coordination number of a particulate system

    NARCIS (Netherlands)

    Nguyen, Thanh T.; Tran, Thanh N.; Willemsz, Tofan A.; Frijlink, Henderik W.; Ervasti, Tuomas; Ketolainen, Jarkko; Maarschalk, Kees van der Voort

    2011-01-01

    The coordination number is an important parameter for understanding particulate systems, especially when agglomerated particles are present. However, experimental determination of the coordination number is not trivial. In this study, we describe a 3D classification method, which is based on the

  11. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina, E-mail: despina.kontos@uphs.upenn.edu [Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2013-12-15

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast, as quantified in MR images, can be predictive of the risk of developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated on a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficient (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers' manual segmentation, the proposed FCM-Atlas method achieves a
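
    The FCM clustering stage on the intensity space of one slice can be written in plain numpy; this sketch uses the usual fuzziness exponent m = 2 and omits the atlas-refinement stage of the actual FCM-Atlas method.

        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
            """x: 1-D array of voxel intensities; returns centers and memberships."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(n_clusters), size=len(x))   # initial memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)               # weighted cluster means
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = d ** (-2.0 / (m - 1.0))                       # inverse-distance update
                u /= u.sum(axis=1, keepdims=True)
            return centers, u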

  12. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    Energy Technology Data Exchange (ETDEWEB)

    Deschamps, T; Schwartz, P; Trebotich, D; Colella, P; Saloner, D; Malladi, R

    2004-12-09

    In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Set methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level, where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require the construction of a structured mesh approximating the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood flow inside the extracted surface without losing any complicated details and without building additional grids.

  13. NCC-RANSAC: A Fast Plane Extraction Method for 3-D Range Data Segmentation

    Science.gov (United States)

    Qian, Xiangfei; Ye, Cang

    2015-01-01

    This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in the case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes; they connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions contradict that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entirety. The RANSAC plane-fitting and recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and is validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than existing RANSAC-based methods. PMID:24771605
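
    The normal-coherence check that distinguishes NCC-RANSAC from plain RANSAC can be sketched as follows; the plane is fitted by least squares via SVD, the per-point normals are assumed to be precomputed from the range data, and the tolerances are placeholders.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through an (N, 3) point array."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            return centroid, vt[-1]                      # point on plane, unit normal

        def coherent_inliers(points, point_normals, dist_tol=0.02, angle_tol_deg=30.0):
            centroid, n = fit_plane(points)
            close = np.abs((points - centroid) @ n) < dist_tol
            cos_tol = np.cos(np.deg2rad(angle_tol_deg))
            coherent = np.abs(point_normals @ n) > cos_tol  # normals agree with plane
            return close & coherent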

  14. A Numerical Comparison of Lagrange and Kane's Methods of an Arm Segment

    Science.gov (United States)

    Rambely, Azmin Sham; Halim, Norhafiza Ab.; Ahmad, Rokiah Rozita

    A 2-D model of a two-link kinematic chain is developed using two formulations of the dynamic equations of motion, namely Kane's and Lagrange's methods. The dynamic equations are reduced to first-order differential equations and solved using the modified Euler and fourth-order Runge-Kutta methods to approximate the shoulder and elbow joint angles during a smash performance in badminton. Results showed that Runge-Kutta produced a better, more exact approximation than the modified Euler method, and both dynamics formulations produced similar absolute errors.
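
    The two integrators compared in this record are standard; generic single-step versions are sketched below, with the arm's equations of motion abstracted into a user-supplied f(t, y).

        import numpy as np

        def modified_euler_step(f, t, y, h):        # Heun's (modified Euler) method
            k1 = f(t, y)
            k2 = f(t + h, y + h * k1)
            return y + 0.5 * h * (k1 + k2)

        def rk4_step(f, t, y, h):                   # classical 4th-order Runge-Kutta
            k1 = f(t, y)
            k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
            k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
            k4 = f(t + h, y + h * k3)
            return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)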

  15. Improving Semantic Updating Method on 3D City Models Using Hybrid Semantic-Geometric 3D Segmentation Technique

    Science.gov (United States)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    CityGML supports levels of detail (LoD) ranging from LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components in CityGML and 3D city models, and provides important information for any analyses. However, more often than not, the semantic information is not available for the 3D city model due to the unstandardized modelling process. One example is where a building is generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).

  16. Performance of five research-domain automated WM lesion segmentation methods in a multi-center MS study

    DEFF Research Database (Denmark)

    de Sitter, Alexandra; Steenwijk, Martijn D; Ruet, Aurélie

    2017-01-01

    (Lesion-TOADS); and k-Nearest Neighbor with Tissue Type Priors (kNN-TTP). Main software parameters were optimized using a training set (N = 18), and formal testing was performed on the remaining patients (N = 52). To evaluate volumetric agreement with the reference segmentations, intraclass correlation......-one-center-out design to exclude the center of interest from the training phase to evaluate the performance of the method on 'unseen' center. RESULTS: Compared to the reference mean lesion volume (4.85 ± 7.29 mL), the methods displayed a mean difference of 1.60 ± 4.83 (Cascade), 2.31 ± 7.66 (LGA), 0.44 ± 4.68 (LPA), 1.......17) or LGA (SI = 0.31 ± 0.23). All methods showed highly similar results when used on data from a center not used in software parameter optimization. CONCLUSION: The performance of the methods in this multi-center MS dataset was moderate, but appeared to be robust even with new datasets from centers...

  17. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    Science.gov (United States)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S. where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and an atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  18. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    Science.gov (United States)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of right heart such as right ventricular volume, ejection fraction, myocardial mass and thickness can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. Shape prior obtained by propagating label from an average atlas using affine registration is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation corresponding to the labeling with minimum energy configuration of the graph is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and Mean absolute distance error.
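
    As a toy illustration of the energy-minimization formulation, the sketch below builds an s-t graph for a 1-D signal with intensity-based unary terms and a constant pairwise smoothness term, then segments it with a min-cut. It is a hedged simplification, not the authors' pipeline: the function name and parameters are invented, networkx stands in for a dedicated max-flow solver, and the atlas-derived shape prior (which would enter as an extra unary term) is omitted.

    ```python
    import networkx as nx
    import numpy as np

    def binary_graphcut_1d(intensity, fg_mean, bg_mean, lam=0.05):
        """Segment a 1-D signal by min-cut: cutting edge s->i pays the cost of
        labelling pixel i background; cutting i->t pays the foreground cost."""
        G = nx.DiGraph()
        n = len(intensity)
        for i, v in enumerate(intensity):
            G.add_edge('s', i, capacity=(v - bg_mean) ** 2)  # paid if i -> background
            G.add_edge(i, 't', capacity=(v - fg_mean) ** 2)  # paid if i -> foreground
            if i + 1 < n:                                    # pairwise smoothness
                G.add_edge(i, i + 1, capacity=lam)
                G.add_edge(i + 1, i, capacity=lam)
        _, (source_side, _) = nx.minimum_cut(G, 's', 't')
        return np.array([i in source_side for i in range(n)])  # True = foreground

    print(binary_graphcut_1d(np.array([0.1, 0.2, 0.9, 0.8, 0.15]), 1.0, 0.0))
    ```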

  19. Variational Methods for Discontinuous Structures: Applications to Image Segmentation, Continuum Mechanics

    CERN Document Server

    Tomarelli, Franco

    1996-01-01

    In recent years many researchers in material science have focused their attention on the study of composite materials, equilibrium of crystals and crack distribution in continua subject to loads. At the same time several new issues in computer vision and image processing have been studied in depth. The understanding of many of these problems has made significant progress thanks to new methods developed in calculus of variations, geometric measure theory and partial differential equations. In particular, new technical tools have been introduced and successfully applied. For example, in order to describe the geometrical complexity of unknown patterns, a new class of problems in calculus of variations has been introduced together with a suitable functional setting: the free-discontinuity problems and the special BV and BH functions. The conference held at Villa Olmo on Lake Como in September 1994 spawned successful discussion of these topics among mathematicians, experts in computer science and material scientis...

  20. A fast alignment method for breast MRI follow-up studies using automated breast segmentation and current-prior registration

    Science.gov (United States)

    Wang, Lei; Strehlow, Jan; Rühaak, Jan; Weiler, Florian; Diez, Yago; Gubern-Merida, Albert; Diekmann, Susanne; Laue, Hendrik; Hahn, Horst K.

    2015-03-01

    In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in both current and prior studies, a reliable alignment method between follow-up studies is desirable. However, long time interval, different scanners and imaging protocols, and varying breast compression can result in a large deformation, which challenges the registration process. In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, the volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
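
    The two agreement metrics quoted above are easy to reproduce. Below is a minimal sketch (function names are placeholders, not the authors' code): Dice overlap between two binary breast masks and the mean distance between annotated marker pairs.

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice Similarity Coefficient between two binary masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def mean_marker_error(pts_current, pts_prior_warped):
        """Mean Euclidean distance between paired landmarks (N x 3, in mm)."""
        return np.linalg.norm(pts_current - pts_prior_warped, axis=1).mean()
    ```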

  1. A method for automatic segmentation and splitting of hyperspectral images of raspberry plants collected in field conditions

    Directory of Open Access Journals (Sweden)

    Dominic Williams

    2017-11-01

    Full Text Available Abstract Hyperspectral imaging is a technology that can be used to monitor plant responses to stress. Hyperspectral images have a full spectrum for each pixel in the image, 400–2500 nm in this case, giving detailed information about the spectral reflectance of the plant. Although this technology has been used in laboratory-based controlled lighting conditions for early detection of plant disease, the transfer of such technology to imaging plants in field conditions presents a number of challenges. These include problems caused by varying light levels and difficulties of separating the target plant from its background. Here we present an automated method that has been developed to segment raspberry plants from the background using a selected spectral ratio combined with edge detection. Graph theory was used to minimise a cost function to detect the continuous boundary between uninteresting plants and the area of interest. The method includes automatic detection of a known reflectance tile which was kept constantly within the field of view for all image scans. A method to split images containing rows of multiple raspberry plants into individual plants was also developed. Validation was carried out by comparison of plant height and density measurements with manually scored values. A reasonable correlation was found between these manual scores and measurements taken from the images (r² = 0.75 for plant height). These preliminary steps are an essential requirement before detailed spectral analysis of the plants can be achieved.
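
    A hedged sketch of the band-ratio idea follows; the band indices, threshold, and function name are placeholders, and the paper's actual ratio, reference-tile normalisation, and graph-based boundary search are not reproduced.

    ```python
    import numpy as np
    from skimage import filters

    def plant_mask(cube, band_a, band_b, thresh=1.2):
        """Rough vegetation mask from a hyperspectral cube (rows x cols x bands):
        threshold a two-band reflectance ratio, and return a Sobel edge map of
        the ratio image that a graph search could use to trace plant borders."""
        ratio = cube[..., band_a] / (cube[..., band_b] + 1e-9)  # avoid divide-by-zero
        mask = ratio > thresh
        edges = filters.sobel(ratio)
        return mask, edges
    ```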

  2. Combination of the Level-Set Methods with the Contourlet Transform for the Segmentation of the IVUS Images

    Directory of Open Access Journals (Sweden)

    Hassen Lazrag

    2012-01-01

    Full Text Available Intravascular ultrasound (IVUS) imaging is a catheter-based medical methodology establishing itself as a useful modality for studying atherosclerosis. The detection of lumen and media-adventitia boundaries in IVUS images constitutes an essential step towards the reliable quantitative diagnosis of atherosclerosis. In this paper, a novel scheme is proposed to automatically detect lumen and media-adventitia borders. This segmentation method is based on the level-set model and contourlet multiresolution analysis. The contourlet transform decomposes the original image into low-pass components and band-pass directional bands. The circular Hough transform (CHT) is adopted in the low-pass bands to yield the initial lumen and media-adventitia contours. Anisotropic diffusion filtering is then used in the band-pass subbands to suppress noise and preserve arterial edges. Finally, curve evolution of the level-set functions is used to obtain the final contours. The proposed method is experimentally evaluated on 20 simulated images and 30 real images from human coronary arteries. It is demonstrated that the mean distance error and the relative mean distance error decreased by 5.30 pixels and 7.45%, respectively, compared with those of a traditional level-set model. These results reveal that the proposed method can automatically and accurately extract the two vascular boundaries.

  3. Effectiveness of different correction methods of pyeloureteral segment according to the data of diuretic ultrasonography

    Directory of Open Access Journals (Sweden)

    D. Z. Vorobets

    2015-08-01

    Full Text Available Methods are proposed for estimating the effectiveness of open and laparoscopic pyeloplasty, as well as of endo-urological palliative methods (laser resection, balloon dilatation and endopyelotomy), by determining the anatomical and functional peculiarities of the renal pelvis and pyeloureteral junction with ultrasound diagnostics during forced diuresis. Changes in the area of the renal pelvis, the velocity of the post-furosemide increase in renal pelvis volume, the rate of its drainage, and changes in the diameter of the pyeloureteral junction were studied. This methodical approach is non-invasive, informative and simple in application. It is shown that the dispersions of samples of patients after open surgery do not differ from the dispersions of samples of the same patients before the operation on such parameters as the area of the renal pelvis before furosemide administration, the area of the renal pelvis 15 minutes after furosemide administration, the rate of drainage after furosemide, and the original diameter of the pyeloureteral junction. This may indicate the stability of surgery results. For example, a renal pelvis that was larger relative to kidney size before the operation corresponded to a larger pelvis after the operation; a renal pelvis that drained faster before the operation also drained faster after the operation. Variation in the pelvis area measured 40 minutes after furosemide, the percent rate of longitudinal pelvis area change, the rate of the post-furosemide increase in pelvis area, and the diameter of the pyeloureteral junction 15 minutes after furosemide administration differed significantly after open pyeloplasty compared with the variation in the same parameters for the same patients before the operations. A more substantial difference was observed in the same patients before and after Anderson-Hynes surgery in the parameters of the relative rate of post-furosemide pelvis drainage and the increase in diameter of the pyeloureteral junction.

  4. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy

    Science.gov (United States)

    Yang, Xiaofeng; Rossi, Peter; Ogunleye, Tomi; Marcus, David M.; Jani, Ashesh B.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2014-01-01

    Purpose: The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering a high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. Methods: The authors’ approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1–3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS–CT image fusion. After TRUS–CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of their approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. Results: For the phantom study, the target registration error (TRE) of gold markers was 0.41 ± 0.11 mm. For the ten patients, the TRE of gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors’ approach and the MRI-based volume was 7.28% ± 0
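
    Landmark-driven fusion of this kind reduces to a rigid least-squares fit between corresponding points. The sketch below is illustrative (names are placeholders, and the paper's deformable TRUS-CT step is not shown): it computes a Kabsch rigid transform from marker coordinates and the resulting target registration error (TRE).

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rotation R and translation t with dst ~= src @ R.T + t
        (Kabsch algorithm on N x 3 corresponding landmark arrays)."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, dc - R @ sc

    def target_registration_error(src, dst, R, t):
        """Per-marker Euclidean error after applying the fitted transform."""
        return np.linalg.norm(src @ R.T + t - dst, axis=1)
    ```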

  5. A level-set method for pathology segmentation in fluorescein angiograms and en face retinal images of patients with age-related macular degeneration

    Science.gov (United States)

    Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz

    2013-03-01

    The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using the classical level-set algorithms for segmentation involves the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to automatically define the contour by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome compared with the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
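
    The key idea, seeding the level set from projection profiles rather than at random, can be sketched with scikit-image's Chan-Vese implementation. The function name, disc radius, and the reduction of the profiles to a single peak are simplifying assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from skimage.segmentation import chan_vese

    def segment_with_profile_init(img, radius=20.0):
        """Chan-Vese segmentation of a 2-D grayscale image with the initial
        level set centred where the row/column intensity projections peak."""
        r = int(np.argmax(img.sum(axis=1)))          # row-profile peak
        c = int(np.argmax(img.sum(axis=0)))          # column-profile peak
        yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
        init = radius - np.hypot(yy - r, xx - c)     # signed distance to a disc
        return chan_vese(img.astype(float), init_level_set=init)
    ```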

  6. Segmental neurofibromatosis

    OpenAIRE

    Galhotra, Virat; Sheikh, Soheyl; Jindal, Sanjeev; Singla, Anshu

    2014-01-01

    Segmental neurofibromatosis is a rare disorder, characterized by neurofibromas or café-au-lait macules limited to one region of the body. Its occurrence on the face is extremely rare and only a few cases of segmental neurofibromatosis over the face have been described so far. We present a case of segmental neurofibromatosis involving the buccal mucosa, tongue, cheek, ear, and neck on the right side of the face.

  7. A new osteophyte segmentation method with applications to an anterior cruciate ligament transection rabbit femur model via micro-CT imaging

    Science.gov (United States)

    Liang, G.; Elkins, J. M.; Coimbra, A.; Duong, L. T.; Williams, D. S.; Sonka, M.; Saha, P. K.

    2010-03-01

    An osteophyte is an additional bony growth on a normal bone surface that limits or stops motion in a deteriorating joint. Detection and quantification of osteophytes from CT images is helpful in assessing disease status as well as treatment and surgery planning. However, it is difficult to segment osteophytes from healthy bones using simple thresholding or edge/texture features in CT imaging. Here, we present a new method, based on the active shape model (ASM), to solve this problem and evaluate its application to ex vivo μCT images in an ACLT rabbit femur model. The common idea behind most ASM-based segmentation methods is first to build a parametric shape model from a training dataset and, during application, find a shape instance from the model that optimally fits the target image. However, this poses a fundamental difficulty for the current application because a diseased bone shape is significantly altered at regions with osteophyte deposition, misguiding the ASM and eventually leading to suboptimal segmentation results. Here, we introduce a new partial ASM method that uses the bone shape over healthy regions and extrapolates its shape over the diseased region following the underlying shape model. Once the healthy bone region is detected, the osteophyte is segmented by subtracting the partial-ASM-derived shape from the overall diseased shape. Also, a new semi-automatic method is presented in this paper for efficiently building a 3D shape model of the rabbit femur. The method has been applied to μCT images of 2-, 4-, and 8-week post-ACLT and sham-treated rabbit femurs, and results of reproducibility and sensitivity analyses of the new osteophyte segmentation method are presented.

  8. The segmented arch approach: a method for orthodontic treatment of a severe Class III open-bite malocclusion.

    Science.gov (United States)

    Espinar-Escalona, Eduardo; Barrera-Mora, José María; Llamas-Carreras, José María; Ruiz-Navarro, María Belén

    2013-02-01

    An open bite is a common malocclusion, and it is generally associated with several linked etiologic factors. When establishing the treatment plan, it is essential to consider every aspect of the various etiologic causes and their evolution; this will help to correct it. This article reports the case of a girl aged 10.7 years with a skeletal Class III malocclusion and an open bite. The treatment mechanics were based on compensatory dental changes performed to close the bite and correct the skeletal Class III malocclusion. The patient had a deep maxillary deficiency, and the lower facial third was severely enlarged. In this article, we aimed to describe a simple mechanical approach that will close the bite through changes in the occlusal plane (segmentation of arches). It is an extremely simple method that is easily tolerated by the patient. It not only closes the bite effectively but also helps to correct the unilateral or bilateral lack of occlusal interdigitation between the dental arches. A Class III patient with an anterior open bite is shown in this article to illustrate the effectiveness of these treatment mechanics. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  9. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    Science.gov (United States)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation always have the limitations that they are very sensitive to noise injection and that the performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, which is composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on fuzzy membership function and uses it to replace its absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, the non-local adaptive spatial constraints of pixels are introduced to avoid noise interference with the search of local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has excellent capability of decreasing the influence of uneven illumination on images and noise injection and behaves more robustly than several classical global and local thresholding methods.
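
    For flavour, here is a much-reduced sketch of entropy-driven threshold selection using the classical De Luca-Termini fuzzy entropy; the paper's intuitionistic extension, non-local spatial constraints, and gray-wave decomposition are deliberately omitted, and the membership function and grid of 64 candidate levels are our assumptions.

    ```python
    import numpy as np

    def fuzzy_entropy_threshold(gray):
        """Pick the grey level minimising the De Luca-Termini fuzzy entropy
        of a simple distance-based membership function."""
        g = gray.ravel().astype(float)
        span = max(g.max() - g.min(), 1e-9)
        best_t, best_h = None, np.inf
        for t in np.linspace(g.min(), g.max(), 64)[1:-1]:
            mu = 1.0 / (1.0 + np.abs(g - t) / span)      # membership in [0.5, 1]
            mu = np.clip(mu, 1e-9, 1.0 - 1e-9)
            h = -np.mean(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
            if h < best_h:
                best_t, best_h = t, h
        return best_t
    ```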

  10. Semiautomatic methods for segmentation of the proliferative tumour volume on sequential FLT PET/CT images in head and neck carcinomas and their relation to clinical outcome

    NARCIS (Netherlands)

    Arens, A.I.J.; Troost, E.G.C.; Hoeben, B.A.W.; Grootjans, W.; Lee, J.A.; Gregoire, V.; Hatt, M.; Visvikis, D.; Bussink, J.; Oyen, W.J.G.; Kaanders, J.H.A.M.; Visser, E.P.

    2014-01-01

    PURPOSE: Radiotherapy of head and neck cancer induces changes in tumour cell proliferation during treatment, which can be depicted by the PET tracer (18)F-fluorothymidine (FLT). In this study, three advanced semiautomatic PET segmentation methods for delineation of the proliferative tumour volume

  11. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    ), the classical reconstruction methods suffer from their inability to handle limited and/or corrupted data. For many analysis tasks, computationally demanding segmentation methods are used to automatically segment an object, after using a simple reconstruction method as a first step. In the literature, methods...... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction...... problem. The tests showed a clear improvement for realistic materials simulations and that the one-stage method was clearly more robust toward noise. The noise-robustness result could be a step toward making this method more applicable for lab-scale experiments. We have introduced a segmentation...

  12. Wavelet-Based Peak Detection and a New Charge Inference Procedure for MS/MS Implemented in ProteoWizard’s msConvert

    Science.gov (United States)

    2015-01-01

    We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1–100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets. PMID:25411686
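
    SciPy ships a generic continuous-wavelet-transform peak picker that illustrates the principle; it is not CantWaiT itself, and the synthetic spectrum and width range below are arbitrary.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 100.0, 2000)
    spectrum = (np.exp(-(x - 30.0) ** 2 / 0.5)            # peak near 30
                + 0.6 * np.exp(-(x - 70.0) ** 2 / 0.5)    # peak near 70
                + 0.02 * rng.standard_normal(x.size))     # baseline noise

    idx = find_peaks_cwt(spectrum, widths=np.arange(2, 20))
    print(x[idx])   # the wavelet-based picker reports positions close to 30 and 70
    ```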

  13. An empirical method for automatic determination of maximum number of segments in DMPO-based IMRT for Head and Neck cases.

    Science.gov (United States)

    Ranganathan, Vaitheeswaran; Maria Das, K Joseph

    2016-01-01

    An empirical scheme called "anatomy-guided segment counting (AGSC)" is proposed for automatic selection of the maximum number of segments (NOS) for direct machine parameter optimization (DMPO). DMPO requires the user to define the maximum NOS in order to proceed with the optimization process. To date, there is no established approach to arrive at an optimal and case-specific maximum NOS in DMPO, and this step is largely left to the planner's experience. The AGSC scheme uses beam's-eye views (BEVs) and other planning parameters to decide on an appropriate number of segments for each beam. The proposed algorithm was tested in eight H&N cases. We used the Auto Plan feature available in Pinnacle3 (version 9.10.0) for driving the DMPO optimization. There is about a 13% reduction in the composite objective value in AGSC plans compared with plans employing 6 NOS per beam, and about a 10% increase in the composite objective value in AGSC plans compared with plans employing 8 NOS per beam. On the delivery efficiency front, there is about a 10% increase in NOS in AGSC plans compared with plans employing a 6-NOS-per-beam specification. Similarly, there is about a 19% reduction in NOS in AGSC plans compared with plans employing an 8-NOS-per-beam specification. The study demonstrates that the AGSC method allows an appropriate number of segments to be specified in the DMPO module, accounting for the complexity of a given case.

  14. Segmental Neurofibromatosis

    Directory of Open Access Journals (Sweden)

    Yesudian Devakar

    1997-01-01

    Full Text Available Segmental neurofibromatosis is a rare variant of neurofibromatosis in which the lesions are confined to one segment or dermatome of the body. They resemble classical neurofibromas in their morphology, histopathology and electron microscopy. However, systemic associations are usually absent. We report one such case with these classical features.

  15. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy.

    Science.gov (United States)

    Zhou, Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J

    2010-03-01

    In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume
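
    The mutual-information similarity behind the global stage is compact to express. A minimal histogram-based sketch follows (the bin count and function name are assumptions; the deformable mesh stage is not shown).

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """MI between two intensity images from their joint histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)        # marginal of img_a
        py = p.sum(axis=0, keepdims=True)        # marginal of img_b
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
    ```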

  16. SU-F-J-27: Segmentation of Prostate CBCT Images with Implanted Calypso Transponders Using Double Haar Wavelet Transform

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y [Shandong Communication and Media College, Jinan, Shandong (China); Saleh, Z; Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States); Song, Y; Obcemea, C [Memorial Sloan-Kettering Cancer Center, Sleepy Hollow, NY (United States); Chan, M [Memorial Sloan-Kettering Cancer Center, Basking Ridge, NJ (United States); Li, X [Memorial Sloan Kettering Cancer Center, Rockville Centre, NY (United States); Happersett, L [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Qian, X [North Shore Long Island Jewish health System, North New Hyde Park, NY (United States)

    2016-06-15

    Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as more artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate of CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had 3 Calypso transponder beacons implanted, and the patients were set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (i.e. rectum, bladder, and prostate) were contoured by a trained expert, and these served as ground truth. For a given CBCT, the moving window-based Double Haar transformation is applied first to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, a cluster-algorithm-based adaptive thresholding is applied to the low frequency components of the wavelet coefficients, and a Lee-filter-theory-based adaptive thresholding is applied to the high frequency components. In the next step, the wavelet reconstruction is applied to the thresholded wavelet coefficients. A binary/segmented image of the object of interest is therefore obtained. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Considering all patients, the bladder has DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81–0.95], [0.76–0.99], [0.83–0.94], [0.02–0.21]. For prostate, the ranges are [0.77–0.93], [0.84–0.97], [0.68–0.92], [0.1–0.46]. For rectum, the ranges are [0.72–0.93], [0.57–0.99], [0.73–0.98], [0.03–0.42]. Conclusion: The proposed algorithm appeared effective in segmenting prostate CBCT images in the presence of Calypso artifacts. However, it is not robust in two scenarios: 1) rectum with a significant amount of gas; 2) prostate with very low contrast. Model
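
    A much-reduced single-level analogue of the wavelet stage is sketched below with PyWavelets. The moving-window Double Haar transform, the cluster- and Lee-filter-based adaptive thresholds, and the interactive seed point of the abstract are replaced here by a plain Haar DWT, a universal threshold, and a fixed intensity tolerance, so this is only a sketch of the general idea.

    ```python
    import numpy as np
    import pywt

    def haar_threshold_segment(img, seed_value, tol=0.1):
        """Denoise a 2-D image with a single-level Haar DWT plus soft
        thresholding, then return a binary mask near a seed intensity."""
        cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, float), 'haar')
        sigma = np.median(np.abs(cD)) / 0.6745            # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
        cH, cV, cD = (pywt.threshold(c, thr, mode='soft') for c in (cH, cV, cD))
        smooth = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
        return np.abs(smooth - seed_value) < tol * abs(seed_value)
    ```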

  17. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. 3D dento-maxillary osteolytic lesion and active contour segmentation pilot study in CBCT: semi-automatic vs manual methods.

    Science.gov (United States)

    Vallaeys, K; Kacem, A; Legoux, H; Le Tenier, M; Hamitouche, C; Arbab-Chirani, R

    2015-01-01

    This study was designed to evaluate the reliability of a semi-automatic segmentation tool for dento-maxillary osteolytic image analysis compared with manually defined segmentation in CBCT scans. Five CBCT scans were selected from patients for whom periapical radiolucency images were available. All images were obtained using a ProMax® 3D Mid Planmeca (Planmeca Oy, Helsinki, Finland) and were acquired with 200-μm voxel size. Two clinicians performed the manual segmentations. Four operators applied three different semi-automatic procedures. The volumes of the lesions were measured. An analysis of dispersion was made for each procedure and each case. An ANOVA was used to evaluate the operator effect. Non-paired t-tests were used to compare semi-automatic procedures with the manual procedure. Statistical significance was set at α = 0.01. The coefficients of variation for the manual procedure were 2.5-3.5% on average. There was no statistical difference between the two operators. The results of the manual procedures can therefore be used as a reference. For the semi-automatic procedures, the dispersion around the mean can be elevated depending on the operator and case. ANOVA revealed significant differences between the operators for the three techniques according to cases. Region-based segmentation was only comparable with the manual procedure for delineating a circumscribed osteolytic dento-maxillary lesion. The semi-automatic segmentations tested are interesting but show limitations for complex surface structures. A methodology that combines the strengths of both methods could be of interest and should be tested. The improvement in image analysis that is possible through the segmentation procedure and CBCT image quality could be of value.

  19. An automated image-based method of 3D subject-specific body segment parameter estimation for kinetic analyses of rapid movements.

    Science.gov (United States)

    Sheets, Alison L; Corazza, Stefano; Andriacchi, Thomas P

    2010-01-01

    Accurate subject-specific body segment parameters (BSPs) are necessary to perform kinetic analyses of human movements with large accelerations, or no external contact forces or moments. A new automated topographical image-based method of estimating segment mass, center of mass (CM) position, and moments of inertia is presented. Body geometry and volume were measured using a laser scanner, then an automated pose and shape registration algorithm segmented the scanned body surface, and identified joint center (JC) positions. Assuming the constant segment densities of Dempster, thigh and shank masses, CM locations, and moments of inertia were estimated for four male subjects with body mass indexes (BMIs) of 19.7-38.2. The subject-specific BSP were compared with those determined using Dempster and Clauser regression equations. The influence of BSP and BMI differences on knee and hip net forces and moments during a running swing phase were quantified for the subjects with the smallest and largest BMIs. Subject-specific BSP for 15 body segments were quickly calculated using the image-based method, and total subject masses were overestimated by 1.7-2.9%. When compared with the Dempster and Clauser methods, image-based and regression-estimated thigh BSP varied more than the shank parameters. Thigh masses and hip JC to thigh CM distances were consistently larger, and each transverse moment of inertia was smaller using the image-based method. Because the shank had larger linear and angular accelerations than the thigh during the running swing phase, shank BSP differences had a larger effect on calculated intersegmental forces and moments at the knee joint than thigh BSP differences did at the hip. It was the net knee kinetic differences caused by the shank BSP differences that were the largest contributors to the hip variations. Finally, BSP differences produced larger kinetic differences for the subject with larger segment masses, suggesting that parameter accuracy is more

  20. Real-Time Wavelet-Based Coordinated Control of Hybrid Energy Storage Systems for Denoising and Flattening Wind Power Output

    Directory of Open Access Journals (Sweden)

    Tran Thai Trung

    2014-10-01

    Full Text Available Since the penetration level of wind energy is continuously increasing, the negative impact caused by the fluctuation of wind power output needs to be carefully managed. This paper proposes a novel real-time coordinated control algorithm based on a wavelet transform to mitigate both short-term and long-term fluctuations by using a hybrid energy storage system (HESS). The short-term fluctuation is eliminated by using an electric double-layer capacitor (EDLC), while the wind-HESS system output is kept constant during each 10-min period by a Ni-MH battery (NB). State-of-charge (SOC) control strategies for both EDLC and NB are proposed to maintain the SOC level of storage within safe operating limits. A ramp rate limitation (RRL) requirement is also considered in the proposed algorithm. The effectiveness of the proposed algorithm has been tested by using real-time simulation. The simulation model of the wind-HESS system is developed in the real-time digital simulator (RTDS/RSCAD) environment. The proposed algorithm is also implemented as a user-defined model of the RSCAD. The simulation results demonstrate that the HESS with the proposed control algorithm can indeed assist in dealing with the variation of wind power generation. Moreover, the proposed method shows better performance in smoothing out the fluctuation and managing the SOC of battery and EDLC than the simple moving average (SMA)-based method.

  1. An efficient and high fidelity method for amplification, cloning and sequencing of complete tospovirus genomic RNA segments

    Science.gov (United States)

    Amplification and sequencing of the complete M- and S-RNA segments of Tomato spotted wilt virus and Impatiens necrotic spot virus as a single fragment is useful for whole genome sequencing of tospoviruses co-infecting a single host plant. It avoids issues associated with overlapping amplicon-based ...

  2. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  3. Motor imagery EEG classification with optimal subset of wavelet based common spatial pattern and kernel extreme learning machine.

    Science.gov (United States)

    Hyeong-Jun Park; Jongin Kim; Beomjun Min; Boreom Lee

    2017-07-01

    Performance of motor imagery based brain-computer interfaces (MI BCIs) greatly depends on how the features are extracted. Various versions of filter-bank-based common spatial pattern have been proposed and used in MI BCIs. Filter-bank-based common spatial pattern yields a larger number of features than the original common spatial pattern. As the number of features increases, MI BCIs using filter-bank-based common spatial pattern can face overfitting problems. In this study, we used an eigenvector centrality feature selection method, wavelet packet decomposition common spatial pattern, and a kernel extreme learning machine to improve the performance of MI BCIs and avoid overfitting. Furthermore, the computational speed was improved by using the kernel extreme learning machine.

  4. Smartphone Imaging in Ophthalmology: A Comparison with Traditional Methods on the Reproducibility and Usability for Anterior Segment Imaging.

    Science.gov (United States)

    Chen, David Zy; Tan, Clement Wt

    2016-01-01

    This study aimed to compare the reproducibility and usability of anterior segment images taken from a smartphone stabilised on a slit-lamp with those taken from a custom-mounted slit-lamp camera. This was a prospective, single-blind comparative digital imaging validation study. Digital photographs of patients with cataract were taken using a smartphone camera (an iPhone 5) on a telescopic mount and a Canon EOS 10D anterior segment camera. Images were graded and compared according to the Lens Opacification Classification System III (LOCS III). A total of 440 anterior segment images were graded independently by 2 ophthalmologists, 2 residents and 2 medical students. Intraclass correlation (ICC) between the iPhone and anterior segment camera images was fair for nuclear opalescence (NO) and nuclear colour (NC), and excellent for cortical (C) and posterior subcapsular (PSC) (NO: ICC 0.40, 95% CI, 0.16 to 0.57; NC: ICC 0.47, 95% CI, 0.16 to 0.66; C: ICC 0.76, 95% CI, 0.71 to 0.81; PSC: ICC 0.81, 95% CI, 0.76 to 0.85). There was no difference in graders' impression of confidence and image usability between the two cameras (P = 0.66 and P = 0.58, respectively). Anterior segment images taken from an iPhone have good reproducibility for retro-illuminated images, but fair reproducibility for NO and NC under low light settings. There were no differences in grader confidence and subjective image suitability.

  5. SU-F-J-112: Clinical Feasibility Test of An RF Pulse-Based MRI Method for the Quantitative Fat-Water Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S; Wloch, J; Pirkola, M [William Beaumont Hospital, Royal Oak, MI (United States)

    2016-06-15

    Purpose: Quantitative fat-water segmentation is important not only because of the clinical utility of fat-suppressed MRI images in better detecting lesions of clinical significance (in the midst of bright fat signal) but also because of the possible physical need, in which CT-like images based on the materials’ photon attenuation properties may have to be generated from MR images; particularly, as in the case of an MR-only radiation oncology environment to obtain radiation dose calculation, or as in the case of a hybrid PET/MR modality to obtain an attenuation correction map for quantitative PET reconstruction. The majority of such quantitative fat-water segmentations have been performed by utilizing the Dixon method and its variations, which have to enforce the proper settings (often predefined) of echo time (TE) in the pulse sequences. Therefore, such methods have been unable to be directly combined with those ultrashort TE (UTE) sequences that, taking advantage of very low TE values (∼ tens of microseconds), might be beneficial to directly detect bones. Recently, an RF pulse-based method (http://dx.doi.org/10.1016/j.mri.2015.11.006), termed the PROD pulse method, was introduced as a method of quantitative fat-water segmentation that does not have to depend on predefined TE settings. Here, the clinical feasibility of this method is verified in brain tumor patients by combining the PROD pulse with several sequences. Methods: In a clinical 3T MRI, the PROD pulse was combined with turbo spin echo (e.g. TR=1500, TE=16 or 60, ETL=15) or turbo field echo (e.g. TR=5.6, TE=2.8, ETL=12) sequences without specifying TE values. Results: The fat-water segmentation was possible without having to set specific TE values. Conclusion: The PROD pulse method is clinically feasible. Although not yet combined with UTE sequences in our laboratory, the method is potentially compatible with UTE sequences, and thus might be useful to directly segment fat, water, bone and air.

  6. On the theory of the proton dipolar-correlation effect as a method for investigation of segmental displacement in polymer melts

    Science.gov (United States)

    Lozovoi, A.; Petrova, L.; Mattea, C.; Stapf, S.; Rössler, E. A.; Fatkullin, N.

    2017-08-01

    A thorough theoretical description of the recently suggested method [A. Lozovoi et al. J. Chem. Phys. 144, 241101 (2016)] based on the proton NMR dipolar-correlation effect, allowing for the investigation of segmental diffusion in polymer melts, is presented. It is shown that the initial rise of the proton dipolar-correlation build-up function, constructed from Hahn echo signals measured at times t and t/2, contains additive contributions from both inter- and intramolecular magnetic dipole-dipole interactions. The intermolecular contribution depends on the relative mean-squared displacement of polymer segments from different macromolecules, which provides an opportunity for an experimental study of segmental translational motions in the millisecond range, which falls outside the typical range accessible by other methods, i.e., neutron scattering or NMR spin echo with magnetic field gradients. A comparison with the other two proton NMR methods based on transverse spin relaxation phenomena, i.e., solid echo and double quantum resonance, shows that the initial rise of the build-up functions in all the discussed methods is essentially identical and differs only in numerical coefficients. In addition, it is argued that correlation functions constructed in the same manner as the dipolar-correlation build-up function can be applied for an experimental determination of a mean relaxation rate in the case of systems possessing multi-exponential magnetization decay.

  7. Optimizing multi-resolution segmentation scale using empirical methods: Exploring the sensitivity of the supervised discrepancy measure Euclidean distance 2 (ED2)

    Science.gov (United States)

    Witharana, Chandi; Civco, Daniel L.

    2014-01-01

    Multiresolution segmentation (MRS) has proven to be one of the most successful image segmentation algorithms in the geographic object-based image analysis (GEOBIA) framework. This algorithm is relatively complex and user-dependent; scale, shape, and compactness are the main parameters available to users for controlling the algorithm. A plurality of segmentation results is common because each parameter may take a range of values within its parameter space, and values may be combined differently across parameters. Finding optimal parameter values through a trial-and-error process is commonly practiced at the expense of time and labor; thus, several alternative supervised and unsupervised methods for automatic parameter setting have been proposed and tested. In the case of supervised empirical assessments, discrepancy measures are employed for computing measures of dissimilarity between a reference polygon and an image object candidate. Evidently, the reliability of the optimal-parameter prediction relies heavily on the sensitivity of the segmentation quality metric. The idea behind pursuing optimal parameter setting is that, for instance, a given scale setting provides image object candidates different from those of another scale setting; thus, by design the supervised quality metric should capture this difference. In this exploratory study, we selected the Euclidean distance 2 (ED2) metric, a recently proposed supervised metric whose main design goal is to optimize the geometrical discrepancy (potential segmentation error (PSE)) and the arithmetic discrepancy between image objects and reference polygons (number-of-segments ratio (NSR)) in two-dimensional Euclidean space, as a candidate for investigating the validity and efficacy of empirical discrepancy measures for finding the optimal scale parameter setting of the MRS algorithm. We chose test image scenes from four different space-borne sensors with varying spatial resolutions and scene contents and systematically
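
    For readers who want the metric itself, ED2 combines the two discrepancies in Euclidean fashion, as in the following sketch; the scalar-summary signature is our simplification of the per-polygon bookkeeping in the original definition.

    ```python
    import numpy as np

    def ed2(n_ref, n_seg, mismatch_area, ref_area):
        """Euclidean distance 2: PSE is the over/under-segmented area relative
        to the reference area, NSR the arithmetic discrepancy in counts."""
        pse = mismatch_area / ref_area                 # potential segmentation error
        nsr = abs(n_ref - n_seg) / n_ref               # number-of-segments ratio
        return float(np.hypot(pse, nsr))               # sqrt(PSE**2 + NSR**2)
    ```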

  8. Mobile healthcare for automatic driving sleep-onset detection using wavelet-based EEG and respiration signals.

    Science.gov (United States)

    Lee, Boon-Giin; Lee, Boon-Leng; Chung, Wan-Young

    2014-09-26

    Driving drowsiness is a major cause of traffic accidents worldwide and has drawn the attention of researchers in recent decades. This paper presents an application for in-vehicle non-intrusive mobile-device-based automatic detection of driver sleep-onset in real time. The proposed application classifies the driving mental fatigue condition by analyzing the electroencephalogram (EEG) and respiration signals of a driver in the time and frequency domains. Our concept is heavily reliant on mobile technology, particularly remote physiological monitoring using Bluetooth. Respiratory events are gathered, and eight-channel EEG readings are captured from the frontal, central, and parietal (Fpz-Cz, Pz-Oz) regions. EEGs are preprocessed with a Butterworth bandpass filter, and features are subsequently extracted from the filtered EEG signals by employing the wavelet-packet-transform (WPT) method to categorize the signals into four frequency bands: α, β, θ, and δ. A mutual information (MI) technique selects the most descriptive features for further classification. The reduction in the number of prominent features improves the sleep-onset classification speed in the support vector machine (SVM) and results in a high sleep-onset recognition rate. Test results reveal that the combined use of the EEG and respiration signals results in 98.6% recognition accuracy. Our proposed application explores the possibility of processing long-term multi-channel signals.
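
    The preprocessing step is standard and easy to reproduce; a minimal zero-phase Butterworth band-pass with SciPy is shown below (cut-offs, order, and sampling rate are illustrative, not the paper's exact settings).

    ```python
    from scipy.signal import butter, filtfilt

    def bandpass(signal, low_hz, high_hz, fs, order=4):
        """Zero-phase Butterworth band-pass applied along the last axis."""
        nyq = fs / 2.0
        b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype='band')
        return filtfilt(b, a, signal, axis=-1)

    # e.g. keep 0.5-30 Hz of an EEG channel sampled at 256 Hz:
    # clean = bandpass(raw_channel, 0.5, 30.0, 256.0)
    ```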

  9. Mobile Healthcare for Automatic Driving Sleep-Onset Detection Using Wavelet-Based EEG and Respiration Signals

    Directory of Open Access Journals (Sweden)

    Boon-Giin Lee

    2014-09-01

    Full Text Available Driving drowsiness is a major cause of traffic accidents worldwide and has drawn the attention of researchers in recent decades. This paper presents an application for in-vehicle non-intrusive mobile-device-based automatic detection of driver sleep-onset in real time. The proposed application classifies the driving mental fatigue condition by analyzing the electroencephalogram (EEG) and respiration signals of a driver in the time and frequency domains. Our concept is heavily reliant on mobile technology, particularly remote physiological monitoring using Bluetooth. Respiratory events are gathered, and eight-channel EEG readings are captured from the frontal, central, and parietal (Fpz-Cz, Pz-Oz) regions. EEGs are preprocessed with a Butterworth bandpass filter, and features are subsequently extracted from the filtered EEG signals by employing the wavelet-packet-transform (WPT) method to categorize the signals into four frequency bands: α, β, θ, and δ. A mutual information (MI) technique selects the most descriptive features for further classification. The reduction in the number of prominent features improves the sleep-onset classification speed in the support vector machine (SVM) and results in a high sleep-onset recognition rate. Test results reveal that the combined use of the EEG and respiration signals results in 98.6% recognition accuracy. Our proposed application explores the possibility of processing long-term multi-channel signals.

  10. Comparative Study of Wavelet-Based Unsupervised Ocular Artifact Removal Techniques for Single-Channel EEG Data.

    Science.gov (United States)

    Khatun, Saleha; Mahajan, Ruhi; Morshed, Bashir I

    2016-01-01

    Electroencephalogram (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. Artifacts, such as eye blink activities, can corrupt these neuronal signals. While ocular artifact (OA) removal is well investigated for multiple-channel EEG systems, in alignment with the recent momentum toward minimalistic EEG systems for use in natural environments, we investigate unsupervised and effective removal of OA from single-channel streaming raw EEG data. In this paper, the unsupervised wavelet transform (WT) decomposition technique was systematically evaluated for the effectiveness of OA removal in a single-channel EEG system. A set of seven raw EEG data sets was analyzed. Two commonly used WT methods, Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), were applied. Four WT basis functions, namely, haar, coif3, sym3, and bior4.4, were considered for OA removal with a universal threshold and a statistical threshold (ST). To quantify OA removal efficacy from single-channel EEG, five performance metrics were utilized: correlation coefficients, mutual information, signal-to-artifact ratio, normalized mean square error, and time-frequency analysis. The temporal and spectral analysis shows that, among the 16 combinations, the optimal combination could be DWT with ST using coif3 or bior4.4. This paper demonstrates that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data for real-time applications.
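
    The following sketch illustrates the general DWT-with-statistical-threshold pipeline evaluated in the study: decompose a channel, suppress coefficients whose magnitude is large enough to be artifact-dominated, and reconstruct. The 1.5-standard-deviation threshold rule and the choice to zero (rather than shrink) offending coefficients are illustrative assumptions; the paper's exact ST formulation may differ.

```python
import numpy as np
import pywt

def remove_ocular_artifacts(eeg: np.ndarray, wavelet: str = "coif3",
                            level: int = 6) -> np.ndarray:
    """Unsupervised ocular-artifact suppression for one EEG channel.
    Eye blinks produce large-magnitude wavelet coefficients, so any
    coefficient above a per-level statistical threshold is zeroed and
    the signal is reconstructed from the rest."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    cleaned = []
    for c in coeffs:
        thr = 1.5 * np.std(c)                  # statistical threshold (assumed rule)
        cleaned.append(np.where(np.abs(c) > thr, 0.0, c))
    # reconstruction can be slightly longer due to padding; trim to input length
    return pywt.waverec(cleaned, wavelet)[: len(eeg)]
```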

  11. Double-chamber rotating bioreactor for dynamic perfusion cell seeding of large-segment tracheal allografts: comparison to conventional static methods.

    Science.gov (United States)

    Haykal, Siba; Salna, Michael; Zhou, Yingzhe; Marcus, Paula; Fatehi, Mostafa; Frost, Geoff; Machuca, Tiago; Hofer, Stefan O P; Waddell, Thomas K

    2014-08-01

    Tracheal transplantation with a long-segment recellularized tracheal allograft has previously been performed without the need for immunosuppressive therapy. Recipients' mesenchymal stromal cells (MSC) and tracheal epithelial cells (TEC) were harvested, cultured, expanded, and seeded on a donor trachea within a bioreactor. Prior techniques used for cellular seeding have involved only static-seeding methods. Here, we describe a novel bioreactor for recellularization of long-segment tracheae. Tracheae were recellularized with epithelial cells on the luminal surface and bone marrow-derived MSC on the external surface. We used dynamic perfusion seeding for both cell types and demonstrated an increase in both cellular counts and homogeneity scores compared with traditional methods. Despite these improvements, orthotopic transplantation of these scaffolds revealed no labeled cells at postoperative day 3 and a lack of re-epithelialization within the first 2 weeks. The animals in this study had postoperative respiratory distress and tracheal collapse that was incompatible with life.

  12. Automatic small bowel tumor diagnosis by using multi-scale wavelet-based analysis in wireless capsule endoscopy images

    Directory of Open Access Journals (Sweden)

    Barbosa Daniel C

    2012-01-01

    Full Text Available Abstract Background Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places where conventional endoscopy is unable to. However, the output of this technique is an 8-hour video, whose analysis by the expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate CE exams faster and more accurately is an important technical challenge and an excellent economical opportunity. Method The set of features proposed in this paper to code textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both joint and marginal non-Gaussianity of second-order textural measures, higher-order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second- and higher-order moments of textural measures are computed from co-occurrence matrices of images synthesized by the inverse wavelet transform of the wavelet transform containing only the selected scales for the three color channels. The dimensionality of the data is reduced by using Principal Component Analysis. Results The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study regarding the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
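
    The co-occurrence-based texture step lends itself to a compact illustration with standard tools: build a grey-level co-occurrence matrix per colour channel, read second-order measures from it, and compress the resulting vectors with PCA before classification. The sketch below uses scikit-image and scikit-learn; the distances, angles, chosen properties, and PCA dimensionality are assumptions, and the wavelet scale-selection stage of the paper is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Second-order textural measures from a co-occurrence matrix for
    one 8-bit grey image (e.g. one colour channel of a CE frame)."""
    glcm = graycomatrix(gray_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Stack per-frame feature vectors and reduce dimensionality, as in the paper:
# X = np.vstack([glcm_features(f) for f in frames])
# X_low = PCA(n_components=8).fit_transform(X)   # 8 components is an assumption
```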

  13. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo imaging of cancer. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
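
    A minimal sketch of the two ingredients named above, k-means background estimation plus an adaptive threshold on the lesion volume, might look as follows. The three-cluster background model and the 40%-of-peak thresholding rule are illustrative assumptions; the actual method is calibrated on the NEMA IQ Phantom and uses its own parameterisation.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_mtv(pet: np.ndarray, frac: float = 0.40) -> np.ndarray:
    """Fully automatic, threshold-based MTV mask for a PET sub-volume
    containing a lesion: estimate background with k-means, then keep
    voxels above background + frac * (peak - background)."""
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pet.reshape(-1, 1))
    background = float(km.cluster_centers_.min())    # lowest-uptake cluster
    thr = background + frac * (float(pet.max()) - background)
    return pet >= thr                                # boolean MTV mask
```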

  14. Chan-Vese Segmentation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-08-01

    Full Text Available While many segmentation methods rely in some way on edge detection, the "Active Contours Without Edges" method by Chan and Vese ignores edges completely. Instead, the method optimally fits a two-phase piecewise constant model to the given image. The segmentation boundary is represented implicitly with a level set function, which allows the segmentation to handle topological changes more easily than explicit snake methods. This article describes the level set formulation of the Chan–Vese model and its numerical solution using a semi-implicit gradient descent. We also discuss the Chan–Sandberg–Vese method, a straightforward extension of Chan–Vese for vector-valued images.
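
    For reference, the two-phase piecewise-constant model fitted by the method minimises the standard Chan–Vese energy, written below in its usual notation (c_1 and c_2 are the mean intensities inside and outside the contour C; an optional area penalty term is often included and set to zero; the weights shown are the conventional symbols, not values prescribed by this article):

```latex
E(c_1, c_2, C) = \mu \, \mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert I(\mathbf{x}) - c_1 \rvert^2 \, d\mathbf{x}
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert I(\mathbf{x}) - c_2 \rvert^2 \, d\mathbf{x}
```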

  15. Anisotropic diffusion filter based edge enhancement for the segmentation of carotid intima-media layer in ultrasound images using variational level set method without re-initialisation.

    Science.gov (United States)

    Sumathi, K; Anandh, K R; Mahesh, V; Ramakrishnan, S

    2014-01-01

    In this work an attempt has been made to enhance the edges and segment the boundary of the intima-media layer of the Common Carotid Artery (CCA) using an anisotropic diffusion filter and the level set method. Ultrasound B-mode longitudinal images of normal and abnormal common carotid arteries are used in this study. The images are subjected to an anisotropic diffusion filter to generate an edge map. This edge map is used as a stopping boundary in the variational level set method without re-initialisation to segment the intima-media layer. Geometric features are extracted from this layer and analyzed statistically. Results show that anisotropic diffusion filtering is able to extract the edges in both normal and abnormal images. The obtained edge maps are found to have high contrast and sharp edges. The edge-based variational level set method is able to segment the intima-media layer precisely from the common carotid artery. The extracted geometrical features, such as major axis and extent, are found to be statistically significant in differentiating normal and abnormal images. Thus, this study seems to be clinically useful in the diagnosis of cardiovascular disease.
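
    The edge-map generation step can be illustrated with a classical Perona–Malik diffusion scheme, which smooths speckle in homogeneous regions while preserving edges; the gradient magnitude of the diffused image then serves as the stopping boundary. The iteration count, conduction constant, and exponential edge-stopping function below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter: int = 20,
                          kappa: float = 30.0, lam: float = 0.2) -> np.ndarray:
    """Perona-Malik anisotropic diffusion of a 2-D image: iteratively
    averages each pixel with its four neighbours, weighted so that
    diffusion stops across strong intensity edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (np.roll wraps at the
        # borders, which is acceptable for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```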

  16. Validation tools for image segmentation

    Science.gov (United States)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics that compare the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
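
    The similarity metrics that drive such validation are typically overlap measures between the automatic and manual masks; the Dice coefficient below is the most common example (shown as a generic illustration, the article employs a collection of such metrics within its Design of Experiments framework).

```python
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity coefficient between a binary automatic
    segmentation and a manual reference: 2|A & M| / (|A| + |M|),
    where 1.0 means perfect overlap and 0.0 means none."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + m.sum()
    if denom == 0:
        return 1.0      # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, m).sum() / denom
```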

  17. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus, selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  18. Weighted entropy for segmentation evaluation

    Science.gov (United States)

    Khan, Jesmin F.; Bhuiyan, Sharif M.

    2014-04-01

    In many image, video, and computer vision systems, image segmentation is an essential part. Significant research has been done in image segmentation, and a number of quantitative evaluation methods have already been proposed in the literature. However, segmentation evaluation is often subjective, meaning that it has been done visually or qualitatively. A segmentation evaluation method based on entropy, which is objective and simple to implement, is proposed in this work. Weighted self- and mutual-entropy measures are proposed to quantify the dissimilarity of the pixels among the segmented regions and the similarity within a region. This evaluation technique gives a score that can be used to compare different segmentation algorithms for the same image, to compare the segmentation results of a given algorithm across different images, or to find the best-suited values of the parameters of a segmentation algorithm for a given image. The simulation results show that the proposed method can identify over-segmentation, under-segmentation, and good segmentation.
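
    An entropy-based evaluation of this kind can be sketched generically: reward regions whose pixels are homogeneous (low within-region entropy) while penalising trivial over-segmentation through a region-size term. The score below follows that spirit but does not reproduce the paper's particular self/mutual-entropy weighting, which is not specified here; the bin count and the additive combination are assumptions.

```python
import numpy as np

def region_entropy_score(img: np.ndarray, labels: np.ndarray) -> float:
    """Generic entropy-based segmentation score: the area-weighted sum
    of within-region intensity entropies plus a layout entropy over
    region sizes.  Lower within-region entropy rewards homogeneous
    segments; the layout term penalises trivial over-segmentation."""
    n = img.size
    expected, layout = 0.0, 0.0
    for r in np.unique(labels):
        mask = labels == r
        w = mask.sum() / n                        # region weight (area share)
        hist, _ = np.histogram(img[mask], bins=64)
        p = hist[hist > 0] / mask.sum()
        expected += w * -(p * np.log2(p)).sum()   # within-region entropy
        layout += -w * np.log2(w)                 # layout (region-size) entropy
    return expected + layout
```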

  19. MO-F-CAMPUS-J-04: Tissue Segmentation-Based MR Electron Density Mapping Method for MR-Only Radiation Treatment Planning of Brain

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H [Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Lee, Y [Sunnybrook Odette Cancer Centre, Toronto, Ontario (Canada); Ruschin, M [Odette Cancer Centre, Toronto, ON (Canada); Karam, I [Sunnybrook Odette Cancer Center, Toronto, Ontario (Canada); Sahgal, A [University of Toronto, Toronto, ON (Canada)

    2015-06-15

    Purpose: Automatically derive electron density