WorldWideScience

Sample records for multiresolution hough transform

  1. Guaranteed convergence of the Hough transform

    Science.gov (United States)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations; convergence to the global maximum can be guaranteed only if sufficient a priori knowledge about the smoothness of the objective function is available. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
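
    The open question the abstract highlights is how fine the (ρ, θ) quantization must be. A minimal voting implementation makes the role of the quantization intervals explicit; the step sizes below are illustrative placeholders, not the bounds derived in the paper.

```python
import numpy as np

def hough_lines(edge_points, img_diag, d_theta=np.pi / 360, d_rho=1.0):
    """Vote over a (rho, theta) grid; d_theta and d_rho are the
    quantization intervals whose admissible size the paper analyzes."""
    thetas = np.arange(0.0, np.pi, d_theta)
    rhos = np.arange(-img_diag, img_diag, d_rho)
    acc = np.zeros((rhos.size, thetas.size), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                      # one rho per theta bin
        idx = np.clip(np.round((rho + img_diag) / d_rho).astype(int),
                      0, rhos.size - 1)
        acc[idx, np.arange(thetas.size)] += 1            # one vote per theta
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[peak[0]], thetas[peak[1]], acc
```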

  2. Feature Extraction Using the Hough Transform

    OpenAIRE

    Ferguson, Tara; Baker, Doran

    2002-01-01

    This paper contains a brief literature survey of applications and improvements of the Hough transform, a description of the Hough transform and a few of its algorithms, and simulation examples of line and curve detection using the Hough transform.

  3. Hough transform search for continuous gravitational waves

    International Nuclear Information System (INIS)

    Krishnan, Badri; Papa, Maria Alessandra; Sintes, Alicia M.; Schutz, Bernard F.; Frasca, Sergio; Palomba, Cristiano

    2004-01-01

    This paper describes an incoherent method to search for continuous gravitational waves based on the Hough transform, a well-known technique used for detecting patterns in digital images. We apply the Hough transform to detect patterns in the time-frequency plane of the data produced by an earth-based gravitational wave detector. Two different flavors of searches will be considered, depending on the type of input to the Hough transform: either Fourier transforms of the detector data or the output of a coherent matched-filtering type search. We present the technical details for implementing the Hough transform algorithm for both kinds of searches, their statistical properties, and their sensitivities

  4. Neutrosophic Hough Transform

    Directory of Open Access Journals (Sweden)

    Ümit Budak

    2017-12-01

    Hough transform (HT) is a useful tool for both the pattern recognition and image processing communities. From the pattern recognition perspective, it can extract unique features for the description of various shapes, such as lines, circles, and ellipses. From the image processing perspective, dozens of applications can be handled with HT, such as lane detection for autonomous cars and blood cell detection in microscope images. Although HT is a straightforward shape detector for a given image, its shape detection ability is low in noisy images. To alleviate this weakness on noisy images and improve its shape detection performance, in this paper we propose the neutrosophic Hough transform (NHT). As has been shown previously, neutrosophy-theory-based image processing applications are successful in noisy environments. To this end, the Hough space is initially transferred into the neutrosophic (NS) domain by calculating the NS membership triples (T, I, and F). An indeterminacy filter is constructed that uses neighborhood information to remove the indeterminacy in the spatial neighborhood of the neutrosophic Hough space. Potential peaks are detected by thresholding the neutrosophic Hough space, and these peak locations are then used to detect the lines in the image domain. Extensive experiments on noisy and noise-free images are performed to show the efficiency of the proposed NHT algorithm. We also compare the proposed NHT with traditional HT and fuzzy HT methods on a variety of images. The obtained results show the efficiency of the proposed NHT on noisy images.
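
    The pipeline in the abstract (NS membership triples, indeterminacy filtering, thresholding) can be sketched roughly as follows. The membership definitions here — T as the normalized accumulator, I as the normalized local standard deviation, F = 1 − T — are simple stand-ins; the paper's exact formulas may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neutrosophic_peaks(acc, win=5, i_thresh=0.5, peak_frac=0.8):
    # T: truth membership from the normalized Hough accumulator (assumed form).
    t = (acc - acc.min()) / (acc.max() - acc.min() + 1e-9)
    # I: indeterminacy from the normalized local standard deviation (assumed).
    mean = uniform_filter(t, win)
    var = np.maximum(uniform_filter(t * t, win) - mean * mean, 0.0)
    i = np.sqrt(var) / (np.sqrt(var).max() + 1e-9)
    # Indeterminacy filtering: replace highly indeterminate cells by their
    # neighborhood mean (F = 1 - T is implied and unused here).
    t_filt = np.where(i > i_thresh, mean, t)
    return t_filt > peak_frac * t_filt.max()    # candidate peak mask
```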

  5. Road Detection by Using a Generalized Hough Transform

    Directory of Open Access Journals (Sweden)

    Weifeng Liu

    2017-06-01

    Road detection plays a key role in remote sensing image analytics. The Hough transform (HT) is a typical method for road detection, especially straight-line road detection. Although many variants of the Hough transform have been reported, it remains a great challenge to develop a Hough transform algorithm with low computational complexity and runtime. In this paper, we propose a generalized Hough transform (i.e., Radon transform) implementation for road detection in remote sensing images. Specifically, we present a dictionary learning method to approximate the Radon transform. The proposed approximation method treats a Radon transform as a linear transform, which then facilitates parallel implementation of the Radon transform for multiple images. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to traditional algorithms in terms of accuracy and computational complexity.
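
    The linearity the paper exploits can be illustrated directly: once the Radon transform is written as a matrix, a whole batch of images is transformed with one matrix product. The sketch below builds the matrix naively from impulse responses via scikit-image's radon; the paper instead learns a compact dictionary approximation, which this does not reproduce.

```python
import numpy as np
from skimage.transform import radon

def radon_matrix(n, angles):
    """Build the Radon transform as an explicit matrix for n x n images."""
    impulse = np.zeros((n, n))
    cols = []
    for k in range(n * n):
        impulse.ravel()[k] = 1.0
        cols.append(radon(impulse, theta=angles, circle=False).ravel())
        impulse.ravel()[k] = 0.0
    return np.stack(cols, axis=1)          # shape: (n_sinogram_bins, n*n)

angles = np.linspace(0.0, 180.0, 32, endpoint=False)
A = radon_matrix(16, angles)               # tiny image size, for illustration
batch = np.random.rand(16 * 16, 100)       # 100 flattened 16x16 images
sinograms = A @ batch                      # all Radon transforms in one product
```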

  6. Locating An IRIS From Image Using Canny And Hough Transform

    Directory of Open Access Journals (Sweden)

    Poorvi Bhatt

    2017-11-01

    Iris recognition, a relatively new biometric technology, has great advantages, such as variability, stability, and security; thus it is the most promising technology for high-security environments. The system proposed here is a simple one, designed and implemented to locate the iris in an image using the Hough transform algorithm. A Canny edge detector is used to produce an edge image that serves as input to the Hough transform. To convey the general idea of the Hough transform, the Hough transform for circles is also implemented. RGB values at the peaks of the 3-D accumulator array for the inner and outer circles are computed. Finally, some suggestions are made to improve the system, and its performance is discussed.
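
    A minimal OpenCV sketch of the pupil/limbus localization described above. Note that cv2.HoughCircles with HOUGH_GRADIENT runs Canny internally (param1 is its high threshold), so it takes the grayscale image rather than an explicit edge map as in the paper; the file name and radius ranges are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
blur = cv2.medianBlur(img, 5)

# Inner (pupil) boundary: a small, dark circle.
pupil = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                         param1=100, param2=30, minRadius=20, maxRadius=60)
# Outer (limbus) boundary: a larger circle around the same center.
limbus = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                          param1=100, param2=30, minRadius=60, maxRadius=140)

if pupil is not None and limbus is not None:
    px, py, pr = np.round(pupil[0, 0]).astype(int)
    lx, ly, lr = np.round(limbus[0, 0]).astype(int)
    print("pupil:", (px, py, pr), "limbus:", (lx, ly, lr))
```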

  7. Parallel Monte Carlo Search for Hough Transform

    Science.gov (United States)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of spatial forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization: finding the peak of a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and in memory. Additionally, its detection effectiveness can be reduced in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.

  8. An improved Hough transform-based fingerprint alignment approach

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-11-01

    An improved Hough Transform-based fingerprint alignment approach is presented, which improves computing time and memory usage while giving accurate alignment parameters (rotation and translation). This is achieved by studying the strengths...

  9. Mobile robot motion estimation using Hough transform

    Science.gov (United States)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling, and translation are solved separately, breaking the problem of estimating mobile robot localization down into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
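
    The rotation step can be illustrated in isolation: rotating the robot shifts the θ coordinate of every line in the Hough space by the same amount, so the rotation can be recovered as the circular shift that best aligns the θ-marginals of the two accumulators. A sketch under that assumption, not the paper's exact estimator:

```python
import numpy as np

def estimate_rotation(acc_ref, acc_cur, d_theta):
    """acc_*: (n_rho, n_theta) Hough vote arrays from two range scans."""
    h_ref = acc_ref.sum(axis=0).astype(float)   # theta-marginal histograms
    h_cur = acc_cur.sum(axis=0).astype(float)
    # Circular cross-correlation via FFT; the argmax is the bin shift that
    # best aligns the two marginals (sign depends on the angle convention).
    corr = np.fft.irfft(np.fft.rfft(h_cur) * np.conj(np.fft.rfft(h_ref)),
                        n=h_ref.size)
    return np.argmax(corr) * d_theta             # estimated rotation angle
```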

  10. Vanishing points detection using combination of fast Hough transform and deep learning

    Science.gov (United States)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm comprises a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image; this distribution can be used to find vanishing point candidates. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be efficiently implemented for mobile GPUs and CPUs.

  11. Generalized Hough Transform for Object Classification in the Maritime Domain

    Science.gov (United States)

    2015-12-01

    ...used to generate a representation of the object as a Hough coordinate table by using the GHT algorithm. The table is then reformatted to a contour map...

  12. Human eye localization using the modified Hough transform

    Czech Academy of Sciences Publication Activity Database

    Dobeš, M.; Martínek, J.; Skoupil, D.; Dobešová, Z.; Pospíšil, Jaroslav

    2006-01-01

    Vol. 117 (2006), pp. 468-473. ISSN 0030-4026. Institutional research plan: CEZ:AV0Z10100522. Keywords: human eye localization; modified Hough transform; eye iris and eyelid shape determination. Subject RIV: BH - Optics, Masers, Lasers. Impact factor: 0.585, year: 2006

  13. Detecting circumscribed lesions with the Hough transform

    Energy Technology Data Exchange (ETDEWEB)

    Groshong, B.R.; Kegelmeyer, W.P., Jr.

    1996-01-11

    We have designed and implemented a circumscribed lesion detection algorithm, based on the Hough Transform, which detects zero or more approximately circular structures in a mammogram over a range of radii from a few pixels to nearly the size of the breast. We address the geometrical behavior of peaks in the Hough parameter space (x, y, r) both at the true radius of a circular structure in the image (r = r_0) and as the parameter r passes through this radius. In addition, we evaluate peaks in the Hough parameter space by re-analyzing the underlying mammogram in the vicinity of the circular disk indicated by each peak. Disks suggested by the resulting peaks are accumulated in a feature image, scaled by a measure of their quality. These results are then rectified with respect to image contrast extremes and average value. The result is a feature with a continuously scaled pixel-level output which suggests the likelihood that a pixel is located inside a circular structure, irrespective of the radius of the structure and the overall mammogram contrast. These features are evaluated with fast qualitative and quantitative performance metrics which permit circumscribed lesion detection features to be evaluated initially without a full end-to-end classification experiment.

  14. Circle Hough transform implementation for dots recognition in braille cells

    Science.gov (United States)

    Jacinto Gómez, Edwar; Montiel Ariza, Holman; Martínez Sarmiento, Fredy Hernán.

    2017-02-01

    This paper shows a technique based on the Circle Hough Transform (CHT) to achieve optical Braille recognition (OBR). Unlike other work on the same topic, this one uses the Hough Transform for the recognition and transcription of Braille cells, proving CHT to be an appropriate technique for coping with various non-systematic factors that can affect the process, such as the type of paper on which the text to be transcribed is printed, lighting conditions, input image resolution, and flaws introduced during the capture process, which is performed with a scanner. Tests were performed on a local database containing text generated by sighted people and some transcripts by blind people, with the support of the National Institute for Blind People (INCI, for its Spanish acronym) in Colombia.

  15. A novel approach to Hough Transform for implementation in fast triggers

    Energy Technology Data Exchange (ETDEWEB)

    Pozzobon, Nicola, E-mail: nicola.pozzobon@pd.infn.it [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Dipartimento di Fisica ed Astronomia “G. Galilei”, Università degli Studi di Padova, via F. Marzolo 8, 35131 Padova (Italy); Montecassiano, Fabio [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Zotto, Pierluigi [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via F. Marzolo 8, 35131 Padova (Italy); Dipartimento di Fisica ed Astronomia “G. Galilei”, Università degli Studi di Padova, via F. Marzolo 8, 35131 Padova (Italy)

    2016-10-21

    Telescopes of position-sensitive detectors are common layouts in charged-particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting a reduction of the storage and computing resources needed in current, or near-future, state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a standard Hough Transform, with particular emphasis on application to muon detectors. In both cases, an original readout implementation is modeled.

  16. A novel approach to Hough Transform for implementation in fast triggers

    International Nuclear Information System (INIS)

    Pozzobon, Nicola; Montecassiano, Fabio; Zotto, Pierluigi

    2016-01-01

    Telescopes of position-sensitive detectors are common layouts in charged-particle tracking, and programmable logic devices, such as FPGAs, represent a viable choice for the real-time reconstruction of track segments in such detector arrays. A compact implementation of the Hough Transform for fast triggers in High Energy Physics, exploiting a parameter reduction method, is proposed, targeting a reduction of the storage and computing resources needed in current, or near-future, state-of-the-art FPGA devices, while retaining high resolution over a wide range of track parameters. The proposed approach is compared to a standard Hough Transform, with particular emphasis on application to muon detectors. In both cases, an original readout implementation is modeled.

  17. EFFECTIVE MULTI-RESOLUTION TRANSFORM IDENTIFICATION FOR CHARACTERIZATION AND CLASSIFICATION OF TEXTURE GROUPS

    Directory of Open Access Journals (Sweden)

    S. Arivazhagan

    2011-11-01

    Texture classification is important in computer image analysis applications for the characterization or classification of images based on local spatial variations of intensity or color. Texture can be defined as consisting of mutually related elements. This paper proposes an experimental approach for identifying a suitable multi-resolution transform for the characterization and classification of different texture groups based on statistical and co-occurrence features derived from multi-resolution transformed sub-bands. The statistical and co-occurrence feature sets are extracted for various multi-resolution transforms, such as the Discrete Wavelet Transform (DWT), the Stationary Wavelet Transform (SWT), the Double Density Wavelet Transform (DDWT), and the Dual Tree Complex Wavelet Transform (DTCWT), and then the transform that maximizes the texture classification performance for the particular texture group is identified.

  18. Lane detection using Randomized Hough Transform

    Science.gov (United States)

    Mongkonyong, Peerawat; Nuthong, Chaiwat; Siddhichai, Supakorn; Yamakita, Masaki

    2018-01-01

    According to reports of the Royal Thai Police between 2006 and 2015, unconscious lane changing is one of the most common accident causes. Many methods have been considered to solve this problem; the Lane Departure Warning System (LDWS) is one of the potential solutions. LDWS is a mechanism designed to warn the driver when the vehicle begins to move out of its current lane. LDWS contains many parts, including lane boundary detection, driver warning, and lane marker tracking. This article focuses on the lane boundary detection part. The proposed lane boundary detection finds the lines in each image of the input video and selects the lane markers of the road surface from those lines. The Standard Hough Transform (SHT) and the Randomized Hough Transform (RHT) are considered in this article; both are used to extract lines from an image. SHT extracts lines from all of the edge pixels, whereas RHT extracts only the lines defined by point pairs randomly picked from the edge pixels, which reduces time and memory usage compared with SHT. Increasing the threshold value in RHT raises the vote limit for a line with a high probability of being a lane marker, but it also consumes more time and memory. To compare SHT and RHT with different threshold values, 500 frames of input video from a front-facing car camera were processed. In the comparison, the accuracy and the computational time of RHT were similar to those of SHT.
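
    A minimal sketch of the RHT voting scheme the article compares against SHT: random pairs of edge pixels each define a single (ρ, θ) candidate, and a candidate reaching the vote threshold is accepted as a line. Parameter values are illustrative.

```python
import numpy as np
from collections import defaultdict

def rht_lines(edge_points, n_samples=5000, threshold=30,
              d_theta=np.pi / 180, d_rho=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(edge_points, dtype=float)
    votes, lines = defaultdict(int), []
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        theta = np.arctan2(x1 - x2, y2 - y1) % np.pi    # line normal direction
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        key = (int(round(theta / d_theta)), int(round(rho / d_rho)))
        votes[key] += 1
        if votes[key] == threshold:                     # accept each line once
            lines.append((key[1] * d_rho, key[0] * d_theta))  # (rho, theta)
    return lines
```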

  19. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    Science.gov (United States)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer separate cancellation of distortions caused by translations and rotations, and scale invariance is provided by suitable normalization. The proposed system extends the capabilities of Hough-transform-based detection from straight lines only to areas bounded by edges. A very compact optical design is achieved by a microlens array processor that accepts incoherent light as direct optical input and realizes the computationally expensive connections in a massively parallel fashion. Our newly developed algorithm extracts rotation- and translation-invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initializing the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industrial inspection applications; at present we have demonstrated detection of six different machined parts in real time. Our method yields very promising detection results, with more than 96% of parts correctly classified.

  20. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    Science.gov (United States)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots, and other noise, and the gray-level transitions of the images are not obvious, so general iris localization methods fail on them. A method of iris localization based on the Canny operator and the gradient Hough transform is proposed. First, the images are pre-processed; then, from the gradient information of the images, the inner and outer edges of the iris are coarsely localized using the Canny operator; finally, the gradient Hough transform is applied to precisely localize the inner and outer edges of the iris. The experimental results show that our algorithm localizes the inner and outer edges of the iris well, has strong anti-interference ability, greatly reduces the localization time, and has high accuracy and stability.

  1. Hough transform methods used for object detection

    International Nuclear Information System (INIS)

    Qussay A Salih; Abdul Rahman Ramli; Md Mahmud Hassan Prakash

    2001-01-01

    The Hough transform (HT) is a robust parameter estimator of multi-dimensional features in images. It is an established technique which evidences a shape by mapping image edge points into a parameter space, and it is used to isolate curves of a given shape in an image. The classical HT requires that the curve be specified in some parametric form and, hence, is most commonly used in the detection of regular curves; it has, however, been generalized so that it is capable of detecting arbitrary curved shapes. The main advantage of this transform technique is that it is very tolerant of gaps in the actual object boundaries. Starting from the classical HT for the detection of lines, we indicate how it can be applied to the detection of arbitrary shapes; sometimes the straight-line HT is efficient enough to detect features such as artificial curves. The HT is an established technique for extracting geometric shapes based on the duality between the points on a curve and their parameters, and it has been developed for extracting simple geometric shapes such as lines, circles, and ellipses as well as arbitrary shapes. The HT provides robustness against discontinuous or missing features: points or edges are mapped into a partitioned parameter (Hough) space as individual votes, where peaks denote the feature of interest, represented in a non-analytic tabular form. The main drawback of the HT technique is its computational requirement, which exhibits exponential growth of memory space and processing time as the number of parameters used to represent a primitive increases. For this reason, most of the research on the HT has focused on reducing the computational burden of extracting arbitrary shapes under more general transformations. We include an overview of the detection methods: image processing programs are frequently required to detect lines for particle classification in an industrial setting, and standard algorithms for this line detection are described.

  2. Track recognition in 4 μs by a systolic trigger processor using a parallel Hough transform

    International Nuclear Information System (INIS)

    Klefenz, F.; Noffz, K.H.; Conen, W.; Zoz, R.; Kugel, A.; Maenner, R.; Univ. Heidelberg

    1993-01-01

    A parallel Hough transform processor has been developed that identifies circular particle tracks in a 2D projection of the OPAL jet chamber. The high-speed requirements imposed by the 8-bunch-crossing mode of LEP could be fulfilled by computing the starting angle and the radius of curvature for each well-defined track in less than 4 μs. The system consists of a Hough transform processor that determines well-defined tracks, and a Euler processor that counts their number by applying the Euler relation to the thresholded result of the Hough transform. A prototype of the systolic processor has been built that handles one sector of the jet chamber. It consists of 35 × 32 processing elements loaded into 21 programmable gate arrays (XILINX) and runs at a clock rate of 40 MHz. It has been tested offline with about 1,000 original OPAL events; no deviations from the offline simulation have been found, and a trigger efficiency of 93% has been obtained. The prototype, together with the associated drift-time measurement unit, has been installed at the OPAL detector at LEP, and 100k events have been sampled to evaluate the system under detector conditions.

  3. The fuzzy Hough Transform-feature extraction in medical images

    International Nuclear Information System (INIS)

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B.; McPherson, D.D.; Gotteiner, N.L.

    1994-01-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of the boundaries of an internal organ and, from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results demonstrating that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.
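
    The fuzzy-voting idea can be sketched compactly: each point votes for a neighborhood of accumulator cells weighted by a membership kernel, which is equivalent to smoothing a crisp accumulator with that kernel. The Gaussian membership below is an assumption; the paper's membership functions and its generalized (model-based) variant are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuzzy_hough(acc, sigma=1.5):
    """acc: crisp (n_rho, n_theta) vote array from a standard HT.
    Smoothing with a Gaussian kernel is equivalent to letting every vote
    spread over its neighborhood with Gaussian fuzzy membership."""
    fuzzy_acc = gaussian_filter(acc.astype(float), sigma=sigma)
    peak = np.unravel_index(fuzzy_acc.argmax(), fuzzy_acc.shape)
    return fuzzy_acc, peak   # smoothed votes and approximate boundary peak
```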

  4. ANNSVM: A Novel Method for Graph-Type Classification by Utilization of Fourier Transformation, Wavelet Transformation, and Hough Transformation

    Directory of Open Access Journals (Sweden)

    Sarunya Kanjanawattana

    2017-07-01

    Image classification plays a vital role in many areas of study, such as data mining and image processing; however, serious problems collectively referred to as the curse of dimensionality have been encountered in previous studies as factors that reduce system performance. Furthermore, we also confront the problem of differing graph characteristics even among graphs belonging to the same type. In this study, we propose a novel method of graph-type classification. Using our approach, we open up a new solution for high-dimensional images and address the problem of differing characteristics by converting graph images to one dimension with a discrete Fourier transformation and creating numeric datasets using wavelet and Hough transformations. Moreover, we introduce a new classifier, a combination of artificial neural networks (ANNs) and support vector machines (SVMs) which we call ANNSVM, to enhance accuracy. The objectives of our study are to propose an effective graph-type classification method, including finding a new data representation to use for classification instead of two-dimensional images, and to investigate what features make our data separable. To evaluate the method, we conducted five experiments with different methods and datasets. The input dataset we focused on was a numeric dataset containing wavelet coefficients and the outputs of a Hough transformation. From our experimental results, we observed that the highest accuracy, 0.91, was achieved using our method with Coiflet 1.

  5. Polar exponential sensor arrays unify iconic and Hough space representation

    Science.gov (United States)

    Weiman, Carl F. R.

    1990-01-01

    The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.

  6. Improved Hough search for gravitational wave pulsars

    International Nuclear Information System (INIS)

    Sintes, Alicia M; Krishnan, Badri

    2006-01-01

    We describe an improved version of the Hough transform search for continuous gravitational waves from isolated neutron stars assuming the input to be short segments of Fourier transformed data. The method presented here takes into account possible nonstationarities of the detector noise and the amplitude modulation due to the motion of the detector. These two effects are taken into account for the first stage only, i.e. the peak selection, to create the time-frequency map of our data, while the Hough transform itself is performed in the standard way

  7. Evolved Multiresolution Transforms for Optimized Image Compression and Reconstruction Under Quantization

    National Research Council Canada - National Science Library

    Moore, Frank

    2005-01-01

    ...) First, this research demonstrates that a GA can evolve a single set of coefficients describing a single matched forward and inverse transform pair that can be used at each level of a multiresolution...

  8. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures.

    Science.gov (United States)

    Guan, Jungang; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Mattausch, Hans Jürgen

    2017-01-30

    The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT for usage in actual applications are its computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with a parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), for computing the strength of each line by a voting mechanism, is mapped onto a 1-dimensional array with regular increments of θ. Then, this Hough space is divided into a number of parallel parts. The computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are therefore executable in parallel. In addition, a synchronized initialization of the Hough space further increases the speed of straight-line detection, so that XGA video processing becomes possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In the application of road-lane detection, the average processing speed of this HT implementation is 5.4 ms per XGA frame at a 200 MHz working frequency.
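
    A functional software sketch of the angle-partitioned voting: the θ range is split into chunks, each worker fills its own slice of the accumulator, and the slices are concatenated. This only mirrors the structure of the FPGA design; the pipelining and synchronized initialization are hardware-specific and not reproduced.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def vote_chunk(xs, ys, thetas, n_rho, rho_max):
    """Fill the accumulator columns for one chunk of theta values.
    xs, ys: 1-D arrays of edge pixel coordinates."""
    rho = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    idx = np.clip(((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).round()
                  .astype(int), 0, n_rho - 1)
    acc = np.zeros((n_rho, thetas.size), dtype=np.int32)
    for j in range(thetas.size):                  # one column per theta
        acc[:, j] = np.bincount(idx[:, j], minlength=n_rho)
    return acc

def parallel_hough(xs, ys, n_theta=360, n_rho=512, rho_max=1000.0, workers=4):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    chunks = np.array_split(thetas, workers)      # angle-level partition
    with ThreadPoolExecutor(workers) as pool:
        parts = pool.map(lambda t: vote_chunk(xs, ys, t, n_rho, rho_max),
                         chunks)
    return np.hstack(list(parts))                 # full (n_rho, n_theta) space
```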

  9. W-transform method for feature-oriented multiresolution image retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, M.K.; Lin, B. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-07-01

    Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephones, and various imaging devices become integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis on whole images and do not exploit the local features present in the images. Since the W-transform is characterized by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but also can retrieve images that contain features specified in the query images, even if the retrieved images as a whole might be very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable to managing large-scale image systems such as video databases and medical image databases.
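
    A sketch of local-feature multiresolution indexing, using a Haar DWT from PyWavelets as a stand-in for the W-transform (which, unlike the standard DWT, handles arbitrary image sizes without periodicity assumptions). Each subband is tiled and a histogram is kept per tile, so queries can match local features rather than whole-image statistics; tile grid and bin counts are illustrative.

```python
import numpy as np
import pywt

def local_signature(img, levels=2, grid=4, bins=16):
    """Concatenate per-tile histograms of all wavelet detail subbands."""
    feats = []
    for detail in pywt.wavedec2(img.astype(float), "haar", level=levels)[1:]:
        for band in detail:                            # LH, HL, HH subbands
            for tile in np.array_split(band, grid, axis=0):
                for cell in np.array_split(tile, grid, axis=1):
                    h, _ = np.histogram(cell, bins=bins, range=(-512, 512))
                    feats.append(h / max(cell.size, 1))
    return np.concatenate(feats)

def similarity(sig_a, sig_b):
    return np.minimum(sig_a, sig_b).sum()              # histogram intersection
```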

  10. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method

    Directory of Open Access Journals (Sweden)

    A Bakhshipour Ziaratgahi

    2017-05-01

    Introduction: Sugar beet (Beta vulgaris L.), the world's second most important sugar source after sugarcane, is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in crop yield, so it is very important to eliminate weeds efficiently at early growing stages. The first step of precision weed control is accurate detection of weed locations in the field, an operation that can be performed by machine vision techniques. The Hough transform is a shape feature extraction method for object tracking in image processing, basically used to identify lines or other geometrical shapes in an image. The generalized Hough transform (GHT) is a modified version of the Hough transform used not only for geometrical forms but for detecting any arbitrary shape. It is based on a pattern matching principle that uses a set of vectors from feature points (usually object edge points) to a reference point to construct a pattern; by comparing this pattern with a stored pattern, the desired shape is detected. The aim of this study was to distinguish the sugar beet plant from some common weeds in a field using the GHT. Materials and Methods: Images required for this study were taken at the four-leaf stage of sugar beet, the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and prevent the leaves from shadowing each other. The obtained images were then imported into the Image Processing Toolbox of MATLAB for further processing. Green and red color components were extracted from the primary RGB images. In the first step, binary images were obtained by applying an optimal threshold to the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature was observed in all sugar beet plants at the four...
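
    The R-table mechanics of the GHT referred to above can be sketched compactly: each template edge point stores its offset to a reference point, indexed by gradient orientation, and each image edge point then votes for the reference positions its orientation permits. The leaf-template extraction and G-R thresholding described in the abstract are assumed to have been done already.

```python
import numpy as np
from collections import defaultdict

def build_rtable(template_edges, template_angles, ref, n_bins=36):
    """R-table: gradient-orientation bin -> offsets to the reference point."""
    rtable = defaultdict(list)
    for (x, y), ang in zip(template_edges, template_angles):
        b = int(ang / (2 * np.pi) * n_bins) % n_bins
        rtable[b].append((ref[0] - x, ref[1] - y))
    return rtable

def ght_vote(image_edges, image_angles, rtable, shape, n_bins=36):
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), ang in zip(image_edges, image_angles):
        b = int(ang / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in rtable[b]:
            rx, ry = x + dx, y + dy
            if 0 <= rx < shape[0] and 0 <= ry < shape[1]:
                acc[rx, ry] += 1
    return acc   # peaks mark likely reference-point (plant) locations
```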

  11. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    Directory of Open Access Journals (Sweden)

    Nam Ling

    2013-07-01

    The Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resources, speed, and robustness.

  12. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    Science.gov (United States)

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    The Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resources, speed, and robustness.

  13. Circular defects detection in welded joints using circular hough transform

    International Nuclear Information System (INIS)

    Hafizal Yazid; Mohd Harun; Shukri Mohd; Abdul Aziz Mohamed; Shaharudin Sayuti; Muhamad Daud

    2007-01-01

    Conventional radiography is a common non-destructive testing method which relies on manual image interpretation. The interpretation is very subjective and depends heavily on the inspector's experience and working conditions. It is therefore useful to have a pattern recognition system to assist the human interpreter in evaluating the quality of the radiographed sample, especially radiographic images of welded joints. This paper describes a system to detect circular discontinuities present in the joints. The system combines two different algorithms: a separability filter to identify the best object candidates, and the Circular Hough Transform to detect the presence of circular shapes. The experimental results show promising performance in the recognition of circular discontinuities in radiographic images, with successful circle detection on 81.82-100% of the radiographic films using a template movement of 10 pixels. (author)

  14. Searching for continuous gravitational wave signals. The hierarchical Hough transform algorithm

    International Nuclear Information System (INIS)

    Papa, M.; Schutz, B.F.; Sintes, A.M.

    2001-01-01

    It is well known that matched filtering techniques cannot be applied to searching extensive parameter-space volumes for continuous gravitational wave signals. This is the reason why alternative strategies are being pursued. Hierarchical strategies are best at investigating a large parameter space when there are computational power constraints. Algorithms of this kind are being implemented by all the groups that are developing software for analyzing the data of the gravitational wave detectors that will come online in the coming years. In this talk I report on the hierarchical Hough transform method that the GEO 600 data analysis team at the Albert Einstein Institute is developing. The three-step hierarchical algorithm has been described elsewhere [8]. In this talk I focus on some of the implementation aspects we are currently concerned with. (author)

  15. Implementation of an automated assessment system for the Winston-Lutz test based on the generalized Hough transform; Implementacion de un sistema de evaluacion automatizada del test de Winston-Lutz basado en la transformada generalizada de Hough

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Viera Cueto, J. A.; Moreno Saiz, C.; Benitez Villegas, E. M.; Fernandez Canadillas, M. J.; Caballero Lucena, E.; Cantero Carrillo, M.

    2013-07-01

    A software tool based on the generalized Hough transform has been implemented to automate the evaluation of the Winston-Lutz test. The method provides a quantitative evaluation of the test and eliminates the subjectivity of the evaluator, which introduces an uncertainty of 0.3 mm. (Author)

  16. Implementation of the Hough transform for 3D recognition of the straight tracks in drift chambers

    International Nuclear Information System (INIS)

    Bel'kov, A.A.

    2001-01-01

    This work is devoted to the development of a method for 3D reconstruction of charged-particle straight tracks in tracking systems consisting of drift-chamber stereo layers. The method is based on a modified Hough transform which takes into account the drift-distance measurements. The proposed program implementation of the method optimizes the event-processing time and provides stable performance of the algorithm and high track-recognition efficiency under high track occupancy of the detector as well as high levels of noisy and dead channels.

  17. Implementation of the Hough Transform for 3D Recognition of the Straight Tracks in Drift Chambers

    CERN Document Server

    Belkov, A A

    2001-01-01

    This work is devoted to the development of a method for 3D reconstruction of charged-particle straight tracks in tracking systems consisting of drift-chamber stereo layers. The method is based on a modified Hough transform which takes into account the drift-distance measurements. The proposed program implementation of the method optimizes the event-processing time and provides stable performance of the algorithm and high track-recognition efficiency under high track occupancy of the detector as well as high levels of noisy and dead channels.

  18. Determination of mango fruit from binary image using randomized Hough transform

    Science.gov (United States)

    Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba

    2015-12-01

    A method of detecting mango fruit from an RGB input image is proposed in this research. The input image is processed to obtain a binary image using texture analysis and morphological operations (dilation and erosion). The Randomized Hough Transform (RHT) method is then used to find the best ellipse fit for each binary region. Using the texture analysis, the system can detect mango fruit that partially overlap each other and mango fruit that are partially occluded by leaves; the combination of texture analysis and morphological operators can isolate such fruit. The parameters derived from the RHT method are used to calculate the center of the ellipse, which serves as the gripping point for the fruit-picking robot. The detection rate was up to 95% for fruit that were partially overlapped or partially covered by leaves.
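
    A sketch of the randomized ellipse-detection step, assuming binary-region edge points are already available: fit an ellipse to five randomly sampled points (the minimum cv2.fitEllipse accepts) and keep the candidate best supported by the remaining edge pixels; its center is the gripping point. The support test is a simple heuristic, not the paper's scoring.

```python
import cv2
import numpy as np

def rht_ellipse(edge_points, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(edge_points, dtype=np.float32)
    best, best_support = None, -1
    for _ in range(n_trials):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        try:
            (cx, cy), (w, h), ang = cv2.fitEllipse(sample)
        except cv2.error:                      # degenerate five-point sample
            continue
        if not np.isfinite([cx, cy, w, h]).all() or min(w, h) < 1:
            continue
        # Support: count edge points lying near the candidate ellipse.
        ct, st = np.cos(np.radians(ang)), np.sin(np.radians(ang))
        dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
        u, v = dx * ct + dy * st, -dx * st + dy * ct
        r = (u / (w / 2)) ** 2 + (v / (h / 2)) ** 2
        support = int(np.sum(np.abs(r - 1.0) < 0.1))   # heuristic band
        if support > best_support:
            best, best_support = ((cx, cy), (w, h), ang), support
    return best   # ((center), (axes), angle); center = gripping point
```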

  19. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail, with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization of and new clarity on multiresolution analysis, rendering accessible obscure techniques and merging, unifying, or completing...

  20. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device

    Science.gov (United States)

    Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande

    2018-04-01

    Partial fingerprint identification technology, mainly used in devices with small sensor areas such as cellphones, USB drives, and computers, has attracted increasing attention in recent years owing to its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this setting. We propose a new fingerprint matching technique which uses ridges as features to deal with partial fingerprint images, combining a modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.

  1. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates...

  2. Tracking with the Hough transformation for the central drift chamber of the GSI 4π experiment

    International Nuclear Information System (INIS)

    Best, D.

    1993-02-01

    The adaptive Hough Transformation (AHT) treated in this thesis is a method for localizing the peaks in the Hough field without calculating the background in detail, applying an intelligent histogramming and search strategy. It uses a small accumulator and decomposes the parameter region currently of interest into a few intervals, into which the HT maps the hits. The information in the accumulator is then used to redefine the parameter region, so that interesting regions can be studied with higher resolution. The iteration continues until the parameters are determined with the desired resolution. On average, 4-7 iterations are necessary to obtain the center coordinates of a circular track to within 1 mm. It was shown that the AHT extends the tracking possibilities to very high track densities. The time consumption for 100 tracks, including track and vertex fitting, lies in the range of 4-5 seconds. In this region of track multiplicities, the method proves superior to local procedures because it is not confronted with combinatorial difficulties. The track and point removal efficiency remains above 95%, and the double-track resolution at 1%. The dominant majority of the particle tracks is almost completely reconstructed. (orig.)

  3. Multi-Resolution Wavelet-Transformed Image Analysis of Histological Sections of Breast Carcinomas

    Directory of Open Access Journals (Sweden)

    Hae-Gil Hwang

    2005-01-01

    Multi-resolution images of histological sections of breast cancer tissue were analyzed using texture features of Haar and Daubechies wavelet transforms. The tissue samples analyzed were from ductal regions of the breast and included benign ductal hyperplasia, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (CA). To assess the correlation between computerized image analysis and visual analysis by a pathologist, we created a two-step classification system based on feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we applied two types of classifiers to the extracted features, namely a statistics-based multivariate discriminant analysis and a neural network. Using features from second-level Haar transform wavelet images in combination with discriminant analysis, we obtained classification accuracies of 96.67% and 87.78% for the training and testing sets (90 images each), respectively. We conclude that the best classifiers of carcinomas in histological sections of breast tissue are the texture features from second-level Haar transform wavelet images used in a discriminant function.
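
    A sketch of the two-step scheme with standard libraries: texture features from second-level Haar subbands (via PyWavelets), classified with scikit-learn's linear discriminant. The per-subband features chosen here (mean absolute value and entropy) are illustrative stand-ins for the paper's exact texture and co-occurrence set.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def haar_features(img, level=2):
    coeffs = pywt.wavedec2(img.astype(float), "haar", level=level)
    feats = []
    for band in coeffs[1]:                      # second-level detail subbands
        a = np.abs(band)
        p = a / (a.sum() + 1e-12)
        feats += [a.mean(), -(p * np.log(p + 1e-12)).sum()]  # energy, entropy
    return np.array(feats)

def train(images, labels):
    """images: list of grayscale tissue images; labels: e.g. DCIS / CA."""
    X = np.stack([haar_features(im) for im in images])
    return LinearDiscriminantAnalysis().fit(X, labels)
```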

  4. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer

    2016-01-01

    An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem: while measuring high-order wavefront distortion, the spots may leave their subapertures, which are used to restrict the displacement of the spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the boundaries of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network and requires no confined subapertures, increasing the dynamic range of the SHWS. The algorithm is explored in comprehensive simulations and the results are compared with those of the existing algorithm.

  5. ATMS software: Fuzzy Hough Transform in a hybrid algorithm for counting the overlapped etched tracks and orientation recognition

    International Nuclear Information System (INIS)

    Khayat, O.; Ghergherehchi, M.; Afarideh, H.; Durrani, S.A.; Pouyan, Ali A.; Kim, Y.S.

    2013-01-01

    A computer program named ATMS, written in MATLAB and running with a friendly interface, has been developed for the recognition and parametric measurement of etched tracks in images captured from the surface of Solid State Nuclear Track Detectors. Using image analysis tools, the program counts the number of etched tracks and, depending on the current working mode, classifies them according to their radii (small object removal) or their axes (non-perpendicular or non-circular etched tracks), their mean intensity value, and their orientation through the minor and major axes. Images of the detector surfaces are input to the code, which generates text and figure files as output, including the number of counted etched tracks with the associated track parameters, histograms, and a figure showing the edge and center of the detected etched tracks. The ATMS code runs hierarchically in calibration, testing, and measurement modes to demonstrate reliability, repeatability, and adaptability. The Fuzzy Hough Transform is used to estimate the number of etched tracks and their parameters, providing results even in cases where overlapping and orientation occur. The ATMS code is finally converted to a standalone file, which allows it to run outside the MATLAB environment. Highlights: presents a novel code named ATMS for nuclear track measurements; executes in three modes for generality, adaptability, and reliability; uses the Fuzzy Hough Transform for overlap detection and orientation recognition; uses the DFT as a filter for noise removal in track images; demonstrates processing of noisy track images.

  6. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    Science.gov (United States)

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502
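
    The detect-then-track pattern described above can be sketched with stock OpenCV calls: a circle detector seeds the spot position and Lucas-Kanade optical flow follows it between frames. The paper's modifications to the circular Hough transform and its motion-pattern filtering are not reproduced; all parameters are illustrative.

```python
import cv2
import numpy as np

def detect_spot(gray):
    """Seed the tracker with the strongest small-circle detection."""
    c = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                         param1=120, param2=15, minRadius=2, maxRadius=10)
    if c is None:
        return None
    return c[0, 0, :2].reshape(1, 1, 2).astype(np.float32)

def track(prev_gray, gray, pt):
    """Follow the seeded point with pyramidal Lucas-Kanade flow."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pt, None,
                                              winSize=(21, 21), maxLevel=3)
    return nxt if status[0, 0] == 1 else None   # on failure, re-detect
```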

  7. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    Science.gov (United States)

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.

  8. Automated Spatiotemporal Analysis of Fibrils and Coronal Rain Using the Rolling Hough Transform

    Science.gov (United States)

    Schad, Thomas

    2017-09-01

    A technique is presented that automates the direction characterization of curvilinear features in multidimensional solar imaging datasets. It is an extension of the Rolling Hough Transform (RHT) technique presented by Clark, Peek, and Putman ( Astrophys. J. 789, 82, 2014), and it excels at rapid quantification of spatial and spatiotemporal feature orientation even for applications with a low signal-to-noise ratio. It operates on a pixel-by-pixel basis within a dataset and reliably quantifies orientation even for locations not centered on a feature ridge, which is used here to derive a quasi-continuous map of the chromospheric fine-structure projection angle. For time-series analysis, a procedure is developed that uses a hierarchical application of the RHT to automatically derive the apparent motion of coronal rain observed off-limb. Essential to the success of this technique is the formulation presented in this article for the RHT error analysis as it provides a means to properly filter results.
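
    The core of the RHT is easy to state in code. The following condensed sketch, with assumed window size, smoothing radius and linearity threshold, measures what fraction of a diameter-long chord through a pixel is "on" at each angle after an unsharp mask:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def preprocess(image, smooth=15):
            """Unsharp mask then binarize, as in the RHT's first stage."""
            return (image - uniform_filter(image.astype(float), smooth)) > 0

        def rht_pixel(bitmap, y, x, diameter=25, ntheta=36, frac=0.7):
            """Return R(theta): excess fraction of lit pixels along each angle."""
            r = diameter // 2
            t = np.arange(-r, r + 1)
            thetas = np.linspace(0.0, np.pi, ntheta, endpoint=False)
            out = np.zeros(ntheta)
            for i, th in enumerate(thetas):
                yy = np.clip(np.round(y + t * np.sin(th)).astype(int), 0, bitmap.shape[0] - 1)
                xx = np.clip(np.round(x + t * np.cos(th)).astype(int), 0, bitmap.shape[1] - 1)
                out[i] = bitmap[yy, xx].mean()
            return np.maximum(out - frac, 0.0)   # keep only strongly linear angles

    The per-pixel orientation estimate is then taken from the distribution of R(theta), and its spread is the kind of quantity the error analysis mentioned above operates on.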

  9. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qiu Wu [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Yuchi Ming; Ding Mingyue [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Tessier, David; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8 (Canada)

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions
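
    As a rough illustration of the 3D Hough voting step (not the authors' optimized coarse-fine implementation; the angular grids and bin size here are arbitrary), each candidate direction projects the bright voxels onto its normal plane, where a straight needle collapses into one hot bin:

        import numpy as np

        def hough_3d_line(points, n_ang=18, bin_mm=1.0):
            """points: (N, 3) voxel coordinates. Returns (direction, score)."""
            best = (None, -1)
            for theta in np.linspace(0.0, np.pi / 2, n_ang):       # polar angle
                for phi in np.linspace(0.0, np.pi, n_ang, endpoint=False):
                    d = np.array([np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)])
                    u = np.cross(d, [1.0, 0.0, 0.0])               # plane basis
                    if np.linalg.norm(u) < 1e-6:
                        u = np.cross(d, [0.0, 1.0, 0.0])
                    u /= np.linalg.norm(u)
                    v = np.cross(d, u)
                    proj = np.stack([points @ u, points @ v], axis=1)
                    ij = np.floor(proj / bin_mm).astype(int)
                    ij -= ij.min(axis=0)
                    acc = np.zeros(ij.max(axis=0) + 1, dtype=int)
                    np.add.at(acc, (ij[:, 0], ij[:, 1]), 1)        # vote
                    if acc.max() > best[1]:
                        best = (d, int(acc.max()))
            return best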

  10. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    International Nuclear Information System (INIS)

    Qiu Wu; Yuchi Ming; Ding Mingyue; Tessier, David; Fenster, Aaron

    2013-01-01

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation

  11. Probabilistic active recognition of multiple objects using Hough-based geometric matching features

    CSIR Research Space (South Africa)

    Govender, N

    2015-01-01

    Full Text Available be recognized simultaneously, and occlusion and clutter (through distracter objects) are common. We propose a representation for object viewpoints using Hough transform based geometric matching features, which are robust in such circumstances. We show how...

  12. A multiresolution model of rhythmic expectancy

    NARCIS (Netherlands)

    Smith, L.M.; Honing, H.; Miyazaki, K.; Hiraga, Y.; Adachi, M.; Nakajima, Y.; Tsuzaki, M.

    2008-01-01

    We describe a computational model of rhythmic cognition that predicts expected onset times. A dynamic representation of musical rhythm, the multiresolution analysis using the continuous wavelet transform is used. This representation decomposes the temporal structure of a musical rhythm into time

  13. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N

    1992-01-01

    This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course and researchers needing to consult an authoritative source. The first book to give a unified and coherent exposition of multiresolutional signal decompos

  14. Magnetically aligned H I fibers and the Rolling Hough Transform

    Energy Technology Data Exchange (ETDEWEB)

    Clark, S. E.; Putman, M. E.; Peek, J. E. G. [Department of Astronomy, Columbia University, New York, NY (United States)

    2014-07-01

    We present observations of a new group of structures in the diffuse Galactic interstellar medium (ISM): slender, linear H I features we dub 'fibers' that extend for many degrees at high Galactic latitude. To characterize and measure the extent and strength of these fibers, we present the Rolling Hough Transform (RHT), a new machine vision method for parameterizing the coherent linearity of structures in the image plane. With this powerful new tool we show that the fibers are oriented along the interstellar magnetic field as probed by starlight polarization. We find that these low column density (N(H I) ≃ 5 × 10^18 cm^-2) fiber features are most likely a component of the local cavity wall, about 100 pc away. The H I data we use to demonstrate this alignment at high latitude are from the Galactic Arecibo L-Band Feed Array H I (GALFA-H I) Survey and the Parkes Galactic All Sky Survey. We find better alignment in the higher resolution GALFA-H I data, where the fibers are more visually evident. This trend continues in our investigation of magnetically aligned linear features in the Riegel-Crutcher H I cold cloud, detected in the Southern Galactic Plane Survey. We propose an application of the RHT for estimating the field strength in such a cloud, based on the Chandrasekhar-Fermi method. We conclude that data-driven, quantitative studies of ISM morphology can be very powerful predictors of underlying physical quantities.

  15. Robust Detection of Moving Human Target in Foliage-Penetration Environment Based on Hough Transform

    Directory of Open Access Journals (Sweden)

    P. Lei

    2014-04-01

    Full Text Available Attention has been focused on the robust detection of moving human targets in foliage-penetration environments, which presents a formidable task for a radar system because foliage is a rich scattering environment with complex multipath propagation and time-varying clutter. Generally, multiple-bounce returns and clutter are superposed on the direct-scatter echoes. They obscure the true target echo and lead to a time-range image of poor visual quality, making target detection particularly difficult. Consequently, an innovative approach is proposed to suppress clutter and mitigate multipath effects. In particular, a clutter suppression technique based on range alignment is first applied to suppress the time-varying clutter and the unstable antenna coupling. Then an entropy weighted coherent integration (EWCI) algorithm is adopted to mitigate the multipath effects. In consequence, the proposed method reduces the clutter and ghosting artifacts considerably. Based on the resulting high visual quality image, the target trajectory is detected robustly and the radial velocity is estimated accurately with the Hough transform (HT). Experimental results on real data are provided to verify the proposed method.
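
    A hedged sketch of the final Hough step: once clutter is suppressed, a moving target traces a near-straight line in the binary time-range image, and the radial velocity follows from the slope of the dominant line. The axis scalings dt (seconds per row) and dr (metres per column) are assumed inputs, not values from the paper:

        import cv2
        import numpy as np

        def radial_velocity(binary_img, dt, dr):
            """binary_img: 8-bit time-range image (rows = time, cols = range)."""
            lines = cv2.HoughLines(binary_img, rho=1, theta=np.pi / 360, threshold=50)
            if lines is None:
                return None
            rho, theta = lines[0, 0]          # strongest line: x cos + y sin = rho
            if abs(np.cos(theta)) < 1e-9:     # line of constant time: undefined
                return None
            # x = (rho - y sin) / cos  =>  dx/dy = -tan(theta), in pixels/pixel;
            # convert to metres per second with the axis scalings.
            return -np.tan(theta) * dr / dt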

  16. First all-sky upper limits from LIGO on the strength of periodic gravitational waves using the Hough transform

    International Nuclear Information System (INIS)

    Abbott, B.; Adhikari, R.; Agresti, J.; Anderson, S.B.; Araya, M.; Armandula, H.; Asiri, F.; Barish, B.C.; Barnes, M.; Barton, M.A.; Bhawal, B.; Billingsley, G.; Black, E.; Blackburn, K.; Bork, R.; Brown, D.A.; Busby, D.; Cardenas, L.; Chandler, A.; Chapsky, J.

    2005-01-01

    We perform a wide parameter-space search for continuous gravitational waves over the whole sky and over a large range of values of the frequency and the first spin-down parameter. Our search method is based on the Hough transform, which is a semicoherent, computationally efficient, and robust pattern recognition technique. We apply this technique to data from the second science run of the LIGO detectors and our final results are all-sky upper limits on the strength of gravitational waves emitted by unknown isolated spinning neutron stars on a set of narrow frequency bands in the range 200-400 Hz. The best upper limit on the gravitational-wave strain amplitude that we obtain in this frequency range is 4.43 × 10^-23

  17. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications such as automatic video surveillance, content based video-search and human robot interaction. In order to cope with the challenges that this large variety of applications present, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated from associating pose descriptors with their position in time relative to the end of an action sequence. Training data consists of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
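
    The one-dimensional voting scheme can be summarized in a few lines. In this sketch the codebook and the per-prototype offset lists are assumed to be given from training, and the votes are unweighted, which a real system would refine:

        import numpy as np

        def vote_actions(descriptors, codebook, offsets, n_classes):
            """
            descriptors: (T, D) pose descriptors, one per frame.
            codebook:    (K, D) prototypical 3D-pose descriptors.
            offsets:     dict prototype_id -> [(class_id, frames_to_end), ...].
            Returns an (n_classes, T) voting space; peaks mark action ends.
            """
            T = len(descriptors)
            votes = np.zeros((n_classes, T))
            for t, desc in enumerate(descriptors):
                # Match the frame to its nearest prototypical pose ...
                k = int(np.argmin(np.linalg.norm(codebook - desc, axis=1)))
                # ... and cast its learned votes forward in time.
                for c, dt in offsets.get(k, []):
                    if 0 <= t + dt < T:
                        votes[c, t + dt] += 1.0
            return votes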

  18. Development of a Hough transformation track finder for time projection chambers

    International Nuclear Information System (INIS)

    Heinze, Isa

    2013-12-01

    The International Linear Collider (ILC) is a planned particle physics experiment. One of the two detector concepts is the International Large Detector (ILD) concept, for which a time projection chamber is foreseen as the main tracking device. In the ILD the particle flow concept is followed, which leads to special requirements for the detector. Especially for the tracking system a very good momentum resolution is required. Several prototypes were built to prove that it is possible to build a TPC which fulfills the requirements for a TPC in the ILD. One is the Large Prototype, with which different readout technologies currently under development are tested. In parallel, reconstruction software is developed for the reconstruction of Large Prototype data. In this thesis the development of a track finding algorithm based on the Hough transformation is described. It can find curved tracks (with magnetic field) as well as straight tracks (without magnetic field). This package was mainly developed for Large Prototype testbeam data but was also tested on Monte Carlo simulations of tracks in the ILD TPC. Furthermore, the analysis of testbeam data regarding the single point resolution is presented. The data were taken with the Large Prototype and a readout module with GEM (gas electron multiplier) amplification. For the reconstruction of these data the software package mentioned above was used. The single point resolution is directly related to the momentum resolution of the detector, thus a good single point resolution is needed to achieve a good momentum resolution.

  19. Development of a Hough transformation track finder for time projection chambers

    Energy Technology Data Exchange (ETDEWEB)

    Heinze, Isa

    2013-12-15

    The International Linear Collider (ILC) is a planned particle physics experiment. One of the two detector concepts is the International Large Detector (ILD) concept, for which a time projection chamber is foreseen as the main tracking device. In the ILD the particle flow concept is followed, which leads to special requirements for the detector. Especially for the tracking system a very good momentum resolution is required. Several prototypes were built to prove that it is possible to build a TPC which fulfills the requirements for a TPC in the ILD. One is the Large Prototype, with which different readout technologies currently under development are tested. In parallel, reconstruction software is developed for the reconstruction of Large Prototype data. In this thesis the development of a track finding algorithm based on the Hough transformation is described. It can find curved tracks (with magnetic field) as well as straight tracks (without magnetic field). This package was mainly developed for Large Prototype testbeam data but was also tested on Monte Carlo simulations of tracks in the ILD TPC. Furthermore, the analysis of testbeam data regarding the single point resolution is presented. The data were taken with the Large Prototype and a readout module with GEM (gas electron multiplier) amplification. For the reconstruction of these data the software package mentioned above was used. The single point resolution is directly related to the momentum resolution of the detector, thus a good single point resolution is needed to achieve a good momentum resolution.

  20. Automatic detection of karstic sinkholes in seismic 3D images using circular Hough transform

    International Nuclear Information System (INIS)

    Parchkoohi, Mostafa Heydari; Farajkhah, Nasser Keshavarz; Delshad, Meysam Salimi

    2015-01-01

    More than 30% of hydrocarbon reservoirs are reported in carbonates, which mostly include evidence of fractures and karstification. Generally, the detection of karstic sinkholes prognosticates good-quality hydrocarbon reservoirs, where looser sediments fill the holes penetrating the hard limestone and the overburden pressure on the infill sediments is mostly borne by the sturdier surrounding structure. Sinkholes are also useful for the detection of erosional surfaces in seismic stratigraphic studies and imply a possible relative sea-level fall at the time of their formation. Karstic sinkholes are identified straightforwardly by using seismic geometric attributes (e.g. coherency, curvature), in which lateral variations are much more emphasized with respect to the original 3D seismic image. Then, seismic interpreters rely on their visual skills and experience in detecting roughly round objects in seismic attribute maps. In this paper, we introduce an image processing workflow to enhance selective edges in seismic attribute volumes stemming from karstic sinkholes and finally locate them in a high quality 3D seismic image by using the circular Hough transform. Afterwards, we present a case study from an onshore oilfield in southwest Iran, in which the proposed algorithm is applied and karstic sinkholes are traced. (paper)

  1. Hough transform for clustered microcalcifications detection in full-field digital mammograms

    Science.gov (United States)

    Fanizzi, A.; Basile, T. M. A.; Losurdo, L.; Amoroso, N.; Bellotti, R.; Bottigli, U.; Dentamaro, R.; Didonna, V.; Fausto, A.; Massafra, R.; Moschetta, M.; Tamborra, P.; Tangaro, S.; La Forgia, D.

    2017-09-01

    Many screening programs use mammography as the principal diagnostic tool for detecting breast cancer at a very early stage. Despite the efficacy of mammograms in highlighting breast diseases, the detection of some lesions remains uncertain for radiologists. In particular, the extremely minute and elongated salt-like particles of microcalcifications are sometimes no larger than 0.1 mm, yet they account for approximately half of all cancers detected by means of mammograms. Hence the need for automatic tools able to support radiologists in their work. Here, we propose a computer-assisted diagnostic tool to support radiologists in identifying microcalcifications in full (native) digital mammographic images. The proposed CAD system consists of a pre-processing step that improves contrast and reduces noise by applying a Sobel edge detection algorithm and a Gaussian filter, followed by a microcalcification detection step that exploits the circular Hough transform. The performance of the procedure was tested on 200 images from the Breast Cancer Digital Repository (BCDR), a publicly available database. The automatically detected clusters of microcalcifications were evaluated by skilled radiologists, who assessed the validity of the correctly identified regions of interest as well as the system error in cases of missed microcalcification clusters. The system performance was evaluated in terms of sensitivity and false positives per image (FPi) rate, and was comparable to state-of-the-art approaches: the proposed model accurately predicted the microcalcification clusters (sensitivity = 91.78%, FPi rate = 3.99).
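
    A minimal sketch of the described pre-processing and detection chain, with illustrative kernel sizes, radii and thresholds rather than the paper's tuned values:

        import cv2
        import numpy as np

        def find_microcalcifications(mammo_gray):
            """mammo_gray: 8-bit mammogram. Returns (x, y, r) candidate rows."""
            smooth = cv2.GaussianBlur(mammo_gray, (5, 5), sigmaX=1.0)   # denoise
            gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0)                    # edge maps
            gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1)
            edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
            # Small-radius circular Hough search for the salt-like particles.
            circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=5, param1=100, param2=10,
                                       minRadius=1, maxRadius=6)
            return [] if circles is None else circles[0]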

  2. Multiresolution analysis of Bursa Malaysia KLCI time series

    Science.gov (United States)

    Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed

    2017-05-01

    In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using time- as well as frequency-domain analysis; prediction can then be executed for the desired system for in-sample forecasting. In this study, multiresolution analysis, with the aid of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT), is used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
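
    A short sketch of this kind of decomposition with PyWavelets; since PyWavelets has no MODWT, the shift-invariant stationary wavelet transform (SWT) stands in for it here, and the synthetic random-walk series is only a placeholder for the KLCI closes:

        import numpy as np
        import pywt

        prices = np.cumsum(np.random.randn(1024)) + 100.0      # placeholder series
        returns = np.diff(np.log(prices))

        coeffs = pywt.wavedec(returns, 'db4', level=4)         # [cA4, cD4, ..., cD1]
        swt_coeffs = pywt.swt(returns[:1020], 'db4', level=2)  # length must fit level

        for lvl, d in zip(range(4, 0, -1), coeffs[1:]):        # cD4 is the coarsest
            print(f"detail level {lvl}: variance = {d.var():.3e}")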

  3. A multiresolution remeshed Vortex-In-Cell algorithm using patches

    DEFF Research Database (Denmark)

    Rasmussen, Johannes Tophøj; Cottet, Georges-Henri; Walther, Jens Honore

    2011-01-01

    We present a novel multiresolution Vortex-In-Cell algorithm using patches of varying resolution. The Poisson equation relating the fluid vorticity and velocity is solved using Fast Fourier Transforms subject to free space boundary conditions. Solid boundaries are implemented using the semi...

  4. Layering extraction from subsurface radargrams over Greenland and the Martian NPLD by combining wavelet analysis with Hough transforms

    Science.gov (United States)

    Xiong, Si-Ting; Muller, Jan-Peter

    2017-04-01

    Extracting lines from imagery is a solved problem in the field of edge detection. Unlike images taken by a camera, radargrams are a set of radar echo profiles that record the wave energy reflected by subsurface reflectors at each location of the radar footprint along the satellite's ground track. The radargrams record where there is a dielectric contrast caused by different deposits, and other subsurface features, such as facies, and internal distributions like porosity and fluids. Among the subsurface features, layering is an important one, which reflects the sequence of seasonal or yearly deposits on the ground [1-2]. In the field of image processing, line detection methods, such as the Radon Transform or Hough Transform, are able to extract these subsurface layers from rasterised versions of the echograms. However, due to the attenuation of radar waves whilst propagating through geological media, radargrams sometimes suffer from intensity gradients and high background noise. These attributes of radargrams cause errors in detection when conventional line detection methods are directly applied. In this study, we have developed a continuous wavelet analysis technique to be applied directly to the radar echo profiles in a radargram in order to detect segmented lines; a conventional line detection method, such as the Hough transform, can then be applied to connect these segmented lines. This processing chain is tested on datasets from a radargram acquired by the Multi-channel Coherent Radar Depth Sounder (MCoRDS) on an airborne platform in Greenland and a radargram acquired by the SHAllow RADar (SHARAD) on board the Mars Reconnaissance Orbiter (MRO) [3] over the Martian North Polar Layered Deposits (NPLD). Keywords: Subsurface mapping, Radargram, SHARAD, Greenland, Martian NPLD, Subsurface layering, line detection References: [1] Phillips, R. J., et al. "Mars north polar deposits: Stratigraphy, age, and geodynamical response." Science 320.5880 (2008): 1182-1185. [2] Cutts

  5. On frame multiresolution analysis

    DEFF Research Database (Denmark)

    Christensen, Ole

    2003-01-01

    We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame we show how one can nevertheless...

  6. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    Science.gov (United States)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (the discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are enough to analyze and extract emotional content in audio signals, giving a high accuracy rate in the classification of emotional states without the need for other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans (boredom, disgust, happiness, anxiety, anger and sadness), plus neutrality, for a total of seven states to identify.

  7. An efficient multi-resolution GA approach to dental image alignment

    Science.gov (United States)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
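
    The fitness function at the heart of such an approach is easy to sketch. Below, a rotation-scale-translation subset of the paper's 6D affine space is scored by the symmetric Hausdorff distance between edge-point sets, and the GA itself is replaced by plain random search for brevity; ranges and counts are assumptions:

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def affine(points, a):
            """a = (tx, ty, rot, scale); the full method searches 6 parameters."""
            tx, ty, rot, s = a
            c, q = np.cos(rot), np.sin(rot)
            return s * (points @ np.array([[c, -q], [q, c]]).T) + np.array([tx, ty])

        def fitness(a, query_pts, ref_pts):
            """Symmetric Hausdorff distance after alignment (lower is better)."""
            moved = affine(query_pts, a)
            return max(directed_hausdorff(moved, ref_pts)[0],
                       directed_hausdorff(ref_pts, moved)[0])

        def coarse_search(query_pts, ref_pts, n=500, rng=np.random.default_rng(0)):
            """Random-search stand-in for the GA at the coarsest resolution."""
            cand = np.column_stack([rng.uniform(-20, 20, n), rng.uniform(-20, 20, n),
                                    rng.uniform(-0.3, 0.3, n), rng.uniform(0.8, 1.2, n)])
            return min(cand, key=lambda a: fitness(a, query_pts, ref_pts))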

  8. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    Science.gov (United States)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because the identification of islanding is easily confounded by grid disturbances, an island detection device may misjudge, with the consequence that a photovoltaic system is taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbances. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing method applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as output, from which we can extract the intrinsic features that differ between islanding and grid disturbance. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so mistaken withdrawal of the photovoltaic system from the power grid can be avoided.
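
    The feature itself can be sketched compactly: decompose the waveform with a wavelet transform, embed each band in a trajectory matrix, and take the Shannon entropy of its normalized singular values. The wavelet, decomposition level and embedding dimension below are assumptions, not the paper's settings:

        import numpy as np
        import pywt

        def singular_spectrum_entropy(x, m=10):
            """Shannon entropy of the normalized singular spectrum of band x."""
            n = len(x) - m + 1
            traj = np.stack([x[i:i + m] for i in range(n)])   # trajectory matrix
            s = np.linalg.svd(traj, compute_uv=False)
            p = s / s.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        def mrsse_features(signal, wavelet='db4', level=4):
            """One entropy value per wavelet band -> input vector for the classifier."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return [singular_spectrum_entropy(c) for c in coeffs]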

  9. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    Energy Technology Data Exchange (ETDEWEB)

    Chamana, Manohar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mather, Barry A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-19

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which is then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.

  10. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org

    2009-02-07

    Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error translates to 0.02 mm, which is much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.
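
    One common way to reach sub-pixel accuracy from a discrete Hough accumulator is an intensity-weighted centroid around the integer maximum, sketched below; this illustrates the general idea and is not the paper's exact estimator:

        import numpy as np

        def subpixel_peak(acc, win=2):
            """Refine the accumulator maximum with a local weighted centroid."""
            y0, x0 = np.unravel_index(np.argmax(acc), acc.shape)
            ys = slice(max(y0 - win, 0), y0 + win + 1)
            xs = slice(max(x0 - win, 0), x0 + win + 1)
            w = acc[ys, xs].astype(float)
            yy, xx = np.mgrid[ys, xs]
            return (yy * w).sum() / w.sum(), (xx * w).sum() / w.sum()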

  11. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
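
    The building block the algorithm hops with is a total least-squares line fit via SVD. The sketch below fits every hop-th window of scan points and keeps the line-like ones; the reverse-hop endpoint refinement is only summarized, and the hop size and tolerance are assumptions:

        import numpy as np

        def fit_line_svd(pts):
            """pts: (N, 2). Returns (point_on_line, unit_direction, rms_residual)."""
            centroid = pts.mean(axis=0)
            _u, s, vt = np.linalg.svd(pts - centroid, full_matrices=False)
            return centroid, vt[0], s[1] / np.sqrt(len(pts))

        def hop_segments(scan, hop=5, tol=0.02):
            """Fit intermittent windows; keep those with small normal spread."""
            segs = []
            for i in range(0, len(scan) - 2 * hop, hop):   # hop over points
                c, d, rms = fit_line_svd(scan[i:i + 2 * hop])
                if rms < tol:                              # collinear enough
                    segs.append((c, d))
            return segs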

  12. First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC

    CERN Document Server

    Halyo, V.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-07

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi...

  13. Invisible data matrix detection with smart phone using geometric correction and Hough transform

    Science.gov (United States)

    Sun, Halit; Uysalturk, Mahir C.; Karakaya, Mahmut

    2016-04-01

    Two-dimensional data matrices are used in many different areas to provide quick and automatic data entry to computer systems. Their most common usage is to automatically read and recognize labeled products (books, medicines, food, etc.). In Turkey, alcoholic beverages and tobacco products are labeled and tracked with invisible data matrices for public safety and tax purposes. In this application, since the data matrices are printed on special paper with a pigmented ink, they cannot be seen in daylight. When red LEDs are used for illumination and the reflected light is filtered, the invisible data matrices become visible and can be decoded by special barcode readers. Owing to the physical dimensions and price of such readers, and the special training required to use them, cheap, small-sized and easily carried mobile invisible data matrix reader systems are needed that can be delivered to every inspector in the law enforcement units. In this paper, we first developed an apparatus attached to a smartphone consisting of a red LED light and a high-pass filter. Then we developed an algorithm to process the images captured by the smartphone and decode all information stored in the invisible data matrix images. The proposed algorithm mainly involves four stages. In the first step, the data matrix code is processed by the Hough transform to find the "L"-shaped finder pattern. In the second step, the borders of the data matrix are found using convex hull and corner detection methods. Afterwards, the distortion of the invisible data matrix is corrected by a geometric correction technique and the size of every module is fixed in a rectangular shape. Finally, the invisible data matrix is scanned line by line along the horizontal axis to decode it. Based on results obtained from real test images of invisible data matrices captured with a smartphone, the proposed algorithm achieves high accuracy and a low error rate.

  14. A Quantitative Analysis of an EEG Epileptic Record Based on Multiresolution Wavelet Coefficients

    Directory of Open Access Journals (Sweden)

    Mariel Rosenblatt

    2014-11-01

    Full Text Available The characterization of the dynamics associated with electroencephalogram (EEG) signals, combining an orthogonal discrete wavelet transform analysis with quantifiers originating from information theory, is reviewed. In addition, an extension of this methodology based on multiresolution quantities, called wavelet leaders, is presented. In particular, the temporal evolution of Shannon entropy and the statistical complexity evaluated with different sets of multiresolution wavelet coefficients are considered. Both methodologies are applied to the quantitative EEG time series analysis of a tonic-clonic epileptic seizure, and comparative results are presented. In particular, even though both methods describe the dynamical changes of the EEG time series, the one based on wavelet leaders presents a better time resolution.

  15. Multiresolution Analysis Adapted to Irregularly Spaced Data

    Directory of Open Access Journals (Sweden)

    Anissa Mokraoui

    2009-01-01

    Full Text Available This paper investigates the mathematical background of multiresolution analysis in the specific context where the signal is represented by irregularly sampled data at known locations. The study is related to the construction of nested piecewise polynomial multiresolution spaces represented by their corresponding orthonormal bases. Using simple spline basis orthonormalization procedures involves the construction of a large family of orthonormal spline scaling bases defined on consecutive bounded intervals. However, if no conditions other than those coming from multiresolution are imposed on each bounded interval, the orthonormal basis is represented by a set of discontinuous scaling functions. The spline wavelet basis has the same problem. Moreover, the dimension of the corresponding wavelet basis increases with the spline degree. An appropriate orthonormalization procedure of the basic spline space basis, whatever the degree of the spline, allows us to (i) provide continuous scaling and wavelet functions, (ii) reduce the number of wavelets to only one, and (iii) reduce the complexity of the filter bank. Examples of the multiresolution implementations illustrate that the main important features of the traditional multiresolution are also satisfied.

  16. Multiresolution forecasting for futures trading using wavelet decompositions.

    Science.gov (United States)

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.

  17. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC

    International Nuclear Information System (INIS)

    V Halyo, V Halyo; LeGresley, P; Lujan, P; Karpusenko, V; Vladimirov, A

    2014-01-01

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented

  18. Compact optical processor for Hough and frequency domain features

    Science.gov (United States)

    Ott, Peter

    1996-11-01

    Shape recognition is necessary in a broad range of applications such as traffic sign or workpiece recognition. It requires not only neighborhood processing of the input image pixels but global interconnection of them. The Hough transform (HT) performs such a global operation and is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented in software a neural network shape recognition system which contains an HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing allows the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications we map the original design to a shorter lens with a smaller number of well-separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray tracing shows diffraction-limited performance. Image rotation can be done

  19. Controversial issues and text-internal correctives in Skilpoppe by Barrie Hough

    OpenAIRE

    M.J. Fritz; E.S. Van der Westhuizen

    2010-01-01

    Controversial issues and text-internal correctives in Skilpoppe by Barrie Hough This article focuses on the binary relations between controversial issues and text-internal correctives by making use of examples from "Skilpoppe" (Babushka dolls) (2002) by Barrie Hough. The article starts with a discussion of controversial issues, including the four main categories, identified as violence, sexuality, politics and religion, and continues briefly to the censorship as enacted before the Films an...

  20. Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform

    Science.gov (United States)

    Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike

    2016-03-01

    Computed tomography is the modality of choice for poly-trauma patients to assess rapidly skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium or with different spatial resolution are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were automatically created based on a set of about 20 training images with ground truth landmark positions. A hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set, consisting of 50 to 128 images (depending on the landmark) with available ground truth landmark locations. The image data covers a large amount of variability caused by differences of field-of-view, resolution, contrast agent, patient gender and pathologies. The median localization error for the set of aortic landmarks was 14.4 mm and for the set of skeleton landmarks 5.5 mm. Median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime performance for the whole landmark set is about 5s on a typical PC.
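
    The DGHT builds on the classical Generalized Hough Transform, sketched below with a plain (unweighted) point model; the discriminative training that weights the model points is omitted, and the R-table layout is an assumption for illustration:

        import numpy as np

        def ght_localize(edge_pts, edge_dirs, r_table, shape, n_bins=36):
            """r_table: dict gradient-direction bin -> [(dy, dx) model offsets]."""
            acc = np.zeros(shape)
            bins = np.floor(edge_dirs / (2 * np.pi) * n_bins).astype(int) % n_bins
            for (y, x), b in zip(edge_pts, bins):
                for dy, dx in r_table.get(int(b), []):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
                        acc[yy, xx] += 1.0                 # vote for the landmark
            return np.unravel_index(np.argmax(acc), acc.shape), acc

    Running this on each level of an image pyramid, from coarse to fine, mirrors the hierarchical 4-level setup described above.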

  1. Circular Hough transform diffraction analysis: A software tool for automated measurement of selected area electron diffraction patterns within Digital MicrographTM

    International Nuclear Information System (INIS)

    Mitchell, D.R.G.

    2008-01-01

    A software tool (script and plugin) for computing circular Hough transforms (CHT) in Digital Micrograph™ has been developed, for the purpose of automated analysis of selected area electron diffraction patterns (SADPs) of polycrystalline materials. The CHT enables the diffraction pattern centre to be determined with sub-pixel accuracy, regardless of the exposure condition of the transmitted beam or whether a beam stop is present. Radii of the diffraction rings can also be accurately measured with sub-pixel precision. If the pattern is calibrated against a known camera length, then d-spacings with an accuracy of better than 1% can be obtained. These measurements require no a priori knowledge of the pattern and very limited user interaction. The accuracy of the CHT is degraded by distortion introduced by the projector lens, and this should be minimised prior to pattern acquisition. A number of optimisations in the CHT software enable rapid processing of patterns; a typical analysis of a 1k × 1k image takes just a few minutes. The CHT tool appears robust and is even able to accurately measure SADPs with very incomplete diffraction rings due to texture effects. This software tool is freely downloadable via the Internet.

  2. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with increasing video resolution, which challenges the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the region of interest (ROI) under limited bandwidth. This is realized primarily through ROI extraction and through compressing multi-resolution preprocessed video as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined from the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions, for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while reducing the bit rate.

  3. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus

    2014-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined

  4. Target recognition by wavelet transform

    International Nuclear Information System (INIS)

    Li Zhengdong; He Wuliang; Zheng Xiaodong; Cheng Jiayuan; Peng Wen; Pei Chunlan; Song Chen

    2002-01-01

    The wavelet transform has the important property of multiresolution, which yields a pyramid structure, and this property matches the way people recognize objects: from coarse to fine and from large to small. In addition, the wavelet transform helps to reduce image noise, simplify calculation, and capture the characteristic points of a target image. A method of target recognition by wavelet transform is provided.

  5. Multiresolution analysis applied to text-independent phone segmentation

    International Nuclear Information System (INIS)

    Cherniz, AnalIa S; Torres, MarIa E; Rufiner, Hugo L; Esposito, Anna

    2007-01-01

    Automatic speech segmentation is of fundamental importance in different speech applications. The most common implementations are based on hidden Markov models. They use a statistical modelling of the phonetic units to align the data along a known transcription. This is an expensive and time-consuming process, because of the huge amount of data needed to train the system. Text-independent speech segmentation procedures have been developed to overcome some of these problems. These methods detect transitions in the evolution of the time-varying features that represent the speech signal. Speech representation plays a central role in the segmentation task. In this work, two new speech parameterizations based on the continuous multiresolution entropy, using Shannon entropy, and the continuous multiresolution divergence, using the Kullback-Leibler distance, are proposed. These approaches have been compared with the classical Melbank parameterization. The proposed encodings significantly increase the segmentation performance. The parameterization based on the continuous multiresolution divergence shows the best results, increasing the number of correctly detected boundaries and decreasing the amount of erroneously inserted points. This suggests that parameterization based on multiresolution information measures provides information related to acoustic features that take into account phonemic transitions

  6. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  7. The radon transform. Theory and implementation

    International Nuclear Information System (INIS)

    Toft, P.

    1996-01-01

    The subject of this Ph.D. thesis is the mathematical Radon transform, which is well suited for curve detection in digital images and for reconstruction of tomography images. The thesis is divided into two main parts. Part I describes the Radon and the Hough transform and especially their discrete approximations with respect to curve parameter detection in digital images. The sampling relationships of the Radon transform are reviewed from a digital signal processing point of view. The discrete Radon transform is investigated for detection of curves, and aspects regarding the performance of the Radon transform under various types of noise are covered. Furthermore, a new fast scheme for estimating curve parameters is presented. Part II of the thesis describes the inverse Radon transform in 2D and 3D with focus on reconstruction of tomography images. Some of the direct reconstruction schemes are analyzed, including their discrete implementation. Furthermore, several iterative reconstruction schemes based on linear algebra are reviewed and applied for reconstruction of Positron Emission Tomography (PET) images. A new and very fast implementation of 2D iterative reconstruction methods is devised. In a more practically oriented chapter, the noise in PET images is modelled from a very large number of measurements. Several packages for Radon- and Hough-transform based curve detection and direct/iterative 2D and 3D reconstruction have been developed and provided for free. (au) 140 refs

  8. Circular Hough transform diffraction analysis: A software tool for automated measurement of selected area electron diffraction patterns within Digital Micrograph™

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, D.R.G. [Institute of Materials and Engineering Science, ANSTO, PMB 1, Menai, NSW 2234 (Australia)], E-mail: drm@ansto.gov.au

    2008-03-15

    A software tool (script and plugin) for computing circular Hough transforms (CHT) in Digital Micrograph{sup TM} has been developed, for the purpose of automated analysis of selected area electron diffraction patterns (SADPs) of polycrystalline materials. The CHT enables the diffraction pattern centre to be determined with sub-pixel accuracy, regardless of the exposure condition of the transmitted beam or whether a beam stop is present. Radii of the diffraction rings can also be measured with sub-pixel precision. If the pattern is calibrated against a known camera length, then d-spacings with an accuracy of better than 1% can be obtained. These measurements require no a priori knowledge of the pattern and very limited user interaction. The accuracy of the CHT is degraded by distortion introduced by the projector lens, and this should be minimised prior to pattern acquisition. A number of optimisations in the CHT software enable rapid processing of patterns: a typical analysis of a 1k×1k image takes just a few minutes. The CHT tool appears robust and is even able to accurately measure SADPs with very incomplete diffraction rings due to texture effects. This software tool is freely downloadable via the Internet.
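
    The core of such a tool is the circular Hough transform itself: each edge pixel votes for all centres consistent with a candidate radius, and accumulator peaks yield centre and radius. The snippet below is a minimal stand-in using scikit-image rather than the Digital Micrograph{sup TM} script; the image and radius range are illustrative.

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# Synthetic "diffraction ring": a circle of radius 30 centred at (50, 50).
img = np.zeros((100, 100), dtype=np.uint8)
rr, cc = circle_perimeter(50, 50, 30)
img[rr, cc] = 1

# Vote over a range of candidate radii; peaks give centre and radius.
radii = np.arange(20, 41)
hspaces = hough_circle(img, radii)
accums, cx, cy, found_r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
print(f"centre=({cx[0]}, {cy[0]}), radius={found_r[0]}")
```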

  9. Adaptive multi-resolution Modularity for detecting communities in networks

    Science.gov (United States)

    Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He

    2018-02-01

    Community structure is a common topological property of complex networks, which has attracted much attention from various fields. Optimizing quality functions such as Modularity is a popular strategy for detecting community structures. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularity functions can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of different Modularity functions. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of multi-resolution Modularity in community detection. The adaptive Modularity, as a kind of multi-resolution method, can naturally overcome the first-type resolution limit of Modularity and detect communities at different scales; it can quicken the disconnecting of communities and delay the breakup of communities in heterogeneous networks; and thus it is expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type resolution limit of Modularity.
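
    Multi-resolution Modularity is commonly realized by adding a resolution parameter gamma to the quality function, with larger gamma favouring smaller communities. The sketch below assumes a recent NetworkX release whose greedy optimizer exposes such a parameter; the adaptive scheme proposed in the paper itself is not reproduced here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()

# Sweep the resolution parameter: higher values favour smaller communities.
for gamma in (0.5, 1.0, 2.0):
    communities = greedy_modularity_communities(G, resolution=gamma)
    print(f"gamma={gamma}: {len(communities)} communities")
```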

  10. A new class of morphological pyramids for multiresolution image analysis

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Asano, T; Klette, R; Ronse, C

    2003-01-01

    We study nonlinear multiresolution signal decomposition based on morphological pyramids. Motivated by a problem arising in multiresolution volume visualization, we introduce a new class of morphological pyramids. In this class the pyramidal synthesis operator always has the same form, i.e. a

  11. SU-G-IeP1-01: A Novel MRI Post-Processing Algorithm for Visualization of the Prostate LDR Brachytherapy Seeds and Calcifications Based On B0 Field Inhomogeneity Correction and Hough Transform

    Energy Technology Data Exchange (ETDEWEB)

    Nosrati, R [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Soliman, A; Owrangi, A [Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); Ghugre, N [Sunnybrook Research Institute, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Morton, G [Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada); Pejovic-Milic, A [Ryerson University, Toronto, Ontario (Canada); Song, W [Ryerson University, Toronto, Ontario (Canada); Sunnybrook Research Institute, Toronto, Ontario (Canada); Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm to produce positive contrast for the seeds and prostatic calcifications and to differentiate between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm³) was further processed. The induced field inhomogeneity due to the presence of titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence via the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the "Prewitt" operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of each detected ellipse, an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes. Further
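
    Step 2 of the feature-extraction chain (ellipse detection on a Prewitt edge map) can be illustrated with scikit-image, assumed here in place of the authors' implementation. The synthetic image, threshold, and size limits are illustrative, and hough_ellipse is restricted to roughly seed-sized ellipses.

```python
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.filters import prewitt
from skimage.transform import hough_ellipse

# Synthetic image containing one bright seed-like ellipse.
img = np.zeros((80, 80))
rr, cc = ellipse_perimeter(40, 40, 4, 10)    # small elongated ellipse
img[rr, cc] = 1.0

edges = prewitt(img) > 0.1                   # step 1: Prewitt edge map

# Step 2: Hough voting for ellipses of roughly seed-like dimensions.
result = hough_ellipse(edges, threshold=4, min_size=4, max_size=25)
result.sort(order="accumulator")             # best candidate last
best = result[-1]
print(f"centre=({best['yc']:.0f}, {best['xc']:.0f}), "
      f"axes=({best['a']:.1f}, {best['b']:.1f})")
```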

  12. Interactive indirect illumination using adaptive multiresolution splatting.

    Science.gov (United States)

    Nichols, Greg; Wyman, Chris

    2010-01-01

    Global illumination provides a visual richness not achievable with the direct illumination models used by most interactive applications. To generate global effects, numerous approximations attempt to reduce global illumination costs to levels feasible in interactive contexts. One such approximation, reflective shadow maps, samples a shadow map to identify secondary light sources whose contributions are splatted into eye space. This splatting introduces significant overdraw that is usually reduced by artificially shrinking each splat's radius of influence. This paper introduces a new multiresolution approach for interactively splatting indirect illumination. Instead of reducing GPU fill rate by reducing splat size, we reduce fill rate by rendering splats into a multiresolution buffer. This takes advantage of the low-frequency nature of diffuse and glossy indirect lighting, allowing rendering of indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities. Because this multiresolution rendering occurs on a per-splat basis, we can significantly reduce fill rate without arbitrarily clipping splat contributions below a given threshold: those regions are simply rendered at a coarse resolution.

  13. Traffic Multiresolution Modeling and Consistency Analysis of Urban Expressway Based on Asynchronous Integration Strategy

    Directory of Open Access Journals (Sweden)

    Liyan Zhang

    2017-01-01

    The paper studies a multiresolution traffic flow simulation model of an urban expressway. Firstly, compared with a two-level hybrid model, a three-level multiresolution hybrid model has been chosen. Then, the multiresolution simulation framework and integration strategies are introduced. Thirdly, the paper proposes an urban expressway multiresolution traffic simulation model with an asynchronous integration strategy based on Set Theory, which includes three submodels: macromodel, mesomodel, and micromodel. After that, the applicable conditions and derivation process of the three submodels are discussed in detail. In addition, in order to simulate and evaluate the multiresolution model, a "simple simulation scenario" of the North-South Elevated Expressway in Shanghai has been established. The simulation results showed the following. (1) The volume-density relationships of the three submodels agree with detector data. (2) When traffic density is high, the macromodel has high precision, smaller error, and less dispersion in its results. Compared with the macromodel, the simulation accuracies of the micromodel and mesomodel are lower and their errors are bigger. (3) The multiresolution model can simulate characteristics of traffic flow, capture traffic waves, and keep the consistency of traffic state transitions. Finally, the results showed that the novel multiresolution model achieves higher simulation accuracy and is feasible and effective in a real traffic simulation scenario.

  14. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N; Haddad, Paul R

    2001-01-01

    The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course

  15. Decompositions of bubbly flow PIV velocity fields using discrete wavelets multi-resolution and multi-section image method

    International Nuclear Information System (INIS)

    Choi, Je-Eun; Takei, Masahiro; Doh, Deog-Hee; Jo, Hyo-Jae; Hassan, Yassin A.; Ortiz-Villafuerte, Javier

    2008-01-01

    Currently, wavelet transforms are widely used for the analysis of particle image velocimetry (PIV) velocity vector fields. This is because the wavelet provides not only spatial information about the velocity vectors but also information in the time and frequency domains. In this study, a discrete wavelet transform is applied to real PIV images of bubbly flows. The vector fields obtained by a self-made cross-correlation PIV algorithm were used for the discrete wavelet transform. The performance of the discrete wavelet transform was investigated by changing the decomposition level. The images decomposed by wavelet multi-resolution showed conspicuous characteristics of the bubbly flows at the different levels. Areas of high bubble concentration could be identified by the constructed discrete wavelet transform algorithm, in which higher-level wavelets play the dominant role in revealing the flow characteristics.
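
    The decomposition step can be sketched by applying a 2D discrete wavelet transform to one velocity component of the PIV field and inspecting the energy captured at each level. PyWavelets and a synthetic field are assumed; this illustrates only the multi-resolution decomposition, not the authors' cross-correlation PIV algorithm.

```python
import numpy as np
import pywt

# Synthetic u-velocity component on a 64x64 PIV grid.
y, x = np.mgrid[0:64, 0:64]
u = np.sin(x / 8.0) + 0.2 * np.random.randn(64, 64)

# Three-level 2D discrete wavelet decomposition.
coeffs = pywt.wavedec2(u, "haar", level=3)

# Energy per level: coeffs[0] is the approximation, the rest are details.
print("approximation energy:", np.sum(coeffs[0] ** 2))
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    detail = sum(np.sum(c ** 2) for c in (cH, cV, cD))
    print(f"detail energy at level {lvl}: {detail:.2f}")
```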

  16. Invariant Hough Random Ferns for Object Detection and Tracking

    Directory of Open Access Journals (Sweden)

    Yimin Lin

    2014-01-01

    This paper introduces invariant Hough random ferns (IHRF), incorporating rotation and scale invariance into the local feature description, random ferns classifier training, and Hough voting stages. The approach is especially suited for object detection under changes in object appearance and scale, partial occlusions, and pose variations. Its efficacy is validated through experiments on a large set of challenging benchmark datasets, and the results demonstrate that the proposed method outperforms state-of-the-art conventional methods such as bounding-box-based and part-based methods. Additionally, we propose an efficient clustering scheme based on the local patches' appearance and their geometric relations that can provide pixel-accurate, top-down segmentations from IHRF back-projections. This refined segmentation can be used to improve the quality of online object tracking because it avoids the drifting problem. Thus, an online tracking framework based on IHRF, which is trained and updated in each frame to distinguish and segment the object from the background, is established. Finally, the experimental results on both object segmentation and long-term object tracking show that this method yields accurate and robust tracking performance in a variety of complex scenarios, especially in cases of severe occlusions and nonrigid deformations.

  17. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera

    2014-12-31

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  18. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction, measured by the confusion matrix, reaches 94.6%.

  19. Image compression using the W-transform

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not require the signal length to be a power of two. Furthermore, it does not call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  20. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    Science.gov (United States)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. A climatic dataset from NCEP is used for training the proposed models (Jan.'69 to Dec.'94), which are then applied to corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan.'95-Dec.'05) and forecast (Jan.'06-Dec.'35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining the Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) models is used to model the representative PCs to obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.
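
    The preprocessing chain described here (cluster the predictors, then keep principal components per cluster) can be sketched with scikit-learn. The features standing in for multi-scale wavelet entropies below are placeholders, and the Volterra downscaling model is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))     # 120 months x 20 climatic predictors

# Placeholder per-predictor features; the real ones would be multi-scale
# wavelet entropies of each predictor's time series.
features = np.stack([X.std(axis=0), np.abs(X).mean(axis=0)], axis=1)

# Cluster predictors, then keep PCs explaining ~90% variance per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    members = X[:, labels == k]
    pcs = PCA(n_components=0.90).fit_transform(members)
    print(f"cluster {k}: {members.shape[1]} predictors -> {pcs.shape[1]} PCs")
```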

  1. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel

  2. Morphological pyramids in multiresolution MIP rendering of large volume data : Survey and new results

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    We survey and extend nonlinear signal decompositions based on morphological pyramids, and their application to multiresolution maximum intensity projection (MIP) volume rendering with progressive refinement and perfect reconstruction. The structure of the resulting multiresolution rendering

  3. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    Energy Technology Data Exchange (ETDEWEB)

    Mirkin, B. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory.

  4. Multiresolution persistent homology for excessively large biomolecular datasets

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin; Zhao, Zhixiong [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824 (United States)

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable with coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification; to our knowledge, this is the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  5. Static multiresolution grids with inline hierarchy information for cosmic ray propagation

    Energy Technology Data Exchange (ETDEWEB)

    Müller, Gero, E-mail: gero.mueller@physik.rwth-aachen.de [III. Physikalisches Institut A, RWTH Aachen University, D-52056 Aachen (Germany)

    2016-08-01

    For numerical simulations of cosmic-ray propagation, fast access to static magnetic field data is required. We present a data structure for multiresolution vector grids which is optimized for fast access, low overhead, and shared-memory use. The hierarchy information is encoded into the grid itself, reducing the memory overhead. Benchmarks show that in certain scenarios the differences in deflections introduced by sampling the magnetic field model can be significantly reduced by using the multiresolution approach.
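
    One way to encode hierarchy information inline, in the spirit described above, is to let each coarse cell store either a field value directly or a refined sub-block, so that lookups need no separate tree structure. The sketch below is a hypothetical Python illustration of that idea, not the data structure from the paper.

```python
import numpy as np

class MultiResGrid:
    """Coarse grid whose cells hold either a vector or a refined sub-grid."""

    def __init__(self, cells):
        self.cells = cells                  # dict: (i, j, k) -> value or block

    def lookup(self, i, j, k, sub=(0, 0, 0)):
        cell = self.cells[(i, j, k)]
        if isinstance(cell, dict):          # inline hierarchy: descend once
            return cell[sub]
        return cell                         # leaf: field value stored directly

# Cell (0,0,0) is refined into 2x2x2 sub-cells; cell (1,0,0) is a leaf.
grid = MultiResGrid({
    (0, 0, 0): {(a, b, c): np.array([a, b, c], float)
                for a in range(2) for b in range(2) for c in range(2)},
    (1, 0, 0): np.array([1.0, 0.0, 0.0]),
})
print(grid.lookup(0, 0, 0, sub=(1, 1, 0)))
print(grid.lookup(1, 0, 0))
```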

  6. Multiresolution with Hierarchical Modulations for Long Term Evolution of UMTS

    Directory of Open Access Journals (Sweden)

    Soares Armando

    2009-01-01

    In the Long Term Evolution (LTE) of UMTS, the Interactive Mobile TV scenario is expected to be a popular service. By using multiresolution with hierarchical modulations, this service is expected to be broadcast to larger groups, achieving a significant reduction in power transmission or an increase in average throughput. Interactivity in the uplink direction will not be affected by multiresolution in the downlink channels, since it will be supported by dedicated uplink channels. The presence of interactivity will allow a certain amount of link-quality feedback for groups or individuals. As a result, an optimization of the achieved throughput will be possible. In this paper, system-level simulations of multi-cellular networks considering broadcast/multicast transmissions using the OFDM/OFDMA-based LTE technology are presented to evaluate the capacity, in terms of number of TV channels with given bit rates, total spectral efficiency, and coverage. Multiresolution with hierarchical modulations is evaluated for the achievable throughput gain compared to the single-resolution Multimedia Broadcast/Multicast Service (MBMS) standardised in Release 6.

  7. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    Science.gov (United States)

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We discuss theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach, and (iii) the dimensional approach, and we propose a three-dimensional continuous representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with results given by others, the high accuracy of 85% with 13 emotions and 32 subjects obtained by our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
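
    The classification pipeline (per-channel DWT features feeding a classifier) can be sketched with PyWavelets and scikit-learn. The synthetic signals, the choice of sub-band energies as features, and the SVM settings are illustrative assumptions, not the DEAP protocol.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def dwt_features(sig, wavelet="db4", level=4):
    """Sub-band energies from a multiresolution decomposition."""
    return np.array([np.sum(c ** 2)
                     for c in pywt.wavedec(sig, wavelet, level=level)])

# Two synthetic "emotion" classes with different spectral content.
n, length = 60, 512
t = np.arange(length)
freqs = np.r_[rng.uniform(5, 10, n), rng.uniform(30, 40, n)]
X = np.array([dwt_features(np.sin(2 * np.pi * f * t / length)
                           + rng.normal(size=length)) for f in freqs])
y = np.r_[np.zeros(n), np.ones(n)]

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```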

  8. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on ultimate images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.

  9. On analysis of electroencephalogram by multiresolution-based energetic approach

    Science.gov (United States)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

    Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. A principal application of EEG is the study of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at a better understanding of the mechanism causing the epileptic disorder and accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
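
    The energetic approach amounts to tracking how signal energy redistributes across wavelet sub-bands over time. A minimal sketch with PyWavelets follows; the synthetic "ictal" burst and the band and window choices are assumptions for illustration only.

```python
import numpy as np
import pywt

fs, win = 256, 256                     # 1-second analysis windows
rng = np.random.default_rng(2)
t = np.arange(10 * fs) / fs

# Background EEG-like noise with a strong rhythmic burst in seconds 5-8.
eeg = rng.normal(size=t.size)
burst = (t >= 5) & (t < 8)
eeg[burst] += 3 * np.sin(2 * np.pi * 6 * t[burst])   # 6 Hz rhythm

for start in range(0, eeg.size - win + 1, win):
    coeffs = pywt.wavedec(eeg[start:start + win], "db4", level=5)
    e = np.array([np.sum(c ** 2) for c in coeffs])
    frac = e / e.sum()                 # energy fraction per sub-band
    print(f"t={start // fs:2d}s  low-band fraction={frac[:2].sum():.2f}")
```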

  10. Adaptive Multiresolution Methods: Practical issues on Data Structures, Implementation and Parallelization*

    Directory of Open Access Journals (Sweden)

    Bachmann M.

    2011-12-01

    The concept of fully adaptive multiresolution finite volume schemes has been developed and investigated during the past decade. Here, grid adaptation is realized by performing a multiscale decomposition of the discrete data at hand. By means of hard thresholding, the resulting multiscale data are compressed, and from the remaining data a locally refined grid is constructed. The aim of the present work is to give a self-contained overview of the construction of an appropriate multiresolution analysis using biorthogonal wavelets, its efficient realization by means of hash maps using global cell identifiers, and the parallelization of the multiresolution-based grid adaptation via MPI using space-filling curves.
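
    The adaptation step (multiscale decomposition, hard thresholding, refinement where details survive) can be sketched in a simple 1D setting with PyWavelets: decompose cell averages, zero out detail coefficients below a threshold, and mark the surviving details as cells to refine. The threshold value is illustrative.

```python
import numpy as np
import pywt

# Cell averages of a function with a sharp front at x = 0.5.
x = np.linspace(0, 1, 256)
u = np.tanh(50 * (x - 0.5))

coeffs = pywt.wavedec(u, "bior2.2", level=4)   # biorthogonal multiscale data
eps = 1e-3                                      # hard-threshold value

# Hard thresholding: discard negligible details, keep significant ones.
kept = [coeffs[0]] + [np.where(np.abs(d) > eps, d, 0.0) for d in coeffs[1:]]
refine = [np.abs(d) > eps for d in coeffs[1:]]  # cells needing refinement

u_compressed = pywt.waverec(kept, "bior2.2")
print("max reconstruction error:", np.abs(u - u_compressed).max())
print("significant details per level:", [int(m.sum()) for m in refine])
```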

  11. Single-resolution and multiresolution extended-Kalman-filter-based reconstruction approaches to optical refraction tomography.

    Science.gov (United States)

    Naik, Naren; Vasu, R M; Ananthasayanam, M R

    2010-02-20

    The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances.
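
    For readers unfamiliar with the EKF machinery, the sketch below shows a generic predict/update cycle in numpy for a nonlinear measurement model. It is a textbook EKF rather than the state-variable model of this paper, and all matrices and the toy measurement are illustrative.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended-Kalman-filter predict/update cycle."""
    # Predict: propagate state and covariance through the process model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize the measurement model, correct with the innovation.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new

# Toy example: random-walk state, squared-norm measurement.
f = lambda x: x                              # identity process model
F = np.eye(2)
h = lambda x: np.array([x @ x])              # nonlinear measurement
H = lambda x: (2 * x)[None, :]               # its Jacobian
x, P = np.array([1.0, 0.5]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.4]), f, F, h, H,
                0.01 * np.eye(2), np.array([[0.1]]))
print(x)
```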

  12. Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography

    Science.gov (United States)

    Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.

    1995-04-01

    This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen features of mammography more obvious without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.

  13. A multi-resolution envelope-power based model for speech intelligibility

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Ewert, Stephan D.; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well ... to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented where the SNRenv is estimated in temporal segments ... with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv ...

  14. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    Science.gov (United States)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report first on our work on the development of numerical methods for tangent curve computation.

  15. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio; Fokas, Athanasios S.

    2011-01-01

    We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed

  16. A Misleading Review of Response Bias: Comment on McGrath, Mitchell, Kim, and Hough (2010)

    Science.gov (United States)

    Rohling, Martin L.; Larrabee, Glenn J.; Greiffenstein, Manfred F.; Ben-Porath, Yossef S.; Lees-Haley, Paul; Green, Paul; Greve, Kevin W.

    2011-01-01

    In the May 2010 issue of "Psychological Bulletin," R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such…

  17. A new fractional wavelet transform

    Science.gov (United States)

    Dai, Hongzhe; Zheng, Zhibao; Wang, Wei

    2017-03-01

    The fractional Fourier transform (FRFT) is a potent tool for analyzing time-varying signals. However, it fails to locate the fractional Fourier domain (FRFD) frequency contents, which are required in some applications. A novel fractional wavelet transform (FRWT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the time-FRFD-frequency plane. The definition, basic properties, inverse transform and reproducing kernel of the proposed FRWT are considered. It has been shown that an FRWT with proper order corresponds to the classical wavelet transform (WT). The multiresolution analysis (MRA) associated with the developed FRWT, together with the construction of orthogonal fractional wavelets, is also presented. Three applications are discussed: the analysis of signals with time-varying frequency content, the FRFD spectrum estimation of signals involving noise, and the construction of the fractional Haar wavelet. Simulations verify the validity of the proposed FRWT.

  18. Combining nonlinear multiresolution system and vector quantization for still image compression

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, they cannot exploit nonlinear features of the signal for compression. Linear filters are known to blur edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
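
    A Laplacian-style pyramid built around an edge-preserving median filter, as proposed above, can be sketched as follows: each level stores the difference between the current image and its median-smoothed, downsampled-then-upsampled version. SciPy is assumed, and the kernel size and depth are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def median_pyramid(img, levels=3, size=3):
    """Laplacian-like pyramid using a median filter instead of a linear one."""
    details, current = [], img.astype(float)
    for _ in range(levels):
        smooth = median_filter(current, size=size)
        coarse = smooth[::2, ::2]                      # downsample
        up = zoom(coarse, 2, order=0)[:current.shape[0], :current.shape[1]]
        details.append(current - up)                   # detail image
        current = coarse
    return details, current                            # details + base image

img = np.tile(np.linspace(0, 1, 64), (64, 1))
img[20:40, 20:40] += 1.0                               # a sharp-edged square
details, base = median_pyramid(img)
print([d.shape for d in details], base.shape)
```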

  19. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir; Ben-Chen, Mirela; Gotsman, Craig

    2010-01-01

    process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel

  20. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  1. Multi-Resolution Multimedia QoE Models for IPTV Applications

    Directory of Open Access Journals (Sweden)

    Prasad Calyam

    2012-01-01

    Internet television (IPTV) is rapidly gaining popularity and is being widely deployed in content delivery networks on the Internet. In order to proactively deliver optimum user quality of experience (QoE) for IPTV, service providers need to identify network bottlenecks in real time. In this paper, we develop psycho-acoustic-visual models that can predict user QoE of multimedia applications in real time based on online network status measurements. Our models are neural network based and cater to multi-resolution IPTV applications that include QCIF, QVGA, SD, and HD resolutions encoded using popular audio and video codec combinations. On the network side, our models account for jitter and loss levels, as well as router queuing disciplines: packet-ordered and time-ordered FIFO. We evaluate the performance of our multi-resolution multimedia QoE models in terms of prediction characteristics, accuracy, speed, and consistency. Our evaluation results demonstrate that the models are pertinent for real-time QoE monitoring and resource adaptation in IPTV content delivery networks.

  2. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Science.gov (United States)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely studied in many domains, and they increasingly require volumetric data to be processed in real time. Performance is therefore constrained by hardware resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  3. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  4. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram

    Directory of Open Access Journals (Sweden)

    Kuo-Chin Fan

    2010-01-01

    The Average Motion Energy (AME) image is a good way to describe human motions. However, it faces a computational efficiency problem as the number of database templates increases. In this paper, we propose a histogram-based approach to improve computational efficiency. We convert the human action/gait recognition problem to a histogram matching problem. In order to speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH. In that case, the computation time depends only on the number of partitioned histogram bins, which is much smaller than for the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.

  5. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell Barrera

    2015-01-01

    approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i

  6. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime

    2017-04-01

    Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variabilities in hydrological variations and investigating their determinism is an important issue in the context of climate change, as these variabilities can occasionally be superimposed on long-term trends possibly due to climate change. It is also important to refine our understanding of time-scale dependent linkages between large-scale climatic variations and hydrological responses on regional or local scales. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling of precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (a so-called multiresolution ESD model) using SLP as the large-scale predictor greatly improved simulation of the low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, a continuous wavelet transform of precipitation simulated with multiresolution ESD confirmed the model's ability to explain variability at all time-scales. A sensitivity analysis of the model to the choice of scale and wavelet function was also conducted. It appeared that whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time-scales, which resulted from the wavelet decomposition, revealed different structures according to time-scale, suggesting possibly different determinisms. More particularly, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic

  7. A multi-resolution HEALPix data structure for spherically mapped point data

    Directory of Open Access Journals (Sweden)

    Robert W. Youngren

    2017-06-01

    Data describing entities with locations that are points on a sphere are described as spherically mapped. Several data structures designed for spherically mapped data have been developed. One of them, known as Hierarchical Equal Area iso-Latitude Pixelization (HEALPix), partitions the sphere into twelve diamond-shaped equal-area base cells and then recursively subdivides each cell into four diamond-shaped subcells, continuing to the desired level of resolution. Twelve quadtrees, one associated with each base cell, store the data records associated with that cell and its subcells. HEALPix has been used successfully for numerous applications, notably including cosmic microwave background data analysis. However, for applications involving sparse point data, HEALPix has possible drawbacks, including inefficient memory utilization, overwriting of proximate points, and return of spurious points for certain queries. A multi-resolution variant of HEALPix specifically optimized for sparse point data was developed. The new data structure allows different areas of the sphere to be subdivided at different levels of resolution. It combines HEALPix's positive features with the advantages of multi-resolution, including reduced memory requirements and improved query performance. An implementation of the new Multi-Resolution HEALPix (MRH) data structure was tested using spherically mapped data from four different scientific applications (warhead fragmentation trajectories, weather station locations, galaxy locations, and synthetic locations). Four types of range queries were applied to each data structure for each dataset. Compared to HEALPix, MRH used two to four orders of magnitude less memory for the same data, and on average its queries executed 72% faster.
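
    A multi-resolution point index in the HEALPix scheme can be sketched by keying records on (order, nested-pixel) pairs, so that crowded regions are pushed to deeper orders than sparse ones. The snippet assumes the healpy package and a simple occupancy rule; it illustrates the keying idea, not the MRH implementation.

```python
import numpy as np
import healpy as hp

rng = np.random.default_rng(3)
# Random points on the sphere (colatitude theta, longitude phi).
theta = np.arccos(rng.uniform(-1, 1, 1000))
phi = rng.uniform(0, 2 * np.pi, 1000)

index = {}
for th, ph in zip(theta, phi):
    order = 2                                   # start coarse (nside = 4)
    while True:
        pix = hp.ang2pix(2 ** order, th, ph, nest=True)
        bucket = index.setdefault((order, pix), [])
        if len(bucket) < 8 or order == 10:      # room here, or max depth
            bucket.append((th, ph))
            break
        order += 1                              # retry at finer resolution
print(f"{len(index)} occupied cells across multiple orders")
```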

  8. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis

    NARCIS (Netherlands)

    Gu, Y.H.; Katsaggelos, A.K.

    1994-01-01

    A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering

  9. MR-CDF: Managing multi-resolution scientific data

    Science.gov (United States)

    Salem, Kenneth

    1993-01-01

    MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
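
    The trade-off described here (fast low-resolution access, slower high-resolution access) can be sketched by storing a dataset together with precomputed coarser versions and serving the finest version that fits a request budget. This is a hypothetical illustration, not the MR-CDF API.

```python
import numpy as np

class MultiResStore:
    """Store a 2D field together with precomputed lower-resolution versions."""

    def __init__(self, data, levels=3):
        self.versions = [data]
        for _ in range(levels):
            d = self.versions[-1]
            # Coarsen by averaging 2x2 blocks.
            self.versions.append(d.reshape(d.shape[0] // 2, 2,
                                           d.shape[1] // 2, 2).mean(axis=(1, 3)))

    def get(self, max_cells):
        """Return the finest version whose size fits the budget."""
        for v in self.versions:                 # finest first
            if v.size <= max_cells:
                return v
        return self.versions[-1]                # fall back to coarsest

store = MultiResStore(np.random.rand(256, 256))
quick = store.get(max_cells=5000)               # fast, low-resolution answer
print(quick.shape)                              # (64, 64)
```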

  10. Multiresolution Computation of Conformal Structures of Surfaces

    Directory of Open Access Journals (Sweden)

    Xianfeng Gu

    2003-10-01

    An efficient multiresolution method to compute global conformal structures of nonzero-genus triangle meshes is introduced. The homology and cohomology groups of the meshes are computed explicitly; then a basis of harmonic one-forms and a basis of holomorphic one-forms are constructed. A progressive mesh is generated to represent the original surface at different resolutions. The conformal structure is computed for the coarse level first and then used as an estimate for that of the finer level, where it is refined to the conformal structure of the finer level using the conjugate gradient method.

  11. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  12. 3D shape detection of the indoor space based on 3D-Hough method

    OpenAIRE

    安齋, 達也; ANZAI, Tatsuya

    2013-01-01

    This paper describes methods for detecting the 3D shapes of indoor spaces that are represented as combinations of planes such as walls, desks, and so on. Detecting the planes makes it possible to perform calibration of multiple sensors and 3D mapping, and enables various services such as the acquisition of life logs, AR interaction, and intruder detection. This paper proposes and verifies three algorithms. First, it describes a way to use the 2D Hough transform. The proposed technique converts 3D dat...
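
    A 3D Hough transform for planes parameterizes each plane by a unit normal (theta, phi) and offset rho, and lets every 3D point vote for the parameters of planes passing through it. The sketch below is a brute-force numpy illustration under these assumptions, with a deliberately coarse accumulator.

```python
import numpy as np

rng = np.random.default_rng(4)
# Noisy points on the plane z = 1 (normal (0, 0, 1), rho = 1).
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       np.ones(200) + 0.01 * rng.normal(size=200)])

thetas = np.linspace(0, np.pi, 18)     # polar angle of the plane normal
phis = np.linspace(0, 2 * np.pi, 36)   # azimuth of the plane normal
rhos = np.linspace(-2, 2, 40)          # signed distance to the origin

acc = np.zeros((len(thetas), len(phis), len(rhos)))
for i, th in enumerate(thetas):
    for j, ph in enumerate(phis):
        n = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph), np.cos(th)])
        r = pts @ n                                    # rho implied per point
        k = np.clip(np.digitize(r, rhos) - 1, 0, len(rhos) - 1)
        np.add.at(acc[i, j], k, 1)                     # cast the votes

i, j, k = np.unravel_index(acc.argmax(), acc.shape)
print(f"theta={thetas[i]:.2f}, phi={phis[j]:.2f}, rho={rhos[k]:.2f}")
```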

  13. Multisensor multiresolution data fusion for improvement in classification

    Science.gov (United States)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancements in technology have facilitated the easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent significant information which may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral (1 m) LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil, and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling by interpolation to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. In the case of the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. In order to extract textural features, occurrence and co-occurrence statistics are considered, and the features are extracted from all three bands of the RGB data. After extracting the features, Support Vector Machines (SVMs) have been used for training and classification. To increase the classification accuracy, post-processing steps are applied: spurious noise such as salt-and-pepper noise is removed, followed by majority-voting filtering within objects for better object classification.

  14. Digital Correlation based on Wavelet Transform for Image Detection

    International Nuclear Information System (INIS)

    Barba, L; Vargas, L; Torres, C; Mattos, L

    2011-01-01

    In this work, a method is presented for optimizing digital correlators to improve feature detection in images using the wavelet transform as well as subband filtering. An approach of wavelet-based image contrast enhancement is proposed in order to increase the performance of digital correlators. The multiresolution representation is employed to improve the high frequency content of images, taking into account the input contrast measured on the original image. The energy of correlation peaks and the discrimination level of several objects are improved with this technique. To demonstrate the potential of the wavelet transform for extracting characteristics, small objects inside reference images are detected successfully.

  15. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    Science.gov (United States)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROI), and dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to non-uniform heating; non-uniform heating affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology includes local correlation, Gaussian scale analysis, and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because thermal behavior is well modeled by Gaussian smooth contours. The Gaussian scale is used to analyze details in the image via multi-resolution analysis, avoiding the problems of low contrast, non-uniform heating, and the selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. Thus, we provide a methodology for ROI extraction based on multi-resolution analysis that performs as well as or better than the dedicated algorithms proposed in the state of the art.

  16. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio

    2011-09-01

    We present a fast implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF) [1], [2] with a wavelet-based version of a recently discovered regularization technique [3]. The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data. These display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms.

  17. Towards discrete wavelet transform-based human activity recognition

    Science.gov (United States)

    Khare, Manish; Jeon, Moongu

    2017-06-01

    Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on the wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as features of human objects to exploit the advantages of its multiresolution analysis. The proposed method is tested on multiple levels of the DWT. Experiments are carried out on different standard action datasets including KTH and i3DPost. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures and is found to have better recognition accuracy than the state-of-the-art methods.
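
    A minimal sketch of the feature-extraction step (PyWavelets assumed; the subband statistics below are illustrative choices, not the authors' exact descriptor, and the classifier and dataset handling are omitted):

        # Multi-level 2D DWT coefficients as a frame descriptor (illustrative).
        import numpy as np
        import pywt

        def dwt_feature(frame, wavelet="haar", level=3):
            # Decomposition: [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
            coeffs = pywt.wavedec2(frame, wavelet, level=level)
            feats = [np.mean(np.abs(coeffs[0])), np.std(coeffs[0])]
            for cH, cV, cD in coeffs[1:]:
                for band in (cH, cV, cD):
                    # Energy-like statistics per detail subband.
                    feats += [np.mean(np.abs(band)), np.std(band)]
            return np.asarray(feats)

        frame = np.random.rand(120, 160)   # stand-in for a human-object patch
        print(dwt_feature(frame).shape)    # 2 + 3 levels * 3 bands * 2 stats = (20,)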

  18. Multiresolution Network Temporal and Spatial Scheduling Model of Scenic Spot

    Directory of Open Access Journals (Sweden)

    Peng Ge

    2013-01-01

    Tourism is one of the pillar industries of the world economy. Low-carbon tourism will be the mainstream direction of scenic spots' development, and the path of low-carbon tourism development is to develop the economy and protect the environment simultaneously. However, as tourist numbers increase, the loads on scenic spots get out of control, and instantaneous overload at some spots creates the false impression that the whole scenic spot is at full capacity. Therefore, realizing real-time scheduling becomes the primary purpose of scenic spot management. This paper divides the tourism distribution system into several logically related subsystems and constructs a temporal and spatial multiresolution network scheduling model according to the regularity of scenic spots' overload phenomena in time and space. It also defines a dynamic distribution probability and an equivalent dynamic demand to realize real-time prediction. We define a gravitational function between fields and take it as the utility of the schedule; after solving the transportation model at each resolution, the system achieves a hierarchical balance between demand and capacity. The last part of the paper analyzes the time complexity of constructing a multiresolution distribution system.

  19. The gridding method for image reconstruction by Fourier transformation

    International Nuclear Information System (INIS)

    Schomberg, H.; Timmer, J.

    1995-01-01

    This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function w and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = wf is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
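
    A hedged 1D toy of the three steps (the Gaussian window, the uniformly dense random sampling, and the crude normalization are simplifying assumptions; practical gridding adds density compensation and carefully designed kernels):

        # 1D gridding toy: spread samples of f_hat onto a grid, inverse FFT,
        # then deapodize by dividing out the window. Illustrative only.
        import numpy as np

        N, sigma, width = 256, 1.0, 8
        x = np.arange(N)
        f_true = np.exp(-0.5 * ((x - N / 4) / 10.0) ** 2)       # signal to recover

        k_grid = np.arange(N) - N // 2                          # Cartesian frequency grid
        kj = np.random.uniform(-N // 2, N // 2 - 1, 8 * N)      # nonuniform sample locations
        F = np.exp(-2j * np.pi * np.outer(kj, x) / N) @ f_true  # "measured" samples of f_hat

        w_hat = lambda t: np.exp(-t * t / (2 * sigma ** 2))     # Gaussian window spectrum

        # Step 1: g_hat = w_hat * f_hat, accumulated on grid points near each sample.
        G = np.zeros(N, dtype=complex)
        for k, Fk in zip(kj, F):
            c = int(round(k)) + N // 2
            lo, hi = max(c - width, 0), min(c + width + 1, N)
            G[lo:hi] += Fk * w_hat(k_grid[lo:hi] - k)

        # Step 2: the inverse DFT gives g = w f (a pointwise product in space).
        g = np.fft.ifft(np.fft.ifftshift(G))

        # Step 3: deapodization, f = g / w, with w the inverse DFT of w_hat.
        w = np.fft.ifft(np.fft.ifftshift(w_hat(k_grid)))
        f_rec = np.real(g / w)
        f_rec *= f_true.max() / f_rec.max()                     # crude density normalization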

  20. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    Science.gov (United States)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation on a graphics processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.

  1. Accuracy assessment of tree crown detection using local maxima and multi-resolution segmentation

    International Nuclear Information System (INIS)

    Khalid, N; Hamid, J R A; Latif, Z A

    2014-01-01

    Diversity of trees forms an important component of forest ecosystems and needs proper inventories to assist forest personnel in their daily activities. However, tree parameter measurements are often constrained by physical inaccessibility of site locations, high costs, and time. With advancements in remote sensing technology, such as the provision of imagery with higher spatial and spectral resolution, a number of algorithms have been developed that fulfil the need for accurate tree inventory information in a cost-effective and timely manner over larger forest areas. This study intends to generate a tree distribution map of the Ampang Forest Reserve using the Local Maxima and Multi-Resolution image segmentation algorithms. The utilization of recent WorldView-2 imagery with Local Maxima and Multi-Resolution image segmentation proves to be capable of detecting and delineating tree crowns in their accurate standing positions.

  2. Application of a Hough Search for Continuous Gravitational Waves on Data from the Fifth LIGO Science Run

    Science.gov (United States)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Adams, C.; Adams, T.; Adhikari, R. X.; et al.

    2014-01-01

    We report on an all-sky search for periodic gravitational waves in the frequency range 50–1000 Hz with the first derivative of frequency in the range −8.9 × 10⁻¹⁰ Hz/s to zero in two years of data collected during LIGO's fifth science run. Our results employ a Hough transform technique, introducing a χ² test and an analysis of coincidences between the signal levels in years 1 and 2 of observations that offers a significant improvement in the product of strain sensitivity with compute cycles per data sample compared to previously published searches. Since our search yields no surviving candidates, we present results in the form of frequency dependent, 95% confidence upper limits on the strain amplitude h₀. The most stringent upper limit from year 1 is 1.0 × 10⁻²⁴ in the 158.00–158.25 Hz band. In year 2, the most stringent upper limit is 8.9 × 10⁻²⁵ in the 146.50–146.75 Hz band. This improved detection pipeline, which is at least two orders of magnitude more computationally efficient than our flagship Einstein@Home search, will be important for 'quick-look' searches in the Advanced LIGO and Virgo detector era.

  3. Long-range force and moment calculations in multiresolution simulations of molecular systems

    International Nuclear Information System (INIS)

    Poursina, Mohammad; Anderson, Kurt S.

    2012-01-01

    Multiresolution simulations of molecular systems such as DNAs, RNAs, and proteins are implemented using models with different resolutions ranging from a fully atomistic model to coarse-grained molecules, or even to continuum level system descriptions. For such simulations, pairwise force calculation is a serious bottleneck which can impose a prohibitive amount of computational load on the simulation if not performed wisely. Herein, we approximate the resultant force due to long-range particle-body and body-body interactions applicable to multiresolution simulations. Since the resultant force does not necessarily act through the center of mass of the body, it creates a moment about the mass center. Although this potentially important torque is neglected in many coarse-grained models which only use particle dynamics to formulate the dynamics of the system, it should be calculated and used when coarse-grained simulations are performed in a multibody scheme. Herein, the approximation for this moment due to far-field particle-body and body-body interactions is also provided.
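
    With particle positions r_i, forces f_i, and body mass center r_com, the resultant and its induced moment are F = Σ f_i and M = Σ (r_i − r_com) × f_i. A minimal sketch of this bookkeeping (illustrative only; the paper's contribution is an approximation that avoids the full pairwise sum, which is not reproduced here):

        # Resultant force and moment about the mass center (illustrative).
        import numpy as np

        def resultant_force_and_moment(r, f, m):
            """r: (N,3) particle positions, f: (N,3) forces, m: (N,) masses."""
            F = f.sum(axis=0)                          # resultant force on the body
            r_com = (m[:, None] * r).sum(axis=0) / m.sum()
            M = np.cross(r - r_com, f).sum(axis=0)     # moment about the mass center
            return F, M

        r = np.random.rand(100, 3)
        f = np.random.randn(100, 3)
        m = np.ones(100)
        F, M = resultant_force_and_moment(r, f, m)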

  4. Network coding for multi-resolution multicast

    DEFF Research Database (Denmark)

    2013-01-01

    A method, apparatus and computer program product for utilizing network coding for multi-resolution multicast is presented. A network source partitions source content into a base layer and one or more refinement layers. The network source receives one or more push-back messages from one or more network destination receivers, the push-back messages identifying the one or more refinement layers suited for each one of the one or more network destination receivers. The network source computes a network code involving the base layer and the one or more refinement layers for at least one of the one or more network destination receivers, and transmits the network code to the one or more network destination receivers in accordance with the push-back messages.

  5. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WiMAX) is a good candidate for delivering video signals because through WiMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WiMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WiMAX networks. In addition to the QoP parameters, video characteristics such as picture activity and video mobility also affect the QoS significantly.

  6. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  7. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr [Argonne National Lab., IL (United States)]; Kenyon, R.V. [Illinois Univ., Chicago, IL (United States)]

    1996-08-01

    In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, by which the subbands convey the necessary frequency domain information.

  8. Crack Identification in CFRP Laminated Beams Using Multi-Resolution Modal Teager–Kaiser Energy under Noisy Environments

    Science.gov (United States)

    Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław

    2017-01-01

    Carbon fiber reinforced polymer laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in carbon fiber reinforced polymer laminated beam components is of considerable significance for ensuring the integrity and safety of whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, which is the Teager–Kaiser energy of a mode shape represented in multi-resolution, for identifying cracks in carbon fiber reinforced polymer laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions; and its applicability is validated by diagnosing cracks in a carbon fiber reinforced polymer laminated beam, whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. This proposed method holds promise for developing crack identification systems for carbon fiber reinforced polymer laminates. PMID:28773016
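
    The discrete Teager–Kaiser energy operator at the heart of the method is ψ[x](n) = x(n)² − x(n−1)·x(n+1), which spikes where a mode shape changes abruptly. A minimal sketch (illustrative only; the multi-resolution representation and the measurement pipeline are omitted):

        # Discrete Teager-Kaiser energy operator applied to a mode-shape-like signal.
        import numpy as np

        def teager_kaiser(x):
            x = np.asarray(x, dtype=float)
            tke = np.empty_like(x)
            tke[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            tke[0], tke[-1] = tke[1], tke[-2]      # simple endpoint extension
            return tke

        # A local stiffness change shows up as a sharp peak in the TKE profile.
        s = np.sin(np.linspace(0, np.pi, 500))
        s[250:] += 0.001                           # tiny crack-like discontinuity
        peak = np.argmax(teager_kaiser(s))         # near index 250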

  9. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs of these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. The factorization results are then used as initial values for the next problem with a higher resolution; good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models.
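
    A hedged sketch of the coarse-to-fine initialization idea, with scikit-learn's NMF standing in for the constrained pure-component factorization (the coarsening rule, component count, and iteration budgets are illustrative assumptions, not the authors' software):

        # Coarse-to-fine pure-component factorization sketch (illustrative only).
        import numpy as np
        from sklearn.decomposition import NMF

        def coarsen(D):
            # Halve the spectral resolution by averaging neighboring channels.
            return 0.5 * (D[:, 0::2] + D[:, 1::2])

        D = np.random.rand(50, 1024)       # stand-in for time x wavelength spectra
        levels = [D]
        for _ in range(3):                 # 1024 -> 512 -> 256 -> 128 channels
            levels.append(coarsen(levels[-1]))

        W = H = None
        for Dk in reversed(levels):        # solve the coarsest problem first
            if H is None:
                model = NMF(n_components=3, init="nndsvd", max_iter=500)
                W = model.fit_transform(Dk)
            else:
                H = np.repeat(H, 2, axis=1) * 0.5   # prolong spectra to the finer grid
                model = NMF(n_components=3, init="custom", max_iter=200)
                W = model.fit_transform(Dk, W=W, H=H)
            H = model.components_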

  10. Multiresolution strategies for the numerical solution of optimal control problems

    Science.gov (United States)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes; specifically, the proposed approach results in fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm, which adapts dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibits sharp features and fewer points to regions where the solution is smooth. Thereby, the computational time and memory usage are reduced significantly, while maintaining an accuracy equivalent to that obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  11. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2015-01-01

    The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution analysis of texture can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification for real-life emotional recognition in speech.

  12. Spatial Quality of Manually Geocoded Multispectral and Multiresolution Mosaics

    Directory of Open Access Journals (Sweden)

    Andrija Krtalić

    2008-05-01

    A digital airborne multisensor and multiresolution system for the collection of information (images) about mine-suspected areas was created within the European Commission project Airborne Minefield Area Reduction (ARC, EC IST-2000-25300, http://www.arc.vub.ac.be) to gain a better perspective of mine-suspected areas (MSP) in the Republic of Croatia. The system consists of a matrix camera (visible and near infrared range of the electromagnetic spectrum, 0.4-1.1 µm), a thermal camera (thermal range of the electromagnetic spectrum, 8-14 µm) and a hyperspectral linear scanner. Because of the specific purpose and the objects sought in the scene, the flights for collecting the images took place at heights from 130 m to 900 m above the ground. The result of the small relative flight height and the large MSPs was a large number of images covering each MSP. Therefore, the need appeared to merge the images into larger parts, for a better perspective of whole MSPs and of the influences of detected objects on the scene. The system did not include a module for automatic mosaicking and geocoding, so mosaicking and subsequent geocoding were done manually. This process made possible the classification of the scene (better distinguishing of objects in the scene) and, after that, the fusion of multispectral and multiresolution images. Classification and image fusion can thus even be done with manually mosaicked and geocoded images; this article demonstrates this claim.

  13. A Precise Lane Detection Algorithm Based on Top View Image Transformation and Least-Square Approaches

    Directory of Open Access Journals (Sweden)

    Byambaa Dorj

    2016-01-01

    A promising key issue of future automobile development is self-driving technology. One of the challenges for intelligent self-driving is a lane-detecting and lane-keeping capability for advanced driver assistance systems. This paper introduces an efficient lane detection method designed based on a top view image transformation that converts an image from a front view to a top view space. After the top view image transformation, a Hough transformation technique is integrated, using a parabolic model of a curved lane, in order to estimate a parametric model of the lane in the top view space. The parameters of the parabolic model are estimated by utilizing a least-square approach. The experimental results show that the newly proposed lane detection method with the top view transformation is very effective in estimating sharp and curved lanes, leading to a precise self-driving capability.
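
    A hedged sketch of the geometric core of such a pipeline (OpenCV assumed; the input file, warp corners, and threshold are illustrative placeholders, and the Hough voting stage is omitted for brevity): warp to a top view, collect bright lane pixels, and least-squares fit the parabolic model x = ay² + by + c.

        # Top-view warp and least-squares parabolic lane fit (illustrative only).
        import cv2
        import numpy as np

        img = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # hypothetical front-view frame
        h, w = img.shape

        # Placeholder trapezoid in the front view mapped to a rectangle (top view).
        src = np.float32([[w*0.45, h*0.6], [w*0.55, h*0.6], [w*0.9, h], [w*0.1, h]])
        dst = np.float32([[w*0.25, 0], [w*0.75, 0], [w*0.75, h], [w*0.25, h]])
        top = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst), (w, h))

        mask = cv2.threshold(top, 200, 255, cv2.THRESH_BINARY)[1]  # bright lane markings
        ys, xs = np.nonzero(mask)

        # Parabolic lane model x = a*y^2 + b*y + c, solved in the least-squares sense.
        a, b, c = np.polyfit(ys.astype(float), xs.astype(float), 2)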

  14. Paraxial diffractive elements for space-variant linear transforms

    Science.gov (United States)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly tied to the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples: first, diffractive elements for the realization of different rotation operations are computed and, second, a Hough transform element is presented. The correct optical functioning of the elements is proved in computer simulation experiments.

  15. Discrete Fourier and wavelet transforms an introduction through linear algebra with applications to signal processing

    CERN Document Server

    Goodman, Roe W

    2016-01-01

    This textbook for undergraduate mathematics, science, and engineering students introduces the theory and applications of discrete Fourier and wavelet transforms using elementary linear algebra, without assuming prior knowledge of signal processing or advanced analysis. It explains how to use the Fourier matrix to extract frequency information from a digital signal and how to use circulant matrices to emphasize selected frequency ranges. It introduces discrete wavelet transforms for digital signals through the lifting method and illustrates through examples and computer explorations how these transforms are used in signal and image processing. Then the general theory of discrete wavelet transforms is developed via the matrix algebra of two-channel filter banks. Finally, wavelet transforms for analog signals are constructed based on filter bank results already presented, and the mathematical framework of multiresolution analysis is examined.

  16. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes

    Science.gov (United States)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2017-07-01

    Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.

  17. Coresident sensor fusion and compression using the wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image, and can be applied in either an ad hoc or a model-based manner. We compare results from commercial "fusion" software with the ad hoc wavelet approach. Results show that the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
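
    An illustrative wavelet-domain fusion rule in this spirit (PyWavelets assumed; a generic maximum-absolute-coefficient variant, not necessarily the scheme evaluated in the paper):

        # Wavelet-domain fusion of two coresident sensor images (illustrative).
        import numpy as np
        import pywt

        def fuse(img_a, img_b, wavelet="db2", level=3):
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]            # average the approximations
            for (ah, av, ad), (bh, bv, bd) in zip(ca[1:], cb[1:]):
                # Keep the detail coefficient with the larger magnitude.
                fused.append(tuple(np.where(np.abs(p) >= np.abs(q), p, q)
                                   for p, q in ((ah, bh), (av, bv), (ad, bd))))
            return pywt.waverec2(fused, wavelet)

        a = np.random.rand(128, 128)   # stand-in for sensor image A
        b = np.random.rand(128, 128)   # stand-in for sensor image B
        fused_image = fuse(a, b)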

  18. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    Science.gov (United States)

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representations of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best basis selection technique. In the online game experiment we obtained an average classification rate of 63.0% over three classes for fourteen naive subjects. The application of a best basis selection method helps to significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes.

  19. Inferring species richness and turnover by statistical multiresolution texture analysis of satellite imagery.

    Directory of Open Access Journals (Sweden)

    Matteo Convertino

    BACKGROUND: The quantification of species richness and species turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. METHODOLOGY/PRINCIPAL FINDINGS: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or [Formula: see text] diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species
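
    A minimal sketch of the pixel-intensity Shannon entropy used as the richness proxy (the bin count and intensity range are arbitrary choices, not values from the paper):

        # Shannon entropy of pixel intensities in an image region (illustrative).
        import numpy as np

        def shannon_entropy(pixels, bins=64):
            counts, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
            p = counts[counts > 0] / counts.sum()      # empirical probabilities
            return -(p * np.log2(p)).sum()

        patch = np.random.rand(256, 256)   # stand-in for a region of interest
        H = shannon_entropy(patch.ravel()) # higher entropy ~ richer intensity mixture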

  20. Real-time Multiresolution Crosswalk Detection with Walk Light Recognition for the Blind

    Directory of Open Access Journals (Sweden)

    ROMIC, K.

    2018-02-01

    Real-time image processing and object detection techniques have great potential to be applied in digital assistive tools for blind and visually impaired persons. In this paper, an algorithm for crosswalk detection and walk light recognition is proposed, with the main aim of helping a blind person when crossing the road. The proposed algorithm is optimized to work in real time on portable devices using standard cameras. Images captured by the camera are processed while the person is moving, and a decision about the detected crosswalk is provided as output, along with information about the walk light if one is present. The crosswalk detection method is based on multiresolution morphological image processing, while the walk light recognition is performed by a proposed 6-stage algorithm. The main contributions of this paper are accurate crosswalk detection with small processing time due to multiresolution processing, and the recognition of walk lights covering only a small number of pixels in the image. The experiment is conducted using images from video sequences captured in realistic situations at crossings. The results show 98.3% correct crosswalk detection and 89.5% correct walk light recognition, with an average processing speed of about 16 frames per second.

  1. A MULTIRESOLUTION METHOD FOR THE SIMULATION OF SEDIMENTATION IN INCLINED CHANNELS

    OpenAIRE

    Buerger, Raimund; Ruiz-Baier, Ricardo; Schneider, Kai; Torres, Hector

    2012-01-01

    An adaptive multiresolution scheme is proposed for the numerical solution of a spatially two-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This model consists in a version of the Stokes equations for incompressible fluid flow coupled with a hyperbolic conservation law for the local solids concentration. We study the process in an inclined, rectangular closed vessel, a configuration that gives rise to a well-known increase of settling rat...

  2. Detection of pulmonary nodules on lung X-ray images. Studies on multi-resolutional filter and energy subtraction images

    International Nuclear Information System (INIS)

    Sawada, Akira; Sato, Yoshinobu; Kido, Shoji; Tamura, Shinichi

    1999-01-01

    The purpose of this work is to prove the effectiveness of an energy subtraction image for the detection of pulmonary nodules, and the effectiveness of a multi-resolutional filter on an energy subtraction image for detecting pulmonary nodules. We also study factors influencing the accuracy of pulmonary nodule detection from the viewpoints of image types, digital filter types, and evaluation methods. As one type of image, we select the energy subtraction image, which removes bones such as ribs from the conventional X-ray image by utilizing the difference in X-ray absorption ratios between bones and soft tissue at different energies. Ribs and vessels are major causes of CAD errors in the detection of pulmonary nodules, and many studies have tried to solve this problem. We therefore select conventional X-ray images and energy subtraction X-ray images as the image types, and at the same time select the ∇²G (Laplacian of Gaussian) filter, the Min-DD (minimum directional difference) filter and our multi-resolutional filter as the digital filter types. We also select two evaluation methods and prove the effectiveness of the energy subtraction image, the effectiveness of the Min-DD filter on a conventional X-ray image, and the effectiveness of the multi-resolutional filter on an energy subtraction image. (author)
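
    A hedged sketch of a multi-scale Laplacian-of-Gaussian filter of the kind discussed (SciPy assumed; a generic multi-resolutional blob enhancer with arbitrary scales, not the authors' filter):

        # Multi-scale LoG filtering for blob-like nodule enhancement (illustrative).
        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def multiscale_log(image, sigmas=(2.0, 4.0, 8.0)):
            # Scale-normalized LoG responses; bright blobs give strong negative
            # values, so the sign is flipped and the maximum over scales is kept.
            stack = [-(s ** 2) * gaussian_laplace(image.astype(float), s)
                     for s in sigmas]
            return np.max(stack, axis=0)

        img = np.random.rand(512, 512)   # stand-in for an energy subtraction image
        response = multiscale_log(img)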

  3. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    Science.gov (United States)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  4. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between data fidelity and prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without requiring additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  5. Fourier-based quantification of renal glomeruli size using Hough transform and shape descriptors.

    Science.gov (United States)

    Najafian, Sohrab; Beigzadeh, Borhan; Riahi, Mohammad; Khadir Chamazkoti, Fatemeh; Pouramir, Mahdi

    2017-11-01

    Analysis of glomeruli geometry is important in the histopathological evaluation of renal microscopic images. Due to the disparity in shape and size of even the glomeruli of the same kidney, automatic detection of these renal objects is not an easy task. Although manual measurements are time consuming and at times not very accurate, they are commonly used in medical centers. In this paper, a new method based on the Fourier transform, following the use of several shape descriptors, is proposed to detect these objects and their geometrical parameters. To reach this goal, a database of 400 regions is selected randomly, 200 of which are parts of glomeruli while the other 200 do not belong to renal corpuscles. An ROC curve is used to decide which descriptor classifies the two groups better. The f_measure, which combines both the tpr (true positive rate) and the fpr (false positive rate), is also proposed to select the optimal threshold for each descriptor. A combination of three parameters (solidity, eccentricity, and the mean squared error of a fitted ellipse) provided the better result in terms of f_measure for distinguishing the desired regions. Then, the Fourier transform of the outer edges is calculated to form a complete curve out of the separated region(s). The generality of the proposed model is verified by cross validation, which resulted in a tpr of 94% and an fpr of 5%. Calculations of the glomerulus and Bowman's space with the algorithm are also compared with non-automatic measurements by a renal pathologist, yielding errors of 5.9%, 5.4%, and 6.26% in the calculation of the capsule area, Bowman space, and glomeruli area, respectively. Having been tested on different glomeruli with various shapes, the experimental results show the robustness and reliability of the method. Therefore, it could be used to characterize renal diseases and glomerular disorders by measuring morphological changes accurately and expeditiously.
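
    An illustration of the descriptor step (scikit-image assumed; the thresholds are placeholders, and the ellipse-fit error and Fourier contour completion are omitted): solidity and eccentricity of candidate regions can be read directly from region properties.

        # Shape descriptors for candidate regions (illustrative only).
        import numpy as np
        from skimage.measure import label, regionprops

        # Toy binary mask containing one elliptical region.
        mask = np.zeros((200, 200), dtype=bool)
        yy, xx = np.mgrid[:200, :200]
        mask[((yy - 100) / 60.0) ** 2 + ((xx - 100) / 40.0) ** 2 <= 1] = True

        for region in regionprops(label(mask)):
            # Placeholder cutoffs; in practice these come from ROC analysis.
            is_candidate = region.solidity > 0.9 and region.eccentricity < 0.8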

  6. Multiresolution 3-D reconstruction from side-scan sonar images.

    Science.gov (United States)

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

  7. A VIRTUAL GLOBE-BASED MULTI-RESOLUTION TIN SURFACE MODELING AND VISUALIZATION METHOD

    Directory of Open Access Journals (Sweden)

    X. Zheng

    2016-06-01

    The integration and visualization of geospatial data on a virtual globe play a significant role in the understanding and analysis of Earth surface processes. However, current virtual globes always sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues their functionality for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. Firstly, we introduce cartographic principles to formulize the level of detail (LOD) generation so that the TIN model in each layer is controlled with a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into each TIN layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method can achieve a high-fidelity terrain representation, while producing high-quality underlying data that satisfy the demands of scientific analysis.

  8. Experimental Evaluation of Integral Transformations for Engineering Drawings Vectorization

    Directory of Open Access Journals (Sweden)

    Vaský Jozef

    2014-12-01

    Full Text Available The concept of digital manufacturing supposes application of digital technologies in the whole product life cycle. Direct digital manufacturing includes such information technology processes, where products are directly manufactured from 3D CAD model. In digital manufacturing, engineering drawing is replaced by CAD product model. In the contemporary practice, lots of engineering paper-based drawings are still archived. They could be digitalized by scanner and stored to one of the raster graphics format and after that vectorized for interactive editing in the specific software system for technical drawing or for archiving in some of the standard vector graphics file format. The vector format is suitable for 3D model generating, too.The article deals with using of selected integral transformations (Fourier, Hough in the phase of digitalized raster engineering drawings vectorization.

  9. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    Science.gov (United States)

    2002-01-01

    their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested

  10. Automated visual inspection of brake shoe wear

    Science.gov (United States)

    Lu, Shengfang; Liu, Zhen; Nan, Guo; Zhang, Guangjun

    2015-10-01

    With the rapid development of high-speed railways, automated fault inspection is necessary to ensure trains' operational safety. Visual technology is receiving increasing attention in fault detection and maintenance. For a linear CCD camera, image alignment is the first step in fault detection. To increase the speed of image processing, an improved scale invariant feature transform (SIFT) method is presented: the image is divided into multiple levels of different resolution, and features are extracted from the lowest resolution level upward until sufficient SIFT key points are obtained. At that level, the image is registered and aligned quickly. In the inspection stage, we devote our efforts to finding faults in the brake shoe, which is one of the key components of the brake system on electric multiple unit (EMU) trains. Early warning of its wear limit is very important in fault detection. In this paper, we propose an automatic inspection approach to detect faults of the brake shoe. Firstly, we use multi-resolution pyramid template matching to quickly locate the brake shoe. Then, we employ the Hough transform to detect the circles of the bolts in the brake region. Due to the rigidity of the structure, we can identify whether the brake shoe has a fault. The experiments demonstrate that the proposed approach performs well and can meet the needs of practical applications.
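
    A hedged sketch of the bolt-circle detection step with the circular Hough transform (OpenCV assumed; the synthetic image and parameters are illustrative, and the pyramid template-matching stage that locates the brake-shoe region is omitted):

        # Circular Hough transform for bolt detection in a brake region (illustrative).
        import cv2
        import numpy as np

        # Synthetic stand-in for a cropped brake region with three bolt outlines.
        region = np.zeros((200, 300), dtype=np.uint8)
        for cx in (75, 150, 225):
            cv2.circle(region, (cx, 100), 15, 255, 2)
        region = cv2.medianBlur(region, 5)

        circles = cv2.HoughCircles(region, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                                   param1=100, param2=20, minRadius=10, maxRadius=20)

        # With the rigid geometry of the assembly, a missing or displaced bolt
        # circle relative to the expected layout flags a potential fault.
        n_bolts = 0 if circles is None else circles.shape[1]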

  11. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    Science.gov (United States)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.

  12. Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification

    Science.gov (United States)

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim

    2018-02-01

    In this paper, Curvelet-based local attributes, the Curvelet-local configuration pattern (C-LCP), are introduced for the characterization of mammographic masses as benign or malignant. Among the different anomalies, such as microcalcification, bilateral asymmetry, architectural distortion, and masses, the reason for targeting mass lesions is their variation in shape, size, and margin, which makes diagnosis a challenging task. Being efficient for classification, the multi-resolution property of the Curvelet transform is exploited, and local information is extracted from the coefficients of each subband using the local configuration pattern (LCP). The microscopic measures in concatenation with the local textural information provide more discriminating capability than either individually. The measures embody the magnitude information along with the pixel-wise relationships among neighboring pixels. The performance analysis is conducted with 200 mammograms from the DDSM database containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method, and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some of the state-of-the-art competing methods.

  13. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    Science.gov (United States)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  14. Design and application of discrete wavelet packet transform based multiresolution controller for liquid level system.

    Science.gov (United States)

    Paul, Rimi; Sengupta, Anindita

    2017-11-01

    A new controller based on the discrete wavelet packet transform (DWPT) for a liquid level system (LLS) is presented here. This controller generates the control signal using node coefficients of the error signal, which capture many implicit phenomena such as process dynamics, measurement noise, and the effect of external disturbances. Through simulation results on the LLS problem, this controller is shown to perform faster than both the discrete wavelet transform based controller and a conventional proportional integral (PI) controller. It is also more efficient in its ability to provide better noise rejection. To overcome the wind-up phenomenon caused by actuator saturation, an anti-wind-up technique is applied to the conventional PI controller and compared to the wavelet packet transform based controller. In this case too, the packet controller is found to be better than the others. The work has been extended to an analogous first-order RC plant as well as a second-order plant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
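
    A minimal sketch of the node-coefficient idea, assuming PyWavelets; the gain schedule per packet node is invented for illustration, since the paper's control law is not reproduced here:

        import pywt

        def dwpt_control_signal(error, gains, wavelet='db4', level=3):
            # Decompose the error signal into level-`level` packet nodes,
            # weight each node's coefficients, and reconstruct the signal.
            wp = pywt.WaveletPacket(data=error, wavelet=wavelet,
                                    mode='symmetric', maxlevel=level)
            for node in wp.get_level(level, order='natural'):
                node.data = gains.get(node.path, 1.0) * node.data
            return wp.reconstruct(update=False)[:len(error)]

        # e.g. emphasize slow process dynamics and attenuate measurement noise:
        # u = dwpt_control_signal(e, {'aaa': 1.5, 'ddd': 0.1})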

  15. Multiresolution molecular mechanics: Surface effects in nanoscale materials

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Qingcheng, E-mail: qiy9@pitt.edu; To, Albert C., E-mail: albertto@pitt.edu

    2017-05-01

    Surface effects have been observed to contribute significantly to the mechanical response of nanoscale structures. The recently proposed energy-based coarse-grained atomistic method Multiresolution Molecular Mechanics (MMM) (Yang and To, 2015) is applied to capture surface effects in nanosized structures by designing a surface summation rule SR^S within the framework of MMM. Combined with the previously proposed bulk summation rule SR^B, the MMM summation rule SR^MMM is completed. SR^S and SR^B are consistently formulated within SR^MMM for general finite element shape functions. Analogous to quadrature rules in the finite element method (FEM), the key to the good performance of SR^MMM is that the order or distribution of energy for the coarse-grained atomistic model is mathematically derived such that the number, position, and weight of quadrature-type (sampling) atoms can be determined. Mathematically, the derived energy distribution of the surface area differs from that of the bulk region. Physically, the difference is due to the fact that surface atoms lack neighboring bonds. As such, SR^S and SR^B are employed for surface and bulk domains, respectively. Two- and three-dimensional numerical examples using 4-node bilinear quadrilateral, 8-node quadratic quadrilateral, and 8-node hexahedral meshes are employed to verify and validate the proposed approach. It is shown that MMM with SR^MMM accurately captures corner, edge, and surface effects with less than 0.3% of the degrees of freedom of the original atomistic system, compared against full atomistic simulation. The effectiveness of SR^MMM for high-order elements is also demonstrated by employing the 8-node quadratic quadrilateral to solve a beam bending problem considering surface effects. In addition, the sampling error introduced with SR^MMM, analogous to the numerical integration error of quadrature rules in FEM, is very small.

  16. Study on spillover effect of copper futures between LME and SHFE using wavelet multiresolution analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Research on information spillover effects between financial markets remains active in the economic community. A Granger-type model has recently been used to investigate the spillover between the London Metal Exchange (LME) and the Shanghai Futures Exchange (SHFE); however, possible correlations between the futures price and return on different time scales have been ignored. In this paper, wavelet multiresolution decomposition is used to investigate the spillover effects of copper futures returns between the two markets. The daily return time series are decomposed into 2^n (n=1, ..., 6) frequency bands through wavelet multiresolution analysis, and the correlation between the two markets is studied with the decomposed data. It is shown that the high-frequency detail components carry much more energy than the low-frequency smooth components. The relation between copper futures daily returns in the LME and those in the SHFE differs across time scales. Fluctuations of the copper futures daily returns in the LME have a large effect on those in the SHFE at the 32-day scale, but a small effect at high-frequency scales. There is also evidence of strong effects between the LME and the SHFE for monthly responses of the copper futures but not for daily responses.
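
    A sketch of the scale-wise correlation computation, assuming PyWavelets; the undecimated (stationary) transform is used here so detail coefficients stay time-aligned across levels, which is one reasonable reading of the decomposition described above:

        import numpy as np
        import pywt

        def scalewise_correlation(r_lme, r_shfe, wavelet='db4', level=6):
            # Truncate to a multiple of 2**level, as pywt.swt requires.
            n = (min(len(r_lme), len(r_shfe)) // 2**level) * 2**level
            d1 = pywt.swt(np.asarray(r_lme[:n], float), wavelet, level=level)
            d2 = pywt.swt(np.asarray(r_shfe[:n], float), wavelet, level=level)
            # pywt.swt lists (cA, cD) pairs from the deepest level down;
            # key each detail-band correlation by its scale in days: 64, ..., 2.
            return {2**(level - i): np.corrcoef(a[1], b[1])[0, 1]
                    for i, (a, b) in enumerate(zip(d1, d2))}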

  17. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  18. EYE CONTROLLED SWITCHING USING CIRCULAR HOUGH TRANSFORM

    OpenAIRE

    Sagar Lakhmani

    2014-01-01

    The paper presents a hands-free interface to electrical appliances and devices, intended to replace conventional switching devices for use by the disabled. It is a new way to interact with the electrical and electronic devices that we use in our daily life. The paper illustrates how the movement of the eye cornea and blinking can be used for switching devices. A basic circle detection algorithm is used to determine the position of the eye. Eye blinking is used...
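
    A minimal circular-Hough localization sketch with OpenCV; the file name and all detector parameters are placeholders, and the switching logic itself is only indicated in comments:

        import cv2
        import numpy as np

        gray = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)   # hypothetical frame
        gray = cv2.medianBlur(gray, 5)            # smooth before Hough voting
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                   param1=100, param2=30,
                                   minRadius=10, maxRadius=60)
        if circles is not None:
            x, y, r = np.round(circles[0, 0]).astype(int)    # strongest circle
            # (x, y) gives the eye position; mapping positions (or the absence
            # of a circle during a blink) to on/off commands drives the switch.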

  19. Weighted least squares phase unwrapping based on the wavelet transform

    Science.gov (United States)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. It usually leads to a large sparse linear equation system, commonly solved with the Gauss-Seidel relaxation iterative method. However, that method is impractical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition is obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.

  20. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    Science.gov (United States)

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  1. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations targeting multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g., coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained at fine and low resolutions assuming gappy data sets. We investigate the influence of various parameters of this framework, including the correlation kernel, the size of the buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g., in continuum-atomistic simulations.

  2. Multiresolution wavelet analysis of heartbeat intervals discriminates healthy patients from those with cardiac pathology

    OpenAIRE

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1997-01-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeats, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as either belonging to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of...

  3. A novel application of the S-transform in removing powerline interference from biomedical signals

    International Nuclear Information System (INIS)

    Huang, Chien-Chun; Young, Ming-Shing; Liang, Sheng-Fu; Shaw, Fu-Zen

    2009-01-01

    Powerline interference always disturbs recordings of biomedical signals, and numerous methods have been developed to reduce it. However, most of these techniques not only reduce the interference but also attenuate the 60 Hz power of the biomedical signals themselves. In the present study, we applied the S-transform, which provides the absolute phase of each frequency in a multi-resolution time-frequency analysis, to reduce 60 Hz interference. According to results from an electrocardiogram (ECG) to which simulated 60 Hz noise was added, the S-transform de-noising process restored a power spectrum identical to that of the original ECG, coincident with a significant reduction of the 60 Hz interference. Moreover, the S-transform de-noised the signal in an intensity-independent manner when reducing the 60 Hz interference. On both a real ECG signal from the MIT database and natural brain activity contaminated with 60 Hz interference, the S-transform also outperformed a notch filter in reducing noise while preserving the signal. Based on these data, a novel application of the S-transform for removing powerline interference is established.
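
    For illustration, a compact frequency-domain implementation of the discrete S-transform, plus a crude inverse that zeroes the rows around 60 Hz; the paper's phase-based, intensity-independent correction is more refined than simply deleting rows, and the sampling rate below is an assumption:

        import numpy as np

        def stockwell(x):
            # Discrete S-transform via FFT; row n is frequency index n,
            # columns are time samples.
            N = len(x)
            X = np.fft.fft(x)
            k = np.fft.fftfreq(N) * N             # signed frequency indices
            S = np.zeros((N // 2 + 1, N), dtype=complex)
            S[0] = np.mean(x)                     # zero-frequency row
            for n in range(1, N // 2 + 1):
                gauss = np.exp(-2 * np.pi**2 * k**2 / n**2)
                S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
            return S

        def suppress_rows_and_invert(S, rows):
            # Inverse uses the identity sum_t S[n, t] = X[n], then rebuilds
            # the negative frequencies by Hermitian symmetry.
            S = S.copy()
            S[rows] = 0
            N = S.shape[1]
            Xp = S.sum(axis=1)
            full = np.zeros(N, dtype=complex)
            full[:N // 2 + 1] = Xp
            full[N // 2 + 1:] = np.conj(Xp[1:(N + 1) // 2][::-1])
            return np.real(np.fft.ifft(full))

        # fs = 1000.0                             # assumed sampling rate (Hz)
        # n60 = int(round(60 * len(ecg) / fs))
        # clean = suppress_rows_and_invert(stockwell(ecg), [n60 - 1, n60, n60 + 1])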

  4. Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations

    Energy Technology Data Exchange (ETDEWEB)

    Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-11-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward

  5. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus

    2012-11-01

    We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM.

  6. Biomolecular surface construction by PDE transform.

    Science.gov (United States)

    Zheng, Qiong; Yang, Siyang; Wei, Guo-Wei

    2012-03-01

    virus surface capsid. Virus surface morphologies of different resolutions are attained by adjusting the propagation time. Therefore, the present PDE transform provides a multiresolution analysis in the surface visualization. Extensive numerical experiment and comparison with an established surface model indicate that the present PDE transform is a robust, stable, and efficient approach for biomolecular surface generation in Cartesian meshes. Copyright © 2012 John Wiley & Sons, Ltd.

  7. A multiresolution image based approach for correction of partial volume effects in emission tomography

    International Nuclear Information System (INIS)

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D

    2006-01-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'a trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI in
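
    A rough sketch of the detail-injection step, assuming PyWavelets' stationary 2-D transform as a stand-in for the 'a trous' algorithm; the coupling coefficient alpha is an assumed global constant, whereas the paper infers a model between the H and L details:

        import pywt

        def pve_correct(low_res, high_res, wavelet='haar', level=2, alpha=0.5):
            # Both images must be co-registered on the same grid, with side
            # lengths that are multiples of 2**level.
            cL = pywt.swt2(low_res, wavelet, level=level)
            cH = pywt.swt2(high_res, wavelet, level=level)
            # Keep L's approximations; add scaled high-frequency details of H.
            fused = [(aL, tuple(dL + alpha * dH
                                for dL, dH in zip(detL, detH)))
                     for (aL, detL), (aH, detH) in zip(cL, cH)]
            return pywt.iswt2(fused, wavelet)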

  8. Psychoacoustic Music Analysis Based on the Discrete Wavelet Packet Transform

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Psychoacoustical computational models are necessary for the perceptual processing of acoustic signals and have contributed significantly to the development of highly efficient audio analysis and coding. In this paper, we present an approach for the psychoacoustic analysis of musical signals based on the discrete wavelet packet transform. The proposed method mimics the multiresolution properties of the human ear more closely than other techniques, and it includes simultaneous and temporal auditory masking. Experimental results show that this method provides better masking capabilities and reduces the signal-to-masking ratio substantially more than other approaches, without introducing audible distortion. This model can lead to greater audio compression by permitting further bit rate reduction, and to more secure watermarking by providing greater signal space for information hiding.

  9. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms, so a reliable and efficient stain colour deconvolution approach is fundamental to a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. The approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones, and then estimates the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments comparing the proposed method with recent state-of-the-art methods, and demonstrate the robustness of the approach using three datasets of scanned slides, prepared in different labs using different scanners.

  10. Suitability of an MRMCE (multi-resolution minimum cross entropy) algorithm for online monitoring of a two-phase flow

    International Nuclear Information System (INIS)

    Wang, Qi; Wang, Huaxiang; Xin, Shan

    2011-01-01

    Flow regimes are important characteristics for describing two-phase flows, and measurement of two-phase flow parameters is becoming increasingly important in many industrial processes. Computerized tomography (CT) has been applied to two-phase/multi-phase flow measurement in recent years. Image reconstruction in CT often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive, especially for online flow regime identification. In this paper, minimum cross entropy reconstruction based on multi-resolution processing (MRMCE) is presented for oil-gas two-phase flow regime identification. A regularized MCE solution is obtained using the simultaneous multiplicative algebraic reconstruction technique (SMART) at a coarse resolution level, where the important information of the reconstructed image is contained. The solution at the finest resolution is then obtained by inverse fast wavelet transformation. Both computer simulations and static/dynamic experiments were carried out for typical flow regimes. The results indicate that the proposed method dramatically reduces the computational time and improves the quality of the reconstructed image with suitable decomposition levels, compared with the single-resolution maximum likelihood expectation maximization (MLEM), alternating minimization (AM), Landweber, iterative least square technique (ILST) and minimum cross entropy (MCE) methods. The MRMCE method is therefore suitable for identification of dynamic two-phase flow regimes.

  11. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity: (i) we exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category, and (ii) we utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
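
    A toy two-stream sketch of the coarse/fine idea in PyTorch; the backbones are deliberately tiny placeholders (the paper uses large CNNs), and score averaging is one simple fusion choice:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiResolutionNet(nn.Module):
            # One stream sees a downsampled view (global layout), the other
            # the full-resolution view (local objects); logits are averaged.
            def __init__(self, num_classes, coarse_size=112):
                super().__init__()
                self.coarse_size = coarse_size
                def backbone():
                    return nn.Sequential(
                        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(64, num_classes))
                self.coarse, self.fine = backbone(), backbone()

            def forward(self, x):
                xc = F.interpolate(x, size=self.coarse_size,
                                   mode='bilinear', align_corners=False)
                return (self.coarse(xc) + self.fine(x)) / 2   # fused logits

        # logits = MultiResolutionNet(num_classes=365)(torch.randn(2, 3, 224, 224))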

  12. Aerial Imagery and LIDAR Data Fusion for Unambiguous Extraction of Adjacent Level-Buildings Footprints

    Science.gov (United States)

    Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.

    2017-09-01

    Our paper presents a new approach to identify and extract building footprints using aerial images and LiDAR data. Employing an edge detector, our method first extracts the outer boundary of buildings, and then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings in a building block, it extracts the building footprints located in each block. The proposed method first recognizes the predominant orientation of a building block using the Hough transform, and then rotates the block according to the inverted complement of the dominant line's angle, so that the block lies horizontally. Afterwards, with another Hough transform, vertical lines, which are candidate building boundaries, are extracted and the final building footprints within a block are obtained. The proposed algorithm is implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints are compared to the corresponding areas in the reference data, giving a mean error of 7.43 m². Qualitative and quantitative evaluations also suggest that the proposed algorithm yields acceptable results for automated, precise extraction of building footprints.
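
    An OpenCV sketch of the rotate-by-dominant-orientation step; the input is a hypothetical binary mask of one building block, the thresholds are placeholders, and sign conventions for the rotation depend on the image axes:

        import cv2
        import numpy as np

        def deskew_block(block_mask):
            # block_mask: uint8 binary image of a single building block.
            edges = cv2.Canny(block_mask, 50, 150)
            lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
            if lines is None:
                return block_mask, 0.0
            theta = lines[0][0][1]               # normal angle of strongest line
            angle = np.degrees(theta) - 90.0     # complement -> line horizontal
            h, w = block_mask.shape
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            return cv2.warpAffine(block_mask, M, (w, h)), angle

        # A second HoughLines pass on the deskewed block, restricted to
        # near-vertical theta, then isolates the shared boundaries between
        # adjacent buildings within the block.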

  13. Applications of wavelet transforms for nuclear power plant signal analysis

    International Nuclear Information System (INIS)

    Seker, S.; Turkcan, E.; Upadhyaya, B.R.; Erbay, A.S.

    1998-01-01

    The safety of Nuclear Power Plants (NPPs) may be enhanced by the timely processing of information derived from multiple process signals. The most widely used technique in signal analysis applications is the Fourier transform in the frequency domain, used to generate power spectral densities (PSD). However, the Fourier transform is global in nature and will obscure any non-stationary signal feature. Lately, a powerful technique called the wavelet transform has been developed. This transform uses certain basis functions for representing the data in an effective manner, with capability for sub-band analysis and providing time-frequency localization as needed. This paper presents a brief overview of wavelets applied in the nuclear industry for signal processing and plant monitoring, and summarizes the basic theory of wavelets. To illustrate the application of wavelet transforms, data were acquired from the operating nuclear power plant Borssele in the Netherlands. The experimental data consist of various signals in the power plant, selected from stationary power operation. Their frequency characteristics and mutual relations were investigated using the MATLAB signal processing and wavelet toolboxes, computing their PSDs and coherence functions by multi-resolution analysis. The results indicate that the sub-band PSD matches the original signal PSD and enhances the estimation of coherence functions. The wavelet analysis demonstrates the feasibility of application to stationary signals to provide better estimates in the frequency band of interest as compared to the classical FFT approach. (author)

  14. Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments

    Science.gov (United States)

    Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.

    2017-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. Available for both

  15. Telescopic multi-resolution augmented reality

    Science.gov (United States)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve more than multi-scale, telescopic visualization of real and virtual information; it also supports conducting thought experiments through AR. As a result of the consistency, this framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, visualized pedagogically in zoom-in-and-out, consistent, multi-scale approximations.

  16. Accurate reconstruction in digital holographic microscopy using Fresnel dual-tree complex wavelet transform

    Science.gov (United States)

    Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min

    2018-02-01

    Digital holography is a promising measurement method in the fields of bio-medicine and micro-electronics, but the captured images are severely polluted by speckle noise because of optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. The algorithm is shift-invariant and capable of obtaining sparse representations of the diffracted signals of salient features; thus it is well suited for multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. The method effectively removes speckle noise without destroying salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration method and the Fresnel wavelet transform with compressive sensing, demonstrating its superiority in topography reconstruction and speckle removal.

  17. Research on Methods of Infrared and Color Image Fusion Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zhao Rentao

    2014-06-01

    The imaging features of infrared and color images differ significantly, yet their fusion carries valuable complementary information. In this paper, based on the characteristics of infrared and color images, the wavelet transform is first applied to the luminance component of both images. At each resolution level, the regional variance is taken as the activity measure and the regional variance ratio as the matching measure, and the fusion image is enhanced during integration; the fused image is then obtained by a final synthesis module and the multi-resolution inverse transform. The experimental results show that the fused image obtained by the proposed method is better than other methods at retaining the useful information of the original infrared image and the color information of the original color image. In addition, the fused image has stronger adaptability and better visual effect.
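
    A condensed sketch of the fusion rule, assuming PyWavelets and SciPy; local variance over a square window stands in for the paper's regional activity/matching measures, and the window size is assumed:

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def fuse_luminance(ir, lum, wavelet='db2', level=3, win=9):
            # Images must share the same shape (IR vs. color luminance).
            cA_ir, *d_ir = pywt.wavedec2(ir, wavelet, level=level)
            cA_lu, *d_lu = pywt.wavedec2(lum, wavelet, level=level)

            def local_var(a):                     # windowed variance map
                m = uniform_filter(a, size=win)
                return np.maximum(uniform_filter(a * a, size=win) - m * m, 0)

            w_ir, w_lu = local_var(cA_ir), local_var(cA_lu)
            cA = (w_ir * cA_ir + w_lu * cA_lu) / (w_ir + w_lu + 1e-12)
            details = [tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                             for a, b in zip(da, db))      # max-abs high-pass
                       for da, db in zip(d_ir, d_lu)]
            return pywt.waverec2([cA, *details], wavelet)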

  18. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.

  19. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    Science.gov (United States)

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  20. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  1. A Biologically Motivated Multiresolution Approach to Contour Detection

    Directory of Open Access Journals (Sweden)

    Alessandro Neri

    2007-01-01

    Standard edge detectors react to all local luminance changes, irrespective of whether they are due to the contours of the objects represented in a scene or due to natural textures like grass, foliage, water, and so forth. Moreover, edges due to texture are often stronger than edges due to object contours. This implies that further processing is needed to discriminate object contours from texture edges. In this paper, we propose a biologically motivated multiresolution contour detection method using Bayesian denoising and a surround inhibition technique. Specifically, the proposed approach deploys computation of the gradient at different resolutions, followed by Bayesian denoising of the edge image. Then, a biologically motivated surround inhibition step is applied in order to suppress edges that are due to texture. We propose an improvement of the surround suppression used in previous works. Finally, a contour-oriented binarization algorithm is used, relying on the observation that object contours lead to long connected components rather than to short rods obtained from textures. Experimental results show that our contour detection method outperforms standard edge detectors as well as other methods that deploy inhibition.

  2. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    Directory of Open Access Journals (Sweden)

    Domingues M. O.

    2013-12-01

    We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell-average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with a mixed hyperbolic-parabolic correction is used to control the divergence of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.

  3. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    Science.gov (United States)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework utilizing an area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. The low-pass bands are then fused with a weighted average based on the area standard deviation rather than the simple averaging rule, while the high-pass bands are merged with the max-absolute fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.

  4. The multi-resolution capability of Tchebichef moments and its applications to the analysis of fluorescence excitation-emission spectra

    Science.gov (United States)

    Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin

    2018-01-01

    Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the application of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets for the first time (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine; data set 2 included vitamin B1, vitamin B2 and vitamin B6). By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. Owing to this multi-resolution capability, the overlapping problem was solved and the information of chemicals and scattering was separated. The results demonstrate that the Tchebichef moment method is very effective, providing a promising tool for the analysis of EEM spectra. Its applications could be extended to complex systems such as biological fluids, food, and the environment, to deal with practical problems in other spectra (overlapped peaks, unknown interferences, baseline drifts, and so on).

  5. Multiresolution Wavelet Analysis of Heartbeat Intervals Discriminates Healthy Patients from Those with Cardiac Pathology

    Science.gov (United States)

    Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.

    1998-02-01

    We applied multiresolution wavelet analysis to the sequence of times between human heartbeats ( R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
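
    The discriminant reduces to one number per patient; a minimal sketch with PyWavelets (the wavelet family and any decision threshold are assumptions):

        import numpy as np
        import pywt

        def rr_wavelet_width(rr_intervals, wavelet='db2', m=16):
            # Width (std) of the detail coefficients at dyadic scale m
            # heartbeats; m = 16 or 32 lies in the reported window.
            level = int(np.log2(m))
            coeffs = pywt.wavedec(np.asarray(rr_intervals, float),
                                  wavelet, level=level)
            return np.std(coeffs[1])             # details at the deepest level

        # Widths below an assumed threshold would flag heart-failure patients,
        # per the disjoint-sets observation above.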

  6. A Fovea Localization Scheme Using Vessel Origin-Based Parabolic Model

    Directory of Open Access Journals (Sweden)

    Chun-Yuan Yu

    2014-09-01

    Located at the center of the macula, the fovea plays an important role in computer-aided diagnosis. To locate the fovea, this paper proposes a vessel origin (VO)-based parabolic model, which takes the VO as the vertex of the parabola-like vasculature. Image processing steps are applied to accurately locate the fovea on retinal images. First, the morphological gradient and the circular Hough transform are used to find the optic disc. The vessel structure is then segmented with a line detector. Based on the characteristics of the VO, four features of the VO are extracted, and a Bayesian classification procedure is applied. Once the VO is identified, the VO-based parabolic model locates the fovea. To find the best-fitting parabola and the symmetry axis of the retinal vessels, a Shift-and-Rotation (SR) Hough transform, which combines the Hough transform with shifts and rotations of the coordinates, is presented. Two public databases of retinal images, DRIVE and STARE, are used to evaluate the proposed method. The experimental results show that the average Euclidean distances between the located fovea and the fovea marked by experts in the two databases are 9.8 pixels and 30.7 pixels, respectively. These results are better than those of other methods and thus provide more reliable macular detection for further disease discovery.

  7. Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery

    OpenAIRE

    Dal’Asta, Ana Paula; Brigatti, Newton; Amaral, Silvana; Escada, Maria Isabel Sobral; Monteiro, Antonio Miguel Vieira

    2012-01-01

    Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of Landsat-TM5 (Thematic Mapper 5) image (30m spatial resolution). High-spatial-...

  8. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    International Nuclear Information System (INIS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE

  9. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    Science.gov (United States)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE
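
    A sketch of the key prior quantity, assuming PyWavelets: joint entropy accumulated over matching wavelet subbands of the PET estimate and the anatomical image (the MAP gradient terms used in reconstruction are not shown):

        import numpy as np
        import pywt

        def multires_joint_entropy(pet, mri, wavelet='db2', level=2, bins=64):
            # Images must share a grid; JE is summed over subband pairs,
            # injecting spatial structure that intensity-only JE ignores.
            cp = pywt.wavedec2(pet, wavelet, level=level)
            cm = pywt.wavedec2(mri, wavelet, level=level)

            def subbands(c):
                yield c[0]                       # approximation band
                for det in c[1:]:
                    yield from det               # (cH, cV, cD) per level

            je = 0.0
            for sp, sm in zip(subbands(cp), subbands(cm)):
                h, _, _ = np.histogram2d(sp.ravel(), sm.ravel(), bins=bins)
                p = h[h > 0] / h.sum()
                je += -np.sum(p * np.log(p))     # joint entropy in nats
            return je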

  10. Collaborative Proposal: Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald

    2013-04-15

    Despite the great interest in regional modeling for both weather and climate applications, regional modeling is not yet at the stage where it can be used routinely and effectively for climate modeling of the ocean. The overarching goal of this project is to transform how climate models are used by developing and implementing a robust, efficient, and accurate global approach to regional ocean modeling. To achieve this goal, we will use theoretical and computational means to resolve several basic modeling and algorithmic issues. The first task is to develop techniques for transitioning between parameterized and high-fidelity regional ocean models as the discretization grid transitions from coarse to fine regions. The second task is to develop estimates for the error in scientifically relevant quantities of interest that provide a systematic way to automatically determine where refinement is needed in order to obtain accurate simulations of dynamics and tracer transport in regional ocean models. The third task is to develop efficient, accurate, and robust time-stepping schemes for the variable spatial resolution discretizations used in regional ocean models of dynamics and tracer transport. The fourth task is to develop frequency-dependent eddy viscosity finite element and discontinuous Galerkin methods and study their performance and effectiveness for simulation of dynamics and tracer transport in regional ocean models. These four tasks share common difficulties and will be approached using a common computational and mathematical toolbox. This is a multidisciplinary project involving faculty and postdocs from Colorado State University, Florida State University, and Penn State University, along with scientists from Los Alamos National Laboratory. The completion of these tasks will go a long way toward meeting our goal of developing superior regional ocean models that will transform how climate system models are used.

  11. Multiresolution molecular mechanics: Implementation and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited during force computation. Finally, an adaptive MMM analysis of a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  12. SEGMENTATION AND QUALITY ANALYSIS OF LONG RANGE CAPTURED IRIS IMAGE

    Directory of Open Access Journals (Sweden)

    Anand Deshpande

    2016-05-01

    Full Text Available Iris segmentation plays a major role in increasing the performance of an iris recognition system. This paper proposes a novel method for segmenting iris images to extract the iris region from long-range captured eye images, and an approach to select the best iris frame from the iris polar image sequences by analyzing the quality of the iris polar images. The quality of an iris image is determined by the frequency components present in the iris polar images. The experiments are carried out on CASIA long-range captured iris image sequences. The proposed segmentation method is compared with Hough transform based segmentation and is found to give higher segmentation accuracy than the Hough transform.

  13. Identifying Spatial Units of Human Occupation in the Brazilian Amazon Using Landsat and CBERS Multi-Resolution Imagery

    Directory of Open Access Journals (Sweden)

    Maria Isabel Sobral Escada

    2012-01-01

    Full Text Available Every spatial unit of human occupation is part of a network structuring an extensive process of urbanization in the Amazon territory. Multi-resolution remote sensing data were used to identify and map human presence and activities in the Sustainable Forest District of the Cuiabá-Santarém highway (BR-163), west of Pará, Brazil. The limits of spatial units of human occupation were mapped based on digital classification of a Landsat-TM5 (Thematic Mapper 5) image (30 m spatial resolution). High-spatial-resolution CBERS-HRC (China-Brazil Earth Resources Satellite-High-Resolution Camera) images (5 m) merged with CBERS-CCD (Charge Coupled Device) images (20 m) were used to map spatial arrangements inside each populated unit, describing intra-urban characteristics. Fieldwork data validated and refined the classification maps that supported the categorization of the units. A total of 133 spatial units were individualized, comprising population centers such as municipal seats, villages and communities, and units of human activities, such as sawmills, farmhouses, landing strips, etc. From the high-resolution analysis, 32 population centers were grouped into four categories, described according to their level of urbanization and spatial organization as: structured, recent, established and dependent on connectivity. This multi-resolution approach provided spatial information about the urbanization process and organization of the territory. It may be extended to other areas or be further used to devise a monitoring system, contributing to the discussion of public policy priorities for sustainable development in the Amazon.

  14. Reconstruction and calibration strategies for the LHCb RICH detector

    CERN Multimedia

    2006-01-01

    - LHCb particle identification - LHCb ring pattern recognition algorithm requirements - RICH pattern recognition - Cherenkov angle reconstruction online - Online PID - Hough transform - Metropolis-Hastings Markov chains - PID online: physics performances - RICH PID calibration

  15. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    Science.gov (United States)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold simultaneously increasing from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no additional ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
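
    To make the iterative residual test concrete, here is a minimal Python sketch of one filtering pass in the spirit of MHC, using SciPy's thin plate spline radial basis function as the surface interpolator. All names, thresholds and the toy point cloud are illustrative assumptions, not the authors' implementation; the real algorithm also varies the grid resolution across the three hierarchy levels.

        # One residual-based filtering pass: fit a TPS surface to the current
        # ground set, then accept points whose residual is below a threshold.
        import numpy as np
        from scipy.interpolate import Rbf

        def classify_level(points, ground_mask, threshold):
            g = points[ground_mask]
            tps = Rbf(g[:, 0], g[:, 1], g[:, 2], function="thin_plate")
            residuals = points[:, 2] - tps(points[:, 0], points[:, 1])
            return ground_mask | (np.abs(residuals) < threshold)

        # Toy cloud: gently sloping ground plus 20 elevated non-ground returns.
        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 100, size=(200, 2))
        z = 0.01 * xy[:, 0] + rng.normal(0, 0.05, 200)
        z[:20] += 5.0
        cloud = np.column_stack([xy, z])

        mask = np.zeros(len(cloud), bool)
        mask[np.argsort(cloud[:, 2])[:30]] = True      # seed with lowest points
        for threshold in (0.2, 0.3, 0.5):              # loosened per "level"
            mask = classify_level(cloud, mask, threshold)
        print(f"classified {mask.sum()} ground points of {len(cloud)}")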

  16. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Directory of Open Access Journals (Sweden)

    Wonhee Lee

    2014-02-01

    Full Text Available A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. The method combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition based on the discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but its performance is influenced by the relative size of the noise and the fault. In this paper, the DWT helps to cancel the high-frequency sensor noise, so the proposed technique improves the probability of detecting and isolating small faults by combining EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU).
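
    A minimal sketch of the denoising step described above, assuming the PyWavelets package; the wavelet, decomposition level and soft universal threshold are illustrative choices, not necessarily those of the paper.

        # DWT denoising: estimate the noise level from the finest detail band,
        # soft-threshold all detail coefficients, then reconstruct.
        import numpy as np
        import pywt

        def dwt_denoise(signal, wavelet="db4", level=3):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]

        t = np.linspace(0, 1, 1024)
        clean = np.sin(2 * np.pi * 5 * t)
        noisy = clean + 0.3 * np.random.randn(t.size)        # sensor noise
        denoised = dwt_denoise(noisy)
        print("RMS error:", np.sqrt(np.mean((denoised - clean) ** 2)))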

  17. Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA

    Science.gov (United States)

    Lee, Wonhee; Park, Chan Gook

    2014-01-01

    A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. The method combines the extended parity space approach (EPSA) with a multi-resolution signal decomposition based on the discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but its performance is influenced by the relative size of the noise and the fault. In this paper, the DWT helps to cancel the high-frequency sensor noise, so the proposed technique improves the probability of detecting and isolating small faults by combining EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). PMID:24556675

  18. SEGMENTATION OF POLARIMETRIC SAR IMAGES USING WAVELET TRANSFORMATION AND TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    A. Rezaeian

    2015-12-01

    Full Text Available Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation, combining gray level information with texture information. First, we produce coherency or covariance matrices and then generate a span image from them. The next step is texture feature extraction from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
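
    As a rough illustration of this pipeline (DWT sub-band texture energies followed by k-means clustering), here is a hedged Python sketch; the window size, wavelet, and the simulated speckle-like "span" image are assumptions made only for self-containment.

        # Per-window texture features from a single-level 2-D DWT, then k-means.
        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        def texture_features(img, win=8, wavelet="haar"):
            """Feature vector per window: mean gray level plus the energies
            of the LH/HL/HH sub-bands of a single-level 2-D DWT."""
            feats = []
            for i in range(0, img.shape[0] - win + 1, win):
                for j in range(0, img.shape[1] - win + 1, win):
                    patch = img[i:i + win, j:j + win]
                    _, (lh, hl, hh) = pywt.dwt2(patch, wavelet)
                    feats.append([patch.mean(),
                                  np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)])
            return np.array(feats)

        span = np.random.gamma(2.0, 1.0, (128, 128))   # speckle-like stand-in
        span[:, 64:] *= 3.0                            # brighter "field" half
        X = texture_features(span)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print("cluster sizes:", np.bincount(labels))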

  19. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features

    Science.gov (United States)

    Rezaeian, A.; Homayouni, S.; Safari, A.

    2015-12-01

    Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular Polarimetric SAR (PolSAR) images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features using the wavelet transformation, combining gray level information with texture information. First, we produce coherency or covariance matrices and then generate a span image from them. The next step is texture feature extraction from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.

  20. Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD

    Science.gov (United States)

    Favorskaya, M. N.; Savchina, E. I.

    2017-05-01

    Medical Image Watermarking (MIW) is a special field of watermarking owing to the requirements of the Digital Imaging and Communications in Medicine (DICOM) standard, in place since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed various types of information into the host medical image, including the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark covering the doctor's region of interest. These four types of information are represented in different forms, and some of them are encrypted according to the DICOM requirements; however, all of them must be merged into a generalized binary stream for embedding. This generalized binary stream may be very large, so not all watermarking methods can be applied successfully. Recently, the digital shearlet transform was introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal-to-noise ratio, structural similarity index measure, and bit error rate.

  1. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Science.gov (United States)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  2. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Directory of Open Access Journals (Sweden)

    Aichun Zhu

    2018-03-01

    Full Text Available This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  3. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed-cone C/S: how to utilize tabulated kernels instead of analytical parametrizations, and how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented. They differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For the simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transport only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams, with voxel sizes of 0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm. The results show that all three algorithms have negligible differences (0.1%) at the fine resolution (0.5 mm voxels), but the differences become significant as the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose; for the CCK algorithm they are around 1% of the maximum dose. Among all three methods, the CCK algorithm is thus the least sensitive to voxel size.
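
    The relationship between the three effective kernels can be illustrated in one dimension: the CK is the running integral of the tabulated DK, and the CCK integrates the CK once more, so a difference of CCK values yields the voxel-averaged CK over a finite extent, which is what suppresses the voxel size effect. The sketch below uses a synthetic exponential kernel purely for illustration; it is not clinical kernel data.

        import numpy as np

        r = np.linspace(0.0, 10.0, 1001)     # radial grid, synthetic units
        dr = r[1] - r[0]
        dk = np.exp(-r)                      # toy tabulated differential kernel

        ck = np.cumsum(dk) * dr              # CK(r): integral of DK up to r
        cck = np.cumsum(ck) * dr             # CCK(r): integral of CK up to r

        a, b = 2.0, 3.0                      # one voxel extent along the ray
        ia, ib = np.searchsorted(r, [a, b])
        deposit = ck[ib] - ck[ia]            # energy over [a, b] from the CK
        avg_ck = (cck[ib] - cck[ia]) / (b - a)   # voxel-averaged CK via CCK
        print(deposit, avg_ck)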

  4. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled

  5. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploiting the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of an audio signal, so the watermarks can be embedded by slightly modifying these frequency components. The audio fragments segmented from the cover audio signal are decomposed by the DWT to obtain several groups of wavelet coefficients in different frequency bands, and the fourth-level detail coefficients are selected and divided into a former packet and a latter packet, each of which is transformed by the DCT to obtain a set of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. Watermark extraction is blind, requiring no access to the original audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, a large payload capacity and strong robustness against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
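
    A minimal sketch of one embedding step in this spirit, assuming PyWavelets and SciPy; the paper's "special embedding rule" is not reproduced, so the sign-based rule, wavelet and strength delta below are all assumptions.

        # Embed one bit by nudging the average DCT amplitudes of the former
        # and latter halves of the level-4 detail band so that their ordering
        # carries the bit.
        import numpy as np
        import pywt
        from scipy.fft import dct, idct

        def embed_bit(frame, bit, delta=0.05):
            coeffs = pywt.wavedec(frame, "db4", level=4)
            d4 = coeffs[1]                              # level-4 detail band
            half = len(d4) // 2
            A = dct(d4[:half], norm="ortho")            # former packet TDC
            B = dct(d4[half:2 * half], norm="ortho")    # latter packet TDC
            ma, mb = np.mean(np.abs(A)), np.mean(np.abs(B))
            target = delta if bit else -delta           # desired sign of ma-mb
            shift = (target - (ma - mb)) / 2
            A *= (ma + shift) / max(ma, 1e-12)          # rescale average amps
            B *= (mb - shift) / max(mb, 1e-12)
            d4[:half] = idct(A, norm="ortho")
            d4[half:2 * half] = idct(B, norm="ortho")
            coeffs[1] = d4
            return pywt.waverec(coeffs, "db4")[:len(frame)]

        audio = np.random.randn(4096) * 0.1             # stand-in audio frame
        marked = embed_bit(audio, 1)                    # extraction: ma-mb > 0
        print("max sample change:", np.max(np.abs(marked - audio)))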

  6. Built-Up Area Detection from High-Resolution Satellite Images Using Multi-Scale Wavelet Transform and Local Spatial Statistics

    Science.gov (United States)

    Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.

    2018-04-01

    Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, a multi-resolution wavelet transform and a local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by a wavelet transform at three levels. The high-frequency detail information in three directions (horizontal, vertical and diagonal) is then extracted, followed by a maximization operation to integrate the information across directions. Afterwards, a cross-scale operation is applied to fuse the different levels of information. Finally, the local spatial autocorrelation statistic is used to enhance the saliency of built-up features, and an adaptive threshold algorithm is used to detect the built-up areas. Experiments are conducted on ZY-3 and QuickBird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection.
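
    A hedged sketch of the wavelet part of this pipeline: three-level decomposition, per-level maximization over the three detail directions, then cross-scale fusion by upsampling and summing. The local-statistics enhancement and adaptive threshold are omitted, and the wavelet and test image are assumptions.

        import numpy as np
        import pywt
        from scipy.ndimage import zoom

        img = np.random.rand(256, 256)
        img[96:160, 96:160] += np.random.rand(64, 64)   # texture-rich patch

        coeffs = pywt.wavedec2(img, "db2", level=3)
        saliency = np.zeros_like(img)
        for details in coeffs[1:]:                      # coarse-to-fine levels
            m = np.maximum.reduce([np.abs(d) for d in details])  # direction max
            factor = (img.shape[0] / m.shape[0], img.shape[1] / m.shape[1])
            saliency += zoom(m, factor, order=1)        # cross-scale fusion
        print("mean saliency inside vs overall:",
              saliency[96:160, 96:160].mean(), saliency.mean())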

  7. Robust Circle Detection Using Harmony Search

    Directory of Open Access Journals (Sweden)

    Jaco Fourie

    2017-01-01

    Full Text Available Automatic circle detection is an important element of many image processing algorithms. Traditionally, the Hough transform has been used to find circular objects in images, but more modern approaches that make use of heuristic optimisation techniques have been developed. These are often used on large complex images where the presence of noise or limited computational resources make the Hough transform impractical. Previous research on the use of Harmony Search (HS) in circle detection showed that HS is an attractive alternative to many of the modern circle detectors based on heuristic optimisers such as genetic algorithms and simulated annealing. We propose improvements to this work that enable our algorithm to robustly find multiple circles in larger data sets and still work on realistic images that are heavily corrupted by noisy edges.
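
    For reference, the classical Hough baseline that such heuristic detectors are compared against is available directly in OpenCV; a minimal sketch on a synthetic image follows. The parameter values are illustrative and typically need per-image tuning.

        import cv2
        import numpy as np

        img = np.zeros((200, 200), np.uint8)
        cv2.circle(img, (100, 100), 40, 255, 2)     # synthetic test circle

        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                   param1=100, param2=20,
                                   minRadius=10, maxRadius=80)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                print(f"circle at ({x}, {y}) with radius {r}")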

  8. Automatic crop row detection from UAV images

    DEFF Research Database (Denmark)

    Midtiby, Henrik; Rasmussen, Jesper

    Images from Unmanned Aerial Vehicles can provide information about the weed distribution in fields. A direct way is to quantify the amount of vegetation present in different areas of the field. The limitation of this approach is that it includes both crops and weeds in the reported numbers. To get … are considered weeds. We have used a sugar beet field as a case for evaluating the proposed crop detection method. The suggested image processing consists of: 1) locating vegetation regions in the image by thresholding the excess green image derived from the original image, 2) calculating the Hough transform of the segmented image, 3) determining the dominating crop row direction by analysing the output of the Hough transform, and 4) using the found crop row direction to locate crop rows.
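
    A minimal sketch of steps 2 and 3 on a synthetic vegetation mask, assuming OpenCV; in practice the mask would come from thresholding the excess green index (ExG = 2g - r - b) of the UAV image, and all thresholds below are assumptions.

        import cv2
        import numpy as np

        # Synthetic stand-in for the thresholded excess-green mask: a set of
        # parallel "crop rows" drawn as bright lines on a dark background.
        mask = np.zeros((400, 400), np.uint8)
        for offset in range(0, 400, 60):
            cv2.line(mask, (offset, 0), (offset + 100, 399), 255, 3)

        # Step 2: Hough transform of the segmented image.
        lines = cv2.HoughLines(mask, 1, np.pi / 180, 150)

        # Step 3: dominant crop row direction from the theta histogram.
        angles = lines[:, 0, 1]
        hist, edges = np.histogram(angles, bins=90, range=(0, np.pi))
        dominant = edges[np.argmax(hist)]
        print(f"dominant row direction: {np.degrees(dominant):.1f} degrees")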

  9. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available … the implementation of each algorithm. The comparison is performed by considering the alignment results computed with each group of algorithms when varying the number of minutiae points, the rotation angle, and the translation. In addition, the memory usage, computing time…

  10. Comparison of effective Hough transform-based fingerprint alignment approaches

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-08-01

    Full Text Available … points set with larger rotation and a small number of points. The DRBA approach was found to perform better with minutiae points having a large amount of translation, and its computational time was less than that of the LMBA approach. However, the memory usage…

  11. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    Science.gov (United States)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose-invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- and rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image, and a distance transform is utilized to quantify the level of match.

  12. Using wavelet multi-resolution nature to accelerate the identification of fractional order system

    International Nuclear Information System (INIS)

    Li Yuan-Lu; Meng Xiao; Ding Ya-Qing

    2017-01-01

    Because of the fractional order derivatives, the identification of a fractional order system (FOS) is more complex than that of an integer order system (IOS). In order to avoid high time consumption in the system identification, the least-squares method is used to find the other parameters while the fractional derivative order is held fixed; the optimal parameters of the system are then found by varying the derivative order over an interval. In addition, the operational matrix of fractional order integration, combined with the multi-resolution nature of a wavelet, is used to accelerate the FOS identification, which is achieved by discarding the wavelet coefficients of the high-frequency components of the input and output signals. Finally, identifications of some known fractional order systems and of an elastic torsion system are used to verify the proposed method. (paper)

  13. ROBUST MOTION SEGMENTATION FOR HIGH DEFINITION VIDEO SEQUENCES USING A FAST MULTI-RESOLUTION MOTION ESTIMATION BASED ON SPATIO-TEMPORAL TUBES

    OpenAIRE

    Brouard , Olivier; Delannay , Fabrice; Ricordel , Vincent; Barba , Dominique

    2007-01-01

    Motion segmentation methods are effective for tracking video objects. However, object segmentation methods based on motion need to know the global motion of the video in order to back-compensate it before computing the segmentation. In this paper, we propose a method which estimates the global motion of a High Definition (HD) video shot and then segments it using the remaining motion information. First, we develop a fast method for multi-resolution motion est…

  14. A fast multi-resolution approach to tomographic PIV

    Science.gov (United States)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on optical tomographic reconstruction applied to simultaneously recorded images of the light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, which are well suited to handling a limited number of views but are computationally intensive and memory demanding; the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimate of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated particle distributions, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in memory storage is also achieved, along with a slight accuracy improvement. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same accuracy.
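
    At the core of the method is the multiplicative MART update, in which each voxel intensity is corrected by the ratio of the measured projection to the current reprojection, raised to a relaxation exponent. The 2-D sketch below uses simple row and column sums as two orthogonal "cameras"; the geometry, relaxation factor and iteration count are illustrative assumptions, not a full Tomo-PIV setup.

        import numpy as np

        truth = np.zeros((32, 32))
        truth[8, 12] = truth[20, 5] = 1.0            # two "particles"
        proj_rows, proj_cols = truth.sum(1), truth.sum(0)   # two views

        E = np.ones_like(truth)                      # uniform first guess
        mu = 0.5                                     # relaxation exponent
        for _ in range(5):                           # 5 MART iterations
            ratio_r = proj_rows / np.maximum(E.sum(1), 1e-12)
            E *= ratio_r[:, None] ** mu              # update from view 1
            ratio_c = proj_cols / np.maximum(E.sum(0), 1e-12)
            E *= ratio_c[None, :] ** mu              # update from view 2
        peaks = np.unravel_index(np.argsort(E, axis=None)[-2:], E.shape)
        print("reconstructed particle positions:", peaks)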

  15. Analysis of computational complexity for HT-based fingerprint alignment algorithms on java card environment

    CSIR Research Space (South Africa)

    Mlambo, CS

    2015-01-01

    Full Text Available In this paper, implementations of three Hough Transform based fingerprint alignment algorithms are analyzed with respect to time complexity in the Java Card environment. The three algorithms are: Local Match Based Approach (LMBA), Discretized Rotation Based…

  16. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet [Carnegie Institution for Science, Stanford, CA]; Michalak, Anna M. [Carnegie Institution for Science, Stanford, CA]; van Bloemen Waanders, Bart Gustaaf [Sandia National Laboratories, Albuquerque, NM]; McKenna, Sean Andrew [IBM Research, Mulhuddart, Dublin 15, Ireland]

    2013-04-01

    The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2) to be used in atmospheric inversions; such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform a sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights can be estimated from the observations. Further, our method for imposing boundary conditions leads to an approximately tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
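
    The data-driven sparsification can be illustrated with a toy 2-D field: transform into a wavelet basis, keep only the k largest coefficients, and reconstruct. This sketch assumes PyWavelets; the field, wavelet and k are arbitrary choices, not the paper's setup.

        import numpy as np
        import pywt

        field = np.zeros((64, 64))
        field[20:24, 30:34] = 5.0                   # a "city" hotspot
        field += 0.1 * np.random.rand(64, 64)       # diffuse background

        coeffs = pywt.wavedec2(field, "haar", level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)
        k = 50                                      # retained wavelets
        thresh = np.sort(np.abs(arr.ravel()))[-k]
        arr[np.abs(arr) < thresh] = 0.0             # data-driven sparsification
        sparse = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "haar")
        err = np.linalg.norm(sparse - field) / np.linalg.norm(field)
        print(f"kept {k} of {arr.size} coefficients, relative error {err:.3f}")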

  17. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    International Nuclear Information System (INIS)

    Cieri, D.

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will eventually be implemented to process the tracker data is still under discussion; one possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested on simulated pp collision data and is currently being demonstrated in hardware using the “MP7”, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)
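
    A minimal sketch of an r-phi Hough transform of this flavour: each tracker stub (r, phi) votes along the line phi0 = phi + c*r in a binned (phi0, c) parameter space, with c proportional to q/pT, so a track appears as a peak in the accumulator. The track model, the binning and the synthetic stubs are simplifying assumptions, not the CMS firmware.

        import numpy as np

        def hough_tracks(stubs, n_phi0=64, n_c=32, c_max=0.005):
            acc = np.zeros((n_phi0, n_c), int)
            c_bins = np.linspace(-c_max, c_max, n_c)
            for r, phi in stubs:
                phi0 = (phi + c_bins * r) % (2 * np.pi)   # vote along a line
                idx = (phi0 / (2 * np.pi) * n_phi0).astype(int)
                acc[idx, np.arange(n_c)] += 1
            return acc, c_bins

        # Six stubs from one synthetic track with c = 0.002 and phi0 = 1.0.
        radii = np.array([25., 35., 50., 70., 90., 110.])
        stubs = [(r, 1.0 - 0.002 * r) for r in radii]
        acc, c_bins = hough_tracks(stubs)
        i, j = np.unravel_index(acc.argmax(), acc.shape)
        print(f"peak: phi0 bin {i}, c ≈ {c_bins[j]:.4f}, votes = {acc[i, j]}")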

  18. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  19. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James; Ho, Pin-Han; Shihada, Basem

    2009-01-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  20. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for the early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples from coarser-scale variable selection to guide posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Our model also incorporates two levels of structural information into the variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to resting-state functional magnetic resonance imaging (R-fMRI) data from the ABIDE study, our methods identify voxel-level imaging biomarkers that are highly predictive of ASD and biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better variable selection performance than existing methods.

  1. Implementation of CT and IHT Processors for Invariant Object Recognition System

    Directory of Open Access Journals (Sweden)

    J. Turan jr.

    2004-12-01

    Full Text Available This paper presents the PLD or ASIC implementation of key modules of an invariant object recognition system based on the combination of the Incremental Hough transform (IHT), correlation and the rapid transform (RT). The invariant object recognition system was implemented partially in C++ for a general-purpose processor on a personal computer and partially described in VHDL for implementation in a PLD or ASIC.

  2. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Nasreddine Taleb

    2010-09-01

    Full Text Available Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT based methods are proposed for registration, and a comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding-up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise.

  3. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms.

    Science.gov (United States)

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques of fitness sharing and elitism. Two NSCT based methods are proposed for registration, and a comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding-up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise.

  4. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    Science.gov (United States)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experience of actions within a particular environment (we call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for the inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment, because such changes provoke the creation of new schemata by generalizing from experience, while still maintaining minimal computational complexity thanks to the system's multiresolutional nature.

  5. Investigations of homologous disaccharides by elastic incoherent neutron scattering and wavelet multiresolution analysis

    Energy Technology Data Exchange (ETDEWEB)

    Magazù, S.; Migliardo, F. [Dipartimento di Fisica e di Scienze della Terra dell'Università degli Studi di Messina, Viale F. S. D'Alcontres 31, 98166 Messina (Italy)]; Vertessy, B.G. [Institute of Enzymology, Hungarian Academy of Science, Budapest (Hungary)]; Caccamo, M.T., E-mail: maccamo@unime.it [Dipartimento di Fisica e di Scienze della Terra dell'Università degli Studi di Messina, Viale F. S. D'Alcontres 31, 98166 Messina (Italy)]

    2013-10-16

    Highlights: • Innovative multiresolution wavelet analysis of elastic incoherent neutron scattering. • Elastic Incoherent Neutron Scattering measurements on homologous disaccharides. • EINS wavevector analysis. • EINS temperature analysis. - Abstract: In the present paper, the results of a wavevector and thermal analysis, through a wavelet approach, of Elastic Incoherent Neutron Scattering (EINS) data collected on water mixtures of three homologous disaccharides are reported. The wavelet analysis allows the spatial properties of the three systems to be compared over the wavevector range Q = 0.27 Å⁻¹ ÷ 4.27 Å⁻¹. It emerges that, differently from previous analyses, for trehalose the scalograms are consistently lower and sharper with respect to maltose and sucrose, giving rise to a global spectral density that is markedly less extended along the wavevector range. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restraint of trehalose with respect to the other two homologous disaccharides.

  6. Extraction of Nucleolus Candidate Zone in White Blood Cells of Peripheral Blood Smear Images Using Curvelet Transform

    Directory of Open Access Journals (Sweden)

    Ramin Soltanzadeh

    2012-01-01

    Full Text Available The main part of each white blood cell (WBC) is its nucleus, which contains chromosomes. Although white blood cells (WBCs) with giant nuclei are the main symptom of leukemia, they are not sufficient to prove this disease, and other symptoms must be investigated; another important symptom of leukemia is the existence of nucleoli in the nucleus. The nucleus contains chromatin and a structure called the nucleolus. Chromatin is DNA in its active form, while the nucleolus is composed of protein and RNA, which are usually inactive. In this paper, to diagnose this symptom and discriminate between nucleoli and chromatin, we employ the curvelet transform, a multiresolution transform for detecting 2D singularities in images. First, nuclei are extracted by means of the K-means method; then the curvelet transform is applied to the extracted nuclei and the coefficients are modified; finally, the reconstructed image is used to extract the candidate locations of chromatin and nucleoli. This method was applied to 100 microscopic images and detects the nucleolus candidate zone with a specificity of 80.2% and a sensitivity of 84.3%. After detection of the nucleolus candidate zone, new features that can be used to classify atypical and blast cells, such as the gradient of the saturation channel, are extracted.

  7. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    Science.gov (United States)

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

    To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections, user-identified ipsilateral hemidiaphragm apex (IHDA) positions in two full-exhale and full-inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient-direction matching function for an ideal hemidiaphragm, determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s, and the average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames of one patient's scans, adjustments to algorithm constraints corrected them. The DHT-based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MV CBCT projection data, and has potential for calibrating external motion surrogates against diaphragm motion.

  8. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    An ongoing challenge in the visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of the data, projections of the data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait of the data, namely variation. We use two case studies to explore this idea: one with a synthetic dataset, and another with a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.

  9. Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models.

    Science.gov (United States)

    Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng

    2016-09-01

    Segmenting lung fields in a chest radiograph is essential for automatically analyzing the image. We present an unsupervised method based on a multiresolution fractal feature vector, which characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour, and the final contour is obtained with deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of the lung fields, the cardiothoracic ratio (CTR), a simple index for evaluating cardiac hypertrophy, can be measured. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.

  10. Hexagonal wavelet processing of digital mammography

    Science.gov (United States)

    Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.

    1993-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.

  11. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    Science.gov (United States)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent…

  12. Geometry-based populated chessboard recognition

    Science.gov (United States)

    Xie, Youye; Tang, Gongguo; Hoff, William

    2018-04-01

    Chessboards are commonly used to calibrate cameras, and many robust methods have been developed to recognize unoccupied boards. However, when the chessboard is populated with chess pieces, such as during an actual game, the problem of recognizing the board is much harder. Challenges include occlusion caused by the chess pieces, the presence of outlier lines, and low viewing angles of the chessboard. In this paper, we present a novel approach to address the above challenges and recognize the chessboard. The Canny edge detector and Hough transform are used to capture all possible lines in the scene. k-means clustering and a k-nearest-neighbors inspired algorithm are applied to cluster the lines and reject outlier lines based on their Euclidean distances to their nearest neighbors in a scaled Hough transform space. Finally, based on prior knowledge of the chessboard structure, a geometric constraint is used to find the correspondences between image lines and the lines on the chessboard through a homography transformation. The proposed algorithm works for a wide range of operating angles and achieves high accuracy in experiments.
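
    A hedged sketch of the line-extraction front end (Canny, standard Hough, then k-means grouping of the lines), assuming OpenCV and scikit-learn; here the lines are clustered on orientation only, via the angle-doubling trick, which is a simplification of the paper's scaled (rho, theta) distance.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        img = np.full((400, 400), 255, np.uint8)
        for p in range(0, 401, 50):                  # synthetic empty board grid
            cv2.line(img, (p, 0), (p, 399), 0, 2)
            cv2.line(img, (0, p), (399, p), 0, 2)

        edges = cv2.Canny(img, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 300)[:, 0, :]  # (rho, theta)

        # Cluster by orientation: double the angle so theta and theta+pi map
        # to the same point, then separate the two pencils of board lines.
        theta = lines[:, 1]
        feats = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        print("lines per direction:", np.bincount(labels))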

  13. Resonance – Journal of Science Education | Indian Academy of ...

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 9, Issue 11. Wavelet Transforms: Application to Data Analysis – I. Jatan K Modi, Sachin P Nanavati, Amit S Phadke, Prasanta K Panigrahi. Keywords: discrete wavelet transform; multi-resolution analysis (MRA); translation; scaling; Daubechies' basis sets.

  14. Laser guided automated calibrating system for accurate bracket ...

    African Journals Online (AJOL)

    It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...

  15. GPU-Vote: A Framework for Accelerating Voting Algorithms on GPU.

    NARCIS (Netherlands)

    Braak, van den G.J.W.; Nugteren, C.; Mesman, B.; Corporaal, H.; Kaklamanis, C.; Papatheodorou, T.; Spirakis, P.G.

    2012-01-01

    Voting algorithms, such as histogram and Hough transforms, are frequently used algorithms in various domains, such as statistics and image processing. Algorithms in these domains may be accelerated using GPUs. Implementing voting algorithms efficiently on a GPU however is far from trivial due to

  16. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large data sets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, in which a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing greater flexibility in capturing the skewness and heavy tails that characterize many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.

  17. Implementing wavelet packet transform for valve failure detection using vibration and acoustic emission signals

    International Nuclear Information System (INIS)

    Sim, H Y; Ramli, R; Abdullah, M A K

    2012-01-01

    The efficiency of reciprocating compressors relies heavily on the health condition of their moving components, most importantly the valves. Previous studies showed good correlation between the dynamic response and the physical condition of the valves. This can be assessed by employing the vibration technique, which is capable of monitoring the response of the valve, and the acoustic emission technique, which is capable of detecting the valves' material deformation. However, the relationship between the two techniques is rarely investigated. In this paper, the two techniques were examined using time-frequency analysis. The wavelet packet transform (WPT) was chosen as the multi-resolution analysis technique over the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT), because WPT overcomes the high computational time and high redundancy of CWT and provides detailed analysis of the high-frequency components that DWT lacks. The features of both signals can be extracted by evaluating the normalised WPT coefficients for different time windows under different valve conditions. By comparing the normalised coefficients over a certain time frame and frequency range, feature vectors revealing the condition of the valves can be constructed. One-way analysis of variance was employed on these feature vectors to test the significance of the data under different valve conditions. It is believed that AE signals can give a better representation of the valve condition, as they can detect both the fluid motion and the material deformation of the valves, compared to the vibration signals.
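    A minimal sketch, assuming PyWavelets, of how normalised wavelet packet energies can serve as a feature vector of the kind described above; the sampling rate, wavelet, and window length are placeholder assumptions.

```python
import numpy as np
import pywt

fs = 50_000                      # assumed sampling rate of the AE/vibration signal
x = np.random.randn(fs // 10)    # stand-in for one 0.1 s window of sensor data

wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=4)
nodes = wp.get_level(4, order="freq")          # 16 frequency-ordered sub-bands

# Feature vector: energy of each sub-band, normalised so the features sum to
# one, mirroring the "normalised WPT coefficients" described in the abstract.
energies = np.array([np.sum(node.data ** 2) for node in nodes])
features = energies / energies.sum()
print(features.round(3))
```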

  18. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    Science.gov (United States)

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  19. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
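    The lossless core of such a codec can be illustrated with a simple 2D linear predictor and residual coding. This is an illustration only: a real decoder forms the prediction causally from already-decoded samples, and the paper's codec additionally handles quantization, bitstream packing, and GPU decoding.

```python
import numpy as np

heights = np.random.randint(0, 4096, size=(8, 8)).astype(np.int32)  # toy terrain patch

# Lorenzo-style predictor: P(i, j) = H[i-1, j] + H[i, j-1] - H[i-1, j-1],
# with out-of-range neighbours treated as zero.
pred = np.zeros_like(heights)
pred[1:, :] += heights[:-1, :]
pred[:, 1:] += heights[:, :-1]
pred[1:, 1:] -= heights[:-1, :-1]

residuals = heights - pred        # small, near-zero values entropy-code well
restored = pred + residuals       # prediction plus residuals is bit-exact (lossless)
assert np.array_equal(restored, heights)
```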

  20. Distributed Multiscale Data Analysis and Processing for Sensor Networks

    National Research Council Canada - National Science Library

    Wagner, Raymond; Sarvotham, Shriram; Choi, Hyeokho; Baraniuk, Richard

    2005-01-01

    .... Second, the communication overhead of multiscale algorithms can become prohibitive. In this paper, we take a first step in addressing both shortcomings by introducing two new distributed multiresolution transforms...

  1. Geological disaster survey based on Curvelet transform with borehole Ground Penetrating Radar in Tonglushan old mine site.

    Science.gov (United States)

    Tang, Xinjian; Sun, Tao; Tang, Zhijie; Zhou, Zenghui; Wei, Baoming

    2011-06-01

    The Tonglushan old mine site, located in Huangshi City, China, is world famous. In recent years, however, some of the ruins have suffered from geological disasters such as local deformation and surface cracking. Structural abnormalities of the deep underground rock mass were surveyed with borehole ground penetrating radar (GPR) to find out whether there were any mined galleries or mined-out areas below the ruins. Using both the multiresolution analysis and the sub-band directionality of the Curvelet transform, the feature information of the targets' GPR signals was studied in the Curvelet transform domain. The heterogeneity of geotechnical media and the clutter of the complicated GPR background could be overcome, and the singularity characteristics of typical rock-mass signals could be extracted. Random noise was removed by thresholding in the Curvelet domain, based on the statistical characteristics of the wanted signals and the noise; direct-wave suppression and spatial-distribution feature extraction then gave better results by exploiting the directionality of the Curvelet transform. GprMax numerical modeling and analysis of the sample data verified the feasibility and effectiveness of the method, which is important and applicable for analyzing the geological structure and disaster development of the Tonglushan old mine site.

  2. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    Energy Technology Data Exchange (ETDEWEB)

    Tsantis, Stavros [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Spiliopoulos, Stavros; Karnabatidis, Dimitrios [Department of Radiology, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Skouroliakou, Aikaterini [Department of Energy Technology Engineering, Technological Education Institute of Athens, Athens 12210 (Greece); Hazle, John D. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Kagadis, George C., E-mail: gkagad@gmail.com, E-mail: George.Kagadis@med.upatras.gr, E-mail: GKagadis@mdanderson.org [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504, Greece and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A

  3. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    International Nuclear Information System (INIS)

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C.

    2014-01-01

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A

  4. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision-based row detection system. A new approach for row recognition is presented, based on a grey-scale Hough transform applied to intelligently merged images, which results in a considerable improvement in image-processing speed.

  5. Automatic multiresolution age-related macular degeneration detection from fundus images

    Science.gov (United States)

    Garnier, Mickaël.; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly; therefore, early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow an early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using a multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
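    A hedged sketch of the multiresolution texture pipeline, assuming PyWavelets and scikit-image; plain uniform LBP stands in for the completed LBP model used in the paper, and the image size, wavelet, and bin counts are illustrative.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

img = np.random.rand(256, 256)                 # stand-in for a fundus image channel

descriptor = []
coeffs = pywt.wavedec2(img, "haar", level=3)   # [cA3, then detail triples per level]
for level_bands in ([coeffs[0]],) + tuple(coeffs[1:]):
    for band in level_bands:
        # Rescale each sub-band to 8-bit before texture coding.
        band8 = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
        lbp = local_binary_pattern(band8, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        descriptor.append(hist)

feature_vector = np.concatenate(descriptor)    # in the paper, this feeds an LDA
print(feature_vector.shape)
```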

  6. The use of wavelet transforms in the solution of two-phase flow problems

    International Nuclear Information System (INIS)

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. Wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.

  7. LICENSE PLATE LOCALIZATION USING GABOR FILTERS AND NEURAL NETWORKS

    OpenAIRE

    Sami Ktata; Faouzi Benzarti; Hamid Amiri

    2013-01-01

    Vehicle License Plate Detection (LPD) is an important step for the vehicle plate recognition which can be used in the intelligent transport systems. Many methods have been proposed for the detection of license plates based on: Mathematical morphology, Discrete Wavelet Transform, Hough Transform and others. In general, an LPR system includes four main parts: Vehicle image acquisition, license plate detection, character segmentation and character recognition. In this study, we present a robust ...

  8. A one-time truncate and encode multiresolution stochastic framework

    Energy Technology Data Exchange (ETDEWEB)

    Abgrall, R.; Congedo, P.M.; Geraci, G., E-mail: gianluca.geraci@inria.fr

    2014-01-15

    In this work a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme and handling a large class of problems, from unsteady to discontinuous solutions. Its formulation permits recovering the same results concerning the interpolation theory of the classical multiresolution approach, but with an extension to uncertainty quantification problems. The present strategy permits building numerical schemes with higher accuracy than other classical uncertainty quantification techniques, but with a strong reduction of the numerical cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying, without introducing further complications in the algorithm. The advantages of the present strategy are demonstrated on several numerical problems in which different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction, thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
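    For orientation, the deterministic point-value multiresolution step of the classical Harten framework that this stochastic strategy generalises can be sketched in a few lines: "truncate" thresholds the interpolation details, and the surviving coarse values plus details are what would be encoded. This is a background illustration, not the authors' stochastic algorithm.

```python
import numpy as np

def mr_step(u, eps):
    coarse = u[::2]                                   # decimation to the coarse grid
    # Predict odd samples by linear interpolation of their coarse neighbours.
    pred = 0.5 * (coarse[:-1] + coarse[1:])
    detail = u[1::2][: pred.size] - pred              # interpolation errors (details)
    detail[np.abs(detail) < eps] = 0.0                # truncate step: drop small details
    return coarse, detail

x = np.linspace(0, 1, 65)
u = np.where(x < 0.5, np.sin(2 * np.pi * x), 1.0)     # smooth solution with a jump
coarse, detail = mr_step(u, eps=1e-3)
print("kept details:", np.count_nonzero(detail), "of", detail.size)  # cluster at the jump
```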

  9. Developing a real-time emulation of multiresolutional control architectures for complex, discrete-event systems

    Energy Technology Data Exchange (ETDEWEB)

    Davis, W.J.; Macro, J.G.; Brook, A.L. [Univ. of Illinois, Urbana, IL (United States)] [and others]

    1996-12-31

    This paper first discusses an object-oriented, control architecture and then applies the architecture to produce a real-time software emulator for the Rapid Acquisition of Manufactured Parts (RAMP) flexible manufacturing system (FMS). In specifying the control architecture, the coordinated object is first defined as the primary modeling element. These coordinated objects are then integrated into a Recursive, Object-Oriented Coordination Hierarchy. A new simulation methodology, the Hierarchical Object-Oriented Programmable Logic Simulator, is then employed to model the interactions among the coordinated objects. The final step in implementing the emulator is to distribute the models of the coordinated objects over a network of computers and to synchronize their operation to a real-time clock. The paper then introduces the Hierarchical Subsystem Controller as an intelligent controller for the coordinated object. The proposed approach to intelligent control is then compared to the concept of multiresolutional semiosis that has been developed by Dr. Alex Meystel. Finally, the plans for implementing an intelligent controller for the RAMP FMS are discussed.

  10. Wavelet processing techniques for digital mammography

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse-to-fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  11. Multiresolution signal decomposition schemes. Part 2: Morphological wavelets

    NARCIS (Netherlands)

    H.J.A.M. Heijmans (Henk); J. Goutsias (John)

    1999-01-01

    In its original form, the wavelet transform is a linear tool. However, it has been increasingly recognized that nonlinear extensions are possible. A major impulse to the development of nonlinear wavelet transforms has been given by the introduction of the lifting scheme by Sweldens. The
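    As background, a linear Haar instance of Sweldens' lifting scheme (split, predict, update) is sketched below; the morphological wavelets of the paper replace these linear predict/update steps with nonlinear (e.g., min/max-based) operators.

```python
import numpy as np

def haar_lift(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd samples from their even neighbours
    approx = even + 0.5 * detail   # update: preserve the running mean
    return approx, detail

def haar_unlift(approx, detail):
    even = approx - 0.5 * detail   # invert the update step
    odd = detail + even            # invert the predict step
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2, 4, 6, 8, 9, 7, 5, 3])
a, d = haar_lift(x)
assert np.allclose(haar_unlift(a, d), x)   # lifting is invertible by construction
```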

  12. Cherry Picking Robot Vision Recognition System Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Zhang Qi Rong

    2016-01-01

    Using OpenCV functions, a cherry image captured in a natural environment is processed through image preprocessing, color recognition, threshold segmentation, morphological filtering, edge detection, and the circular Hough transform to obtain the cherry's center and circular contour for machine picking. The system is simple and effective.
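    A hedged OpenCV sketch of this pipeline; the file name and all Hough parameters are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

bgr = cv2.imread("cherry.jpg")                        # hypothetical input image
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                        # smooth before circle detection

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=40, minRadius=10, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(bgr, (x, y), r, (0, 255, 0), 2)    # circular contour
        cv2.circle(bgr, (x, y), 2, (0, 0, 255), 3)    # centre point
cv2.imwrite("cherry_detected.jpg", bgr)
```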

  13. Identification and red blood cell automated counting from blood smear images using computer-aided system.

    Science.gov (United States)

    Acharya, Vasundhara; Kumar, Preetham

    2018-03-01

    Red blood cell count plays a vital role in identifying the overall health of the patient. Hospitals use the hemocytometer to count blood cells. The conventional method of placing the smear under a microscope and counting the cells manually leads to erroneous results and puts medical laboratory technicians under stress. A computer-aided system helps to attain precise results in less time. This research work proposes an image-processing technique for counting the number of red blood cells. It aims to examine and process the blood smear image in order to support the counting of red blood cells and identify the number of normal and abnormal cells in the image automatically. The K-medoids algorithm, which is robust to external noise, is used to extract the WBCs from the image. Granulometric analysis is used to separate the red blood cells from the white blood cells. The red blood cells obtained are counted using the labeling algorithm and the circular Hough transform. The radius range for the circle-drawing algorithm is estimated by computing the distance of the pixels from the boundary, which automates the entire algorithm. A comparison is made between the counts obtained using the labeling algorithm and the circular Hough transform. Results showed that the circular Hough transform was more accurate in counting the red blood cells than the labeling algorithm, as it was successful in identifying even the overlapping cells. The work also compares the results of cell counts obtained using the proposed methodology and the manual approach. It is designed to address the drawbacks of previous research, and can be extended to extract various texture and shape features of the abnormal cells identified, so that diseases like anemia of inflammation and chronic disease can be detected at the earliest.
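    The two counting strategies compared in the paper can be contrasted in a short hedged sketch, assuming a binary red-cell mask produced by the earlier segmentation stages; all file names and parameter values are illustrative.

```python
import cv2
import numpy as np
from scipy import ndimage

mask = cv2.imread("rbc_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary mask

# Labelling: fast, but touching or overlapping cells merge into one component.
_, n_labelled = ndimage.label(mask > 0)

# Circular Hough: slower, but overlapping cells still yield separate circles.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=80, param2=25, minRadius=8, maxRadius=20)
n_hough = 0 if circles is None else circles.shape[1]

print(f"labelling: {n_labelled} cells, Hough: {n_hough} cells")
```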

  14. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    Science.gov (United States)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both the spatial resolution and the transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  15. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    Science.gov (United States)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  16. GPU Enhancement of the Trigger to Extend Physics Reach at the LHC

    CERN Document Server

    Lujan, P.; Hunt, A.; Jindal, P.; LeGresley, P.

    2014-01-01

    At the Large Hadron Collider (LHC), the trigger systems for the detectors must be able to process a very large amount of data in a very limited amount of time, so that the nominal collision rate of 40 MHz can be reduced to a data rate that can be stored and processed in a reasonable amount of time. This need for high performance places very stringent requirements on the complexity of the algorithms that can be used for identifying events of interest in the trigger system, which potentially limits the ability to trigger on signatures of various new physics models. In this paper, we present an alternative tracking algorithm, based on the Hough transform, which avoids many of the problems associated with the standard combinatorial track finding currently used. The Hough transform is also well-adapted for Graphics Processing Unit (GPU)-based computing, and such GPU-based systems could be easily integrated into the existing High-Level Trigger (HLT). This algorithm offers the ability to trigger on topological signa...

  17. Fast pattern recognition with the ATLAS L1Track trigger for HL-LHC

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    A fast hardware-based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider. The goal is to achieve trigger levels in the high pile-up conditions of the High Luminosity Large Hadron Collider that are similar to or better than those achieved at low pile-up, by adding tracking information to the ATLAS hardware trigger. A method for fast pattern recognition using the Hough transform is investigated. In this method, detector hits are mapped onto a 2D parameter space with one parameter related to the transverse momentum and one to the initial track direction. The performance of the Hough transform is studied at different pile-up values. It is also compared, using full event simulation of events with an average pile-up of 200, with a method based on matching detector hits to pattern banks of simulated tracks stored in custom-made Associative Memory ASICs. The pattern recognition is followed by a track-fitting step which calculates the track parameters. The spee...
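    A toy NumPy sketch of this kind of Hough-based pattern recognition: each hit votes for all (q/pT, phi0) pairs consistent with a helical track, and real tracks pile up as accumulator peaks. The linearised track model, the magnetic field, and all constants and units are simplifying assumptions, not ATLAS parameters.

```python
import numpy as np

A = 0.3                      # 0.3*B/2 for an assumed B = 2 T, with r in mm and pT in MeV
qpt_bins = np.linspace(-1e-3, 1e-3, 64)      # q/pT axis, |pT| >= 1 GeV (illustrative)
phi_bins = np.linspace(-np.pi, np.pi, 128)   # phi0 axis

def hough_accumulate(hits_r, hits_phi):
    """Each hit (r, phi) votes along one line phi0 = phi + A*(q/pT)*r."""
    acc = np.zeros((qpt_bins.size, phi_bins.size), dtype=np.int32)
    for r, phi in zip(hits_r, hits_phi):
        phi0 = phi + A * qpt_bins * r
        idx = np.digitize(phi0, phi_bins) - 1
        ok = (idx >= 0) & (idx < phi_bins.size)
        acc[np.flatnonzero(ok), idx[ok]] += 1
    return acc

# Hits of one simulated 5 GeV track with phi0 = 0.3 on five detector layers.
r = np.array([50.0, 100.0, 200.0, 300.0, 400.0])   # layer radii in mm
phi = 0.3 - A * (1.0 / 5000.0) * r                 # q/pT = +1/5000 MeV^-1
acc = hough_accumulate(r, phi)
print("peak votes:", acc.max())                    # equals the number of hits on the track
```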

  18. Fast pattern recognition with the ATLAS L1 track trigger for the HL-LHC

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2016-01-01

    A fast hardware-based track trigger for the high-luminosity upgrade of the Large Hadron Collider (HL-LHC) is being developed in ATLAS. The goal is to achieve trigger levels in high pile-up collisions that are similar to or even better than those achieved in low pile-up running of the LHC, by adding tracking information to the ATLAS hardware trigger, which is currently based on information from the calorimeters and muon trigger chambers only. Two methods for fast pattern recognition are investigated. The first is based on matching tracker hits to pattern banks of simulated high-momentum tracks stored in a custom-made Associative Memory (AM) ASIC. The second is based on the Hough transform, where detector hits are transformed into a 2D Hough space with one variable related to track pT and one to track direction. Hits found by pattern recognition are sent to a track-fitting step which calculates the track parameters. The speed and precision of the track fitting depend on the quality of the hits selected by the patte...

  19. Sign language indexation within the MPEG-7 framework

    Science.gov (United States)

    Zaharia, Titus; Preda, Marius; Preteux, Francoise J.

    1999-06-01

    In this paper, we address the issue of sign language indexation/recognition. Existing tools, such as on-line Web dictionaries or other education-oriented applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method developed introduces the concept of an over-time-stable hand configuration instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor which is defined as a translation, rotation and scale invariant Hough transform. A very compact representation is obtained by considering the Fourier transform of the Hough coefficients. The approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed, and a complete sign language description scheme is proposed.
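    A simplified, hedged interpretation of this descriptor idea, using scikit-image: build a Hough accumulator of the shape, then keep Fourier magnitudes as a compact signature (the magnitude discards phase, which is where shift offsets live). This is not the MPEG-7 reference implementation, and the shape and truncation size are placeholders.

```python
import numpy as np
from skimage.transform import hough_line

shape = np.zeros((64, 64), dtype=bool)     # stand-in for a binarised hand image
shape[20:44, 31:33] = True                 # a crude vertical stroke

# Accumulator over (distance, angle); rotations circularly shift the angle axis.
hspace, angles, dists = hough_line(shape)

# Low-frequency Fourier magnitudes of the accumulator as a compact signature.
signature = np.abs(np.fft.fft2(hspace.astype(float)))[:8, :8].ravel()
print(signature.shape)                     # 64-dimensional descriptor
```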

  20. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated; then we initialize the second distortion parameter to zero, and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting the two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows detecting more points belonging to the distorted lines, so the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
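    Once estimated, applying a one-parameter division model is straightforward: x_u = x_d / (1 + k1 r^2) around the distortion centre. A minimal sketch, with a made-up coefficient and centre (these are exactly the quantities the iterative optimisation above estimates):

```python
import numpy as np

def undistort_points(pts, centre, k1):
    """Apply the division model to an (N, 2) array of pixel coordinates."""
    d = pts - centre                        # coordinates relative to the centre
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return centre + d / (1.0 + k1 * r2)     # division model correction

pts = np.array([[100.0, 120.0], [620.0, 400.0]])
print(undistort_points(pts, centre=np.array([320.0, 240.0]), k1=-1e-7))
```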

  1. Method of center localization for objects containing concentric arcs

    Science.gov (United States)

    Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.

    2015-02-01

    This paper proposes a method for the automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on a tree cross-cut image.

  2. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which yield varying classification accuracies. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved with the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarser segmentation for higher scale parameters and finer segmentation for lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground-truth data for training.

  3. Fusion of perceptions for perceptual robotics

    NARCIS (Netherlands)

    Ciftcioglu, O.; Bittermann, M.S.; Sariyildiz, I.S.

    2006-01-01

    Fusion of perception information for perceptual robotics is described. The visual perception is mathematically modelled as a probabilistic process obtaining and interpreting visual data from an environment. The visual data is processed in a multiresolutional form via wavelet transform and optimally

  4. Mammographic feature enhancement by multiscale analysis

    International Nuclear Information System (INIS)

    Laine, A.F.; Schuler, S.; Fan, J.; Huda, W.

    1994-01-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis by overcomplete multiresolution representations. The authors show that efficient representations may be identified within a continuum of scale-space and used to enhance features of importance to mammography. Methods of contrast enhancement are described based on three overcomplete multiscale representations: (1) the dyadic wavelet transform (separable), (2) the φ-transform (nonseparable, nonorthogonal), and (3) the hexagonal wavelet transform (nonseparable). Multiscale edges identified within distinct levels of transform space provide local support for image enhancement. Mammograms are reconstructed from wavelet coefficients modified at one or more levels by local and global nonlinear operators. In each case, edges and gain parameters are identified adaptively by a measure of energy within each level of scale-space. The authors show quantitatively that transform coefficients, modified by adaptive nonlinear operators, can make unseen or barely seen mammographic features more conspicuous without requiring additional radiation. The results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. The authors demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. By improving the visualization of breast pathology, they can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.

  5. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers

    Science.gov (United States)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas

    2017-03-01

    The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified, which limit the representativeness; on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken
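    A hedged sketch of the scale-wise variance computation, using PyWavelets' undecimated stationary wavelet transform with the Haar wavelet as a stand-in for the maximum overlap DWT used in the study (the two differ mainly in normalisation conventions); the input series is a random placeholder.

```python
import numpy as np
import pywt

transmittance = np.random.rand(2 ** 12)        # stand-in for a transmittance time series

level = 6
coeffs = pywt.swt(transmittance, "haar", level=level)  # [(cA_L, cD_L), ..., (cA_1, cD_1)]

# Wavelet "power" per timescale: variance of the detail coefficients at each
# level; level j captures fluctuations on scales of roughly 2**j samples.
for j, (_, cD) in enumerate(reversed(coeffs), start=1):
    print(f"scale ~2^{j} samples: variance {cD.var():.4f}")
```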

  6. On the use of adaptive multiresolution method with time-varying tolerance for compressible fluid flows

    Science.gov (United States)

    Soni, V.; Hadjadj, A.; Roussel, O.

    2017-12-01

    In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance considering the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed, which is assessed through a new quality criterion, to ensure a time-convergence solution for a suitable quality resolution. The newly developed algorithm coupled with high-resolution spatial and temporal approximations is successfully applied to shock-bluff body and shock-diffraction problems solving Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and the performance of the proposed method.

  7. Automatic detection of NEOs in CCD images using the Hough transform

    Science.gov (United States)

    Ruétalo, M.; Tancredi, G.

    Interest in and dedication to objects whose orbits approach that of the Earth (NEOs) have grown considerably in recent years, to the point that several systematic search campaigns have been launched to enlarge the identified population. The use of photographic plates and visual identification is progressively being replaced by CCD cameras and software packages for the automatic detection of objects in digital images. A very important part of the successful implementation of an automated detection program of this kind is the development of algorithms capable of identifying objects with a low signal-to-noise ratio without high computational requirements. In this work we propose using the Hough transform (employed in some areas of computer vision) to automatically detect approximately rectilinear trails with low signal-to-noise ratios in CCD images. We developed a first implementation of an algorithm based on it and tested it on a series of real images containing trails with signal peaks between ~1 σ and ~3 σ above the background noise level. The algorithm detects most cases without difficulty and in reasonably adequate times.

  8. Visualization of a Turbulent Jet Using Wavelets

    Institute of Scientific and Technical Information of China (English)

    Hui LI

    2001-01-01

    An application of multiresolution image analysis to turbulence was investigated in this paper, in order to visualize the coherent structure and the most essential scales governing turbulence. The digital photograph of a jet slice was decomposed by the two-dimensional discrete wavelet transform based on Daubechies, Coifman and Beylkin bases. The best choice of orthogonal wavelet basis for analyzing the image of the turbulent structures was first discussed. It was found that orthonormal wavelet families with index N<10 were inappropriate for multiresolution image analysis of turbulent flow. The multiresolution images of turbulent structures were very similar when using wavelet bases with higher index numbers, even though the wavelet bases are different functions. From the image components in orthogonal wavelet spaces with different scales, further evidence of the multi-scale structures in the jet can be observed, and the edges of the vortices at different resolutions or scales, as well as the coherent structure, can be easily extracted.

  9. Multiscale wavelet representations for mammographic feature analysis

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  10. A web-mapping system for real-time visualization of the global terrain

    Science.gov (United States)

    Zhang, Liqiang; Yang, Chongjun; Liu, Donglin; Ren, Yingchao; Rui, Xiaoping

    2005-04-01

    In this paper, we present a web-based 3D global terrain visualization application that provides powerful transmission and visualization of global multiresolution data sets across networks. A client/server architecture is put forward. The paper also reports on the relevant research work, such as efficient data compression methods to reduce the physical size of these data sets and accelerate network delivery, streaming transmission for progressively downloading data, and real-time multiresolution terrain surface visualization with high visual quality based on M-band wavelet transforms and a hierarchical triangulation technique. Finally, an experiment is performed using different levels of detail to verify that the system works appropriately.

  11. TecLines: A MATLAB-Based Toolbox for Tectonic Lineament Analysis from Satellite Images and DEMs, Part 2: Line Segments Linking and Merging

    Directory of Open Access Journals (Sweden)

    Mehdi Rahnama

    2014-11-01

    Extraction and interpretation of tectonic lineaments is a routine task when mapping large areas using remote sensing data. However, it is a subjective and time-consuming process, and it is difficult to choose an optimal lineament extraction method that reduces subjectivity and yields vectors similar to what an analyst would manually extract. The objective of this study is the implementation, evaluation and comparison of Hough transform, segment merging and polynomial fitting methods for automated tectonic lineament mapping. For this purpose we developed a new MATLAB-based toolbox (TecLines). The toolbox's capabilities were validated using a synthetic Digital Elevation Model (DEM) and tested in the Andarab fault zone (Afghanistan), where specific fault structures are known. In this study, we used filters in both the frequency and spatial domains and the tensor voting framework to produce binary edge maps. We used the Hough transform to extract linear image discontinuities, and B-splines as a polynomial curve-fitting method to eliminate artificial line segments that are not of interest and to link discontinuous segments with similar trends. We performed statistical analyses in order to compare the final image discontinuity maps with an existing reference map.

  12. Automatic Lumbar Vertebrae Segmentation in Fluoroscopic Images Via Optimised Concurrent Hough Transform

    National Research Council Canada - National Science Library

    Zheng, Yalin

    2001-01-01

    .... Digital videofluoroscopy (DVF) was widely used to obtain images for motion studies. This can provide motion sequences of the lumbar spine, but the images obtained often suffer due to noise, exacerbated by the very low radiation dosage...

  13. The application of Hough transform-based fingerprint alignment on match-on-card

    CSIR Research Space (South Africa)

    Mlambo, S

    2015-03-01

    Full Text Available of these cards, has led to the need for further improvements on smart cards combined with fingerprint biometrics. Due to the insufficient memory space and few instruction sets in Java smart cards, developers and programmers are faced with implementing efficient...

  14. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    Science.gov (United States)

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolution scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects by the use of surface feature histograms and SVM classification. The corresponding objects are fitted using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
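    The final alignment step can be illustrated with a self-contained nearest-neighbour ICP using SciPy and an SVD-based rigid update; the preceding feature-histogram/SVM pairing is omitted, and the point clouds are synthetic stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Align source onto target with point-to-point ICP (Kabsch update)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)              # closest target point per source point
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)     # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                    # best rotation, reflections excluded
        src = (src - mu_s) @ R.T + mu_t
    return src

target = np.random.rand(500, 3)               # coarse Kinect object (stand-in)
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + 0.05                  # high-resolution scan, misaligned
aligned = icp(source, target)
print("mean point error after ICP:", np.sqrt(((aligned - target) ** 2).sum(1)).mean())
```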

  15. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier.

    Science.gov (United States)

    Coustaty, M; Bertet, K; Visani, M; Ogier, J

    2011-08-01

    In this paper, we propose a new approach for symbol recognition using structural signatures and a Galois lattice as a classifier. The structural signatures are based on topological graphs computed from segments which are extracted from the symbol images by using an adapted Hough transform. These structural signatures, which can be seen as dynamic paths carrying high-level information, are robust toward various transformations. They are classified using a Galois lattice as a classifier. The performance of the proposed approach is evaluated on the GREC'03 symbol database, and the experimental results we obtain are encouraging.

  16. Geometric shapes inversion method of space targets by ISAR image segmentation

    Science.gov (United States)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic in the process of space target recognition. This paper proposes a method for the shape inversion of space targets based on component segmentation of ISAR images. The Radon transform, Hough transform, K-means clustering and triangulation are introduced into the ISAR image processing. Firstly, we use the Radon transform and edge detection to extract the space target's main-body axis and solar-panel axis from the ISAR image. Then the target's main body, solar panel, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the sizes of all structural components are computed. The effectiveness of this method is verified using simulation data of typical targets.

  17. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    Science.gov (United States)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  18. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    Science.gov (United States)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
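    For reference, the L1-regularized variational principle referred to above takes, schematically, the following form (notation assumed, not quoted from the abstract; 1/μ controls the trade-off between energy and sparsity):

```latex
% First N compressed modes: minimise total energy plus an L1 penalty,
% subject to orthonormality of the modes.
E = \min_{\psi_1,\dots,\psi_N} \sum_{i=1}^{N}
    \left( \frac{1}{\mu}\,\lVert \psi_i \rVert_1
    + \langle \psi_i, \hat{H}\,\psi_i \rangle \right),
\qquad \text{s.t. } \langle \psi_i, \psi_j \rangle = \delta_{ij}.
```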

  19. A Computer Vision Approach to Identify Einstein Rings and Arcs

    Science.gov (United States)

    Lee, Chien-Hsiu

    2017-03-01

    Einstein rings are rare gems among strong lensing phenomena; the ring images can be used to probe the underlying lens gravitational potential at all position angles, tightly constraining the lens mass profile. In addition, the magnified images also enable us to probe high-z galaxies with enhanced resolution and signal-to-noise ratios. However, only a handful of Einstein rings have been reported, either from serendipitous discoveries or from visual inspections of hundreds of thousands of massive galaxies or galaxy clusters. In the era of large sky surveys, an automated approach to identify ring patterns in the big data to come is in high demand. Here, we present an Einstein ring recognition approach based on computer vision techniques. The workhorse is the circle Hough transform, which recognises circular patterns or arcs in images. We propose a two-tier approach: first pre-select massive galaxies associated with multiple blue objects as possible lenses, then use the Hough transform to identify circular patterns. As a proof of concept, we apply our approach to SDSS, with high completeness, albeit with low purity. We also apply our approach to other lenses in the DES, HSC-SSP, and UltraVISTA surveys, illustrating the versatility of our approach.

  20. Efficient iris texture analysis method based on Gabor ordinal measures

    Science.gov (United States)

    Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri

    2017-07-01

    With the remarkably increasing interest directed to the security dimension, iris recognition stands as one of the most versatile techniques for biometric identification and authentication, mainly owing to every individual's unique iris texture. An efficient approach to the feature extraction process is proposed. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it includes the most significant regions of the iris texture. Second, the linear Hough transform is used for eyelid detection, while a median filter is applied for eyelash removal. Then, a technique combining the richness of Gabor features and the compactness of ordinal measures is implemented for feature extraction, so that a discriminative feature representation for every individual can be achieved. Subsequently, the modified Hamming distance is used for matching. The proposed procedure proves reliable compared to some state-of-the-art approaches, with recognition rates of 99.98%, 98.12%, and 95.02% on the CASIAV1.0, CASIAV3.0, and IIT Delhi V1 iris databases, respectively.
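
The two Hough stages of the segmentation front end can be sketched with OpenCV; all parameter values below are illustrative assumptions, and the Gabor/ordinal-measures feature stage is omitted.

```python
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.medianBlur(eye, 7)  # median filtering also damps eyelashes

# Circular Hough transform: locate the pupil/collarette boundary.
circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1,
                           minDist=smoothed.shape[0],  # expect one iris
                           param1=120, param2=25, minRadius=20, maxRadius=80)

# Linear Hough transform: detect eyelid edges as near-straight segments.
edges = cv2.Canny(smoothed, 50, 150)
eyelids = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                          minLineLength=40, maxLineGap=5)
```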

  1. A scalable multi-resolution spatio-temporal model for brain activation and connectivity in fMRI data

    KAUST Repository

    Castruccio, Stefano

    2018-01-23

    Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. Modeling spatial dependence of imaging data at different spatial scales is one of the main challenges of contemporary neuroimaging, and it could allow for accurate testing for significance in neural activity. The high dimensionality of this type of data (on the order of hundreds of thousands of voxels) poses serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs)—coarser or larger spatial units—rather than among voxels. However, ignoring spatial dependence at different scales could drastically reduce our ability to detect activation patterns in the brain and hence produce misleading results. We introduce a multi-resolution spatio-temporal model and a computationally efficient methodology to estimate cognitive control related activation and whole-brain connectivity. The proposed model allows for testing voxel-specific activation while accounting for non-stationary local spatial dependence within anatomically defined ROIs, as well as regional dependence (between-ROIs). The model is used in a motor-task fMRI study to investigate brain activation and connectivity patterns aimed at identifying associations between these patterns and regaining motor functionality following a stroke.

  2. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Lee, Seongjun; Kim, Jonghoon

    2015-01-01

    Sophisticated data of the experimental DCV (discharging/charging voltage) of a lithium-ion battery is required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when sensing noisy DCV signals, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noise-riding DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, A_n and D_n). Specifically, signal processing of the high-frequency component D_n, which focuses on a short time interval, is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detailed coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data of the experimental DCV is required for high-accuracy SOC estimation. • A DWT (discrete wavelet transform)-based denoising technique is newly investigated. • The three steps for denoising a noisy DCV measurement are implemented. • Experimental results indicate the robustness of the proposed work for reliable SOC estimation.
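
The three-step chain (decompose, hard-threshold, reconstruct) maps directly onto PyWavelets; the wavelet choice, decomposition level, and the universal threshold below are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import pywt

def denoise_dcv(dcv, wavelet="db4", level=3):
    # 1) MRA: split the noisy DCV signal into A_n and D_1..D_n sub-bands.
    coeffs = pywt.wavedec(dcv, wavelet, level=level)
    # 2) Hard-threshold the detail coefficients; the universal threshold
    #    with a MAD noise estimate is an assumption for this sketch.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(dcv)))
    coeffs[1:] = [pywt.threshold(d, thr, mode="hard") for d in coeffs[1:]]
    # 3) IDWT: rebuild the de-noised voltage for the EKF-based SOC estimator.
    return pywt.waverec(coeffs, wavelet)
```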

  3. Camera-based recognition of speed limits on German roads (Kamera-basierte Erkennung von Geschwindigkeitsbeschränkungen auf deutschen Straßen)

    Science.gov (United States)

    Nienhüser, Dennis; Ziegenmeyer, Marco; Gumpp, Thomas; Scholl, Kay-Ulrich; Zöllner, J. Marius; Dillmann, Rüdiger

    Driver assistance systems in industrial use face high demands on reliability and robustness. This work describes the combination of robust methods such as the Hough transform and support vector machines into an overall system for the recognition of speed limits. It uses a color video camera as its sensor. Evaluation on test data confirms the reliability of the system through the measured high correct-classification rate combined with a low number of false alarms.

  4. TecLines: A MATLAB-Based Toolbox for Tectonic Lineament Analysis from Satellite Images and DEMs, Part 2: Line Segments Linking and Merging

    OpenAIRE

    Mehdi Rahnama; Richard Gloaguen

    2014-01-01

    Extraction and interpretation of tectonic lineaments is one of the routines for mapping large areas using remote sensing data. However, this is a subjective and time-consuming process. It is difficult to choose an optimal lineament extraction method in order to reduce subjectivity and obtain vectors similar to what an analyst would manually extract. The objective of this study is the implementation, evaluation and comparison of Hough transform, segment merging and polynomial fitting methods t...

  5. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Daehyeok Kim

    2017-06-01

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (with full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), and the frame rate is 14 frames/s.

  6. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    Science.gov (United States)

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS that supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (with full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), and the frame rate is 14 frames/s.

  7. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    Science.gov (United States)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high-resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We show this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady-state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges within the GeoNet framework developed by Passalacqua et al. [2010] and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power-law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes.

  8. Stratifying FIA Ground Plots Using A 3-Year Old MRLC Forest Cover Map and Current TM Derived Variables Selected By "Decision Tree" Classification

    Science.gov (United States)

    Michael Hoppus; Stan Arner; Andrew Lister

    2001-01-01

    A reduction in variance for estimates of forest area and volume in the state of Connecticut was accomplished by stratifying FIA ground plots using raw, transformed and classified Landsat Thematic Mapper (TM) imagery. A US Geological Survey (USGS) Multi-Resolution Landscape Characterization (MRLC) vegetation cover map for Connecticut was used to produce a forest/non-forest stratification.

  9. Road Extraction and Car Detection from Aerial Image Using Intensity and Color

    Directory of Open Access Journals (Sweden)

    Vahid Ghods

    2011-07-01

    In this paper a new automatic approach to road extraction from aerial images is proposed. The initialization strategies are based on intensity, color, and the Hough transform. After road element extraction, chain codes are calculated. In the last step, cars on the roads are detected using shadows. We implemented our method on 25 images from the "Google Earth" database. The experiments show an increase in both the completeness and the quality indexes for the extracted roads.

  10. Methods for cloud segmentation in satellite images (Métodos de segmentación de nubes en imágenes satelitales)

    Directory of Open Access Journals (Sweden)

    Diego Fernando Rocha Arango

    2013-05-01

    The goal of this paper is to show the results of implementing two cloud segmentation techniques on GOES satellite images. The first technique is based on regions and gray-level thresholding. The second technique is based on boundaries and the Hough transform. Finally, the results of our experiments (with the two segmentation methods) are compared with results from segmented imaging obtained from specialized software packages, where spectral band separation is applied using the information of interest.

  11. Identification of Low Momentum Electrons in The Time Projection Chamber of The ALICE Detector.

    CERN Document Server

    Mwewa, Chilufya

    2013-01-01

    This paper presents results obtained in a study to identify noisy low-momentum electrons in the Time Projection Chamber (TPC) of the ALICE detector. To do this, the circle Hough transform is employed using the OpenCV library in Python. The method is tested on simulated tracks in the transverse view of the TPC. It is found that the noisy low-momentum electrons can be identified and their exact positions in the transverse plane can be obtained.

  12. Proceedings of the Image Understanding Workshop (18th) Held in Cambridge, Massachusetts on 6-8 April 1988. Volume 1

    Science.gov (United States)

    1988-04-01


  13. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Luz, R. J. Barreira; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D' Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; Mauro, G. De; Neto, J. R. T. de Mello; Mitri, I. De; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Giulio, C. Di; Matteo, A. Di; Castro, M. L. Díaz; Diogo, F.; Dobrigkeit, C.; D' Olivo, J. C.; Anjos, R. C. dos; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Berisso, M. Gómez; Vitale, P. F. Gómez; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Mezek, G. Kukec; Kunka, N.; Awad, A. Kuotb; LaHurd, D.; Lauscher, M.; Legumina, R.; de Oliveira, M. A. Leigui; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; Casado, A. López; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Bravo, O. Martínez; Meza, J. J. Masías; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Selmi-Dei, D. 
Pakk; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; de Carvalho, W. Rodrigues; Fernandez, G. Rodriguez; Rojo, J. Rodriguez; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Durán, M. Suarez; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Peixoto, C. J. Todero; Tomankova, L.; Tomé, B.; Elipe, G. Torralba; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Galicia, J. F. Valdés; Valiño, I.; Valore, L.; Aar, G. van; Bodegom, P. van; Berg, A. M. van den; Vliet, A. van; Varela, E.; Cárdenas, B. Vargas; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Quispe, I. D. Vergara; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary, since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication of a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through thorough scans of the angular scales.

  14. Multi-resolution anisotropy studies of ultrahigh-energy cosmic rays detected at the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P.; Andringa, S. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL (Portugal); Aglietta, M. [Osservatorio Astrofisico di Torino (INAF), Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3 (France); Albuquerque, I.F.M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET) (Argentina); Almela, A.; Andrada, B. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela (Spain); Anastasi, G.A. [Gran Sasso Science Institute (INFN), L' Aquila (Italy); Anchordoqui, L., E-mail: auger_spokespersons@fnal.gov [Department of Physics and Astronomy, Lehman College, City University of New York (United States); and others

    2017-06-01

    We report a multi-resolution search for anisotropies in the arrival directions of cosmic rays detected at the Pierre Auger Observatory with local zenith angles up to 80° and energies in excess of 4 EeV (4 × 10¹⁸ eV). This search is conducted by measuring the angular power spectrum and performing a needlet wavelet analysis in two independent energy ranges. Both analyses are complementary since the angular power spectrum achieves a better performance in identifying large-scale patterns while the needlet wavelet analysis, considering the parameters used in this work, presents a higher efficiency in detecting smaller-scale anisotropies, potentially providing directional information on any observed anisotropies. No deviation from isotropy is observed on any angular scale in the energy range between 4 and 8 EeV. Above 8 EeV, an indication for a dipole moment is captured, while no other deviation from isotropy is observed for moments beyond the dipole one. The corresponding p-values, obtained after accounting for searches blindly performed at several angular scales, are 1.3 × 10⁻⁵ in the case of the angular power spectrum, and 2.5 × 10⁻³ in the case of the needlet analysis. While these results are consistent with previous reports making use of the same data set, they provide extensions of the previous works through the thorough scans of the angular scales.

  15. Multiresolution wavelet-ANN model for significant wave height forecasting.

    Digital Repository Service at National Institute of Oceanography (India)

    Deka, P.C.; Mandal, S.; Prahlada, R.

    Hybrid wavelet artificial neural network (WLNN) has been applied in the present study to forecast significant wave heights (Hs). Here Discrete Wavelet Transformation is used to preprocess the time series data (Hs) prior to Artificial Neural Network...

  16. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    Directory of Open Access Journals (Sweden)

    Il Jae Lee

    2009-09-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both lug and plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination change, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor.
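
The projected-line detection chain (threshold, thinning, Hough transform) can be sketched as below; it uses scikit-image's skeletonize for the thinning step and assumed parameter values throughout, and it leaves out the separated-Hough refinement.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

frame = cv2.imread("laser_view.png", cv2.IMREAD_GRAYSCALE)

# 1) Threshold: keep bright laser-stripe pixels (value is an assumption).
_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)

# 2) Thinning: reduce each stripe to a one-pixel-wide skeleton.
skeleton = skeletonize(binary > 0).astype(np.uint8) * 255

# 3) Hough transform: fit straight segments to the skeleton pixels.
lines = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=10)
```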

  17. Using discrete wavelet transform features to discriminate between noise and phases in seismic waveforms

    Science.gov (United States)

    Forrest, R.; Ray, J.; Hansen, C. W.

    2017-12-01

    Currently, simple polarization metrics such as the horizontal-to-vertical ratio are used to discriminate between noise and various phases in three-component seismic waveform data collected at regional distances. Accurately establishing the identity and arrival of these waves in adverse signal-to-noise environments is helpful in detecting and locating the seismic events. In this work, we explore the use of multiresolution decompositions to discriminate between noise and event arrivals. A segment of the waveform lying inside a time-window that spans the coda of an arrival is subjected to a discrete wavelet decomposition. Multi-resolution classification features as well as statistical tests are derived from these wavelet decomposition quantities to quantify their discriminating power. Furthermore, we move to streaming data and address the problem of false positives by introducing ensembles of classifiers. We describe in detail results of these methods tuned from data obtained from Coronel Fontana, Argentina (CFAA), as well as Stephens Creek, Australia (STKA). Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
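
A minimal version of the feature-extraction step might look as follows; the wavelet family, decomposition depth, and the energy/entropy summary statistics are assumptions for illustration, not the exact discriminants used in the study.

```python
import numpy as np
import pywt

def wavelet_features(window, wavelet="db4", level=5):
    """Per-sub-band features for a windowed waveform segment spanning a coda."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    feats = []
    for c in coeffs:  # approximation + detail sub-bands
        energy = float(np.sum(c ** 2))
        p = c ** 2 / (energy + 1e-12)
        entropy = float(-np.sum(p * np.log(p + 1e-12)))
        feats.extend([energy, entropy])
    return np.asarray(feats)  # feed to a classifier (or an ensemble of them)
```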

  18. Design and Implementation of Pointer-Type Multi Meters Intelligent Recognition Device Based on ARM Platform

    Science.gov (United States)

    Cui, Yang; Luo, Wang; Fan, Qiang; Peng, Qiwei; Cai, Yiting; Yao, Yiyang; Xu, Changfu

    2018-01-01

    This paper adopts a low-power ARM HiSilicon mobile processing platform and an OV4689 camera, combined with a new skeleton extraction algorithm based on the distance transform and an improved Hough algorithm, for real-time reading of multiple meters. The design and implementation of the device were completed. Experimental results show that the average measurement error was 0.005 MPa and the average reading time was 5 s. The device had good stability and high accuracy, meeting the needs of practical application.

  19. A novel iris localization algorithm using correlation filtering

    Science.gov (United States)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  20. Use of high resolution satellite images for tracking of changes in the lineament structure, caused by earthquakes

    OpenAIRE

    Arellano-Baeza, A. A.; Garcia, R. V.; Trejo-Soto, M.

    2007-01-01

    Over the last decades strong efforts have been made to apply new spaceborne technologies to the study and possible forecast of strong earthquakes. In this study we use ASTER/TERRA multispectral satellite images for detection and analysis of changes in the system of lineaments previous to a strong earthquake. A lineament is a straight or somewhat curved feature in an image, which can be detected by special processing of images based on directional filtering and/or the Hough transform. ...

  1. Evaluation of FPGA coprocessors for accelerating the execution of track reconstruction algorithms in the ATLAS LVL2 trigger (Evaluierung der FPGA-Koprozessoren zur Beschleunigung der Ausführung von Spurrekonstruktionsalgorithmen im ATLAS LVL2-Trigger)

    CERN Document Server

    Khomich, Andrei

    2006-01-01

    In the scope of this thesis, one of the possible approaches to accelerating tracking algorithms using hybrid FPGA/CPU systems has been investigated. The TRT LUT-Hough algorithm, one of the tracking algorithms for the ATLAS Level-2 trigger, is selected for this purpose. It is a Look-Up Table (LUT) based Hough transform algorithm for the Transition Radiation Tracker (TRT). The algorithm was created with B-physics tasks in mind: fast search for low-pT tracks in the entire TRT volume. Such a full subdetector scan requires a lot of computational power. A hybrid implementation of the algorithm (in which the most time-consuming part is accelerated by an FPGA co-processor and all other parts run on a general-purpose CPU) is integrated in the same software framework as a C++ implementation for comparison. Identical physics results are obtained for both the CPU and the hybrid implementations. Timing measurements show that the critical part, implemented in VHDL, runs on the FPGA co-processor ~4 ...

  2. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    AUTHOR|(CDS)2090481

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough tran...
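
The underlying idea, each tracker stub voting along a line in track-parameter space, can be illustrated with a toy software accumulator; the (phi0, q/pT) binning, the linearized track model, and all constants below are assumptions, and the real FPGA implementation is organized very differently.

```python
import numpy as np

def hough_tracks(stubs, n_phi=64, n_qpt=64, qpt_max=0.5, k=0.3):
    """Toy accumulator: each stub (r, phi) votes along phi0 = phi + k*r*q/pT."""
    acc = np.zeros((n_phi, n_qpt), dtype=np.int32)
    qpt = np.linspace(-qpt_max, qpt_max, n_qpt)  # candidate q/pT bins
    for r, phi in stubs:
        phi0 = (phi + k * r * qpt) % (2.0 * np.pi)  # linearised track model
        rows = (phi0 / (2.0 * np.pi) * n_phi).astype(int) % n_phi
        acc[rows, np.arange(n_qpt)] += 1  # one vote per q/pT column
    return acc  # accumulator maxima above threshold = track candidates
```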

  3. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving more and more concern. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases, which limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited and this algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can modify the wavelet decomposition series adaptively according to the required resolution of the local reconstructed area. Therefore the computation and the reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  4. Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration

    Directory of Open Access Journals (Sweden)

    Jose María Armingol

    2010-03-01

    There are increasing applications that require precise calibration of cameras to perform accurate measurements on objects located within images, and an automatic algorithm would reduce this time-consuming calibration procedure. The method proposed in this article uses a pattern similar to that of a chessboard, which is found automatically in each image, even when no information regarding the number of rows or columns is supplied to aid its detection. This is carried out by means of a combined analysis of two Hough transforms, image corners and invariant properties of the perspective transformation. Comparative analysis with more commonly used algorithms demonstrates the viability of the algorithm proposed as a valuable tool for camera calibration.

  5. Ga-doped ZnO thin film surface characterization by wavelet and fractal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Chenlei; Tang, Wu, E-mail: tang@uestc.edu.cn

    2016-02-28

    Graphical abstract: - Highlights: • Multi-resolution signal decomposition by wavelet transform is applied to Ga-doped ZnO thin films of various thicknesses. • Fractal properties of GZO thin films are investigated by the box-counting method. • Fractal dimension does not conform to the original RMS roughness. • Fractal dimension mainly depends on the underside diameter (grain size) and the distance between adjacent grains. - Abstract: The change in roughness of Ga-doped ZnO (GZO) thin films of various thicknesses, deposited by magnetron reactive sputtering on glass substrates at room temperature, was measured by atomic force microscopy (AFM). Multi-resolution signal decomposition based on the wavelet transform and fractal geometry was applied to the surface profiles to evaluate the roughness trend at the relevant frequency resolutions. The analysis gives a six-level decomposition, whose results change with deposition time and surface morphology. It is also found that the fractal dimension is closely connected to the underside diameter (grain size) and the distance between adjacent grains, which affect the rate of change of the surface; an increase in defects such as abrupt changes leads to a larger fractal dimension.
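
The box-counting estimate of fractal dimension mentioned above is easy to state in code; this sketch assumes a non-empty binary mask derived from the AFM height map (e.g. by thresholding at the mean height), which is an illustrative choice.

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a 2-D binary mask."""
    n = int(np.log2(min(mask.shape)))
    sizes = 2 ** np.arange(1, n)  # box side lengths in pixels
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    # Slope of log N(s) against log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```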

  6. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    Science.gov (United States)

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    …30-arc-second DTED® level 0, the USGS and the National Geospatial-Intelligence Agency (NGA) have collaborated to produce an enhanced replacement for GTOPO30, the Global Land One-km Base Elevation (GLOBE) model, and other comparable 30-arc-second-resolution global models, using the best available data. The new model is called the Global Multi-resolution Terrain Elevation Data 2010, or GMTED2010 for short. This suite of products at three different resolutions (approximately 1,000, 500, and 250 meters) is designed to support many applications directly by providing users with generic products (for example, maximum, minimum, and median elevations) that have been derived directly from the raw input data and that would not be available to the general user or would be very costly and time-consuming to produce for individual applications. The source of all the elevation data is captured in metadata for reference purposes. It is also hoped that as better data become available in the future, the GMTED2010 model will be updated.

  7. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep-learning neural network and support vector machine algorithms are applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.

  8. Automated Meteor Detection by All-Sky Digital Camera Systems

    Czech Academy of Sciences Publication Activity Database

    Suk, Tomáš; Šimberová, Stanislava

    2017-01-01

    Roč. 120, č. 3 (2017), s. 189-215 ISSN 0167-9295 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985815 ; RVO:67985556 Keywords : meteor detection * autonomous fireball observatories * fish-eye camera * Hough transformation Subject RIV: IN - Informatics, Computer Science; BN - Astronomy, Celestial Mechanics, Astrophysics (ASU-R) OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8); Astronomy (including astrophysics,space science) (ASU-R) Impact factor: 0.875, year: 2016

  9. The Bargmann transform and canonical transformations

    International Nuclear Information System (INIS)

    Villegas-Blas, Carlos

    2002-01-01

    This paper concerns a relationship between the kernel of the Bargmann transform and the corresponding canonical transformation. We study this fact for a Bargmann transform introduced by Thomas and Wassell [J. Math. Phys. 36, 5480-5505 (1995)], when the configuration space is the two-sphere S², and for a Bargmann transform that we introduce for the three-sphere S³. It is shown that the kernel of the Bargmann transform is a power series in a function which is a generating function of the corresponding canonical transformation (a classical analog of the Bargmann transform). We show in each case that our canonical transformation is a composition of two other canonical transformations involving the complex null quadric in C³ or C⁴. We also describe quantizations of those two other canonical transformations by dealing with spaces of holomorphic functions on the aforementioned null quadrics. Some of these quantizations have been studied by Bargmann and Todorov [J. Math. Phys. 18, 1141-1148 (1977)] and the other quantizations are related to the work of Guillemin [Integ. Eq. Operator Theory 7, 145-205 (1984)]. Since suitable infinite linear combinations of powers of the generating functions are coherent states for L²(S²) or L²(S³), we show finally that the studied Bargmann transforms are actually coherent state transforms.

  10. Tracking within Hadronic Showers in the CALICE SDHCAL prototype using a Hough Transform Technique

    Czech Academy of Sciences Publication Activity Database

    Deng, Z.; Wang, Y.; Yue, Q.; Cvach, Jaroslav; Janata, Milan; Kovalčuk, Michal; Kvasnička, Jiří; Polák, Ivo; Smolík, Jan; Vrba, Václav; Zálešák, Jaroslav; Zuklín, Josef

    2017-01-01

    Roč. 12, May (2017), s. 1-15, č. článku P05009. ISSN 1748-0221 Institutional support: RVO:68378271 Keywords : calorimeter methods * calorimeters * gaseous detectors Subject RIV: BF - Elementary Particles and High Energy Physics OBOR OECD: Particles and field physics Impact factor: 1.220, year: 2016

  11. Automatic 3D building reconstruction from airborne laser scanning and cadastral data using Hough transform

    DEFF Research Database (Denmark)

    Bodum, Lars; Overby, Jens; Kjems, Erik

    2004-01-01

    …degree of detail. However, it is possible to create virtual 3D models of buildings by processing these data. Roof polygons are generated using airborne laser scanning on a 1×1 meter grid and ground plans (footprints) extracted from technical feature maps. An effective algorithm is used for fixing ... might lead to multiple slightly differing planes. Such planes are detected and merged. Intersecting planes are identified, and a polygon mesh of the roof is constructed. Due to the low precision of the laser scanning, a rule-based postprocessing of the roof is applied before adding the walls.

  12. 4D-CT Lung registration using anatomy-based multi-level multi-resolution optical flow analysis and thin-plate splines.

    Science.gov (United States)

    Min, Yugang; Neylon, John; Shah, Amish; Meeks, Sanford; Lee, Percy; Kupelian, Patrick; Santhanam, Anand P

    2014-09-01

    The accuracy of 4D-CT registration is limited by inconsistent Hounsfield unit (HU) values in the 4D-CT data from one respiratory phase to another and by lower image contrast for lung substructures. This paper presents an optical flow and thin-plate spline (TPS)-based 4D-CT registration method to account for these limitations. The use of unified HU values on multiple anatomy levels (e.g., the lung contour, blood vessels, and parenchyma) accounts for registration errors caused by inconsistent landmark HU values. While 3D multi-resolution optical flow analysis registers each anatomical level, TPS is employed to propagate the results from one anatomical level to another, ultimately leading to the 4D-CT registration. The 4D-CT registration was validated using target registration error (TRE) and inverse consistency error (ICE) metrics, and a statistical image comparison using a Gamma criterion of 1% intensity difference in a 2 mm³ window range. Validation results showed that the proposed method was able to register CT lung datasets with TRE and ICE values <3 mm. In addition, the average number of voxels that failed the Gamma criterion was <3%, which supports the clinical applicability of the proposed registration mechanism. The proposed 4D-CT registration computes the volumetric lung deformations within clinically viable accuracy.
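
The TPS propagation step, carrying displacements from one anatomical level to the next, can be sketched with SciPy's RBFInterpolator (thin-plate-spline kernel); the point sets here are random stand-ins, not registration outputs, and the anatomical-level names are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Stand-ins: displacements solved on the lung-contour level by optical flow.
contour_pts = rng.uniform(0, 100, size=(200, 3))   # voxel coordinates
contour_disp = rng.normal(0, 2, size=(200, 3))     # displacement vectors

# A thin-plate-spline interpolant propagates the field to the next level.
tps = RBFInterpolator(contour_pts, contour_disp, kernel="thin_plate_spline")
vessel_pts = rng.uniform(0, 100, size=(1000, 3))   # blood-vessel voxels
vessel_init = tps(vessel_pts)  # initial guess for the next optical-flow pass
```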

  13. A DTM MULTI-RESOLUTION COMPRESSED MODEL FOR EFFICIENT DATA STORAGE AND NETWORK TRANSFER

    Directory of Open Access Journals (Sweden)

    L. Biagi

    2012-08-01

    In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased the measurement accuracy and, consequently, the quality of the derived information. At the same time, the smaller and smaller limitations on data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a larger number of instrumental observations. A significant example is the terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements for each square meter of land can be obtained. The availability of such a large quantity of observations is an essential requisite for an in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on a very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to be considered in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metric accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing, transmitting and rebuilding a DTM.

  14. A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

    Directory of Open Access Journals (Sweden)

    J. Ray

    2014-09-01

    The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find that (i) the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) images of lights at night can be used as a proxy for sub-selecting wavelets for such analysis, and (iii) implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
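
A compact way to see the proxy-based sub-selection is to threshold the wavelet coefficients of the nightlights image and keep only the matching slots in the emission parameterization; the Haar basis, decomposition level, and retention fraction below are assumptions for the sketch, not the paper's settings, and both images are assumed to be co-registered arrays of the same shape.

```python
import numpy as np
import pywt

def sparse_parameterization(emissions, nightlights, wavelet="haar",
                            level=3, keep=0.05):
    """Keep emission wavelet coefficients where the proxy is strong."""
    coeffs, slices = pywt.coeffs_to_array(
        pywt.wavedec2(emissions, wavelet, level=level))
    proxy, _ = pywt.coeffs_to_array(
        pywt.wavedec2(nightlights, wavelet, level=level))
    cutoff = np.quantile(np.abs(proxy), 1.0 - keep)  # retain top `keep` slots
    coeffs[np.abs(proxy) < cutoff] = 0.0
    return pywt.array_to_coeffs(coeffs, slices, output_format="wavedec2")
```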

  15. a Web-Based Interactive Tool for Multi-Resolution 3d Models of a Maya Archaeological Site

    Science.gov (United States)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship of architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables web-based visualisation and queries of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  16. Circular contour retrieval in real-world conditions by higher order statistics and an alternating-least squares algorithm

    Science.gov (United States)

    Jiang, Haiping; Marot, Julien; Fossati, Caroline; Bourennane, Salah

    2011-12-01

    In real-world conditions, contours are most often blurred in digital images because of acquisition conditions such as movement, the light transmission environment, and defocus. Among image segmentation methods, the Hough transform requires a computational load which increases with the number of noise pixels, level set methods also require a high computational load, and some other methods assume that the contours are one pixel wide. For the first time, we retrieve the characteristics of multiple, possibly concentric, blurred circles. We face a correlated noise environment, to get closer to real-world conditions. For this, we model a blurred circle by a few parameters (center coordinates, radius, and spread) which characterize its mean position and gray level variations. We derive the signal model which results from signal generation on a circular antenna. Linear antennas provide the center coordinates. To retrieve the circle radii, we adapt the second-order statistics TLS-ESPRIT method for a non-correlated noise environment, and propose a novel version of TLS-ESPRIT based on higher-order statistics for a correlated noise environment. Then, we derive a least-squares criterion and propose an alternating least-squares algorithm to retrieve simultaneously all spread values of concentric circles. Experiments performed on hand-made and real-world images show that the proposed methods outperform the Hough transform and a level set method dedicated to blurred contours in terms of computational load. Moreover, the proposed model and optimization method provide the information of the contour grey-level variations.

  17. Automated Extraction of the Archaeological Tops of Qanat Shafts from VHR Imagery in Google Earth

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2014-12-01

    Qanats in northern Xinjiang of China provide valuable information for agriculturists and anthropologists who seek a fundamental understanding of the distribution of qanat water supply systems with regard to water resource utilization, the development of oasis agriculture, and eventually climate change. Only the tops of qanat shafts (TQSs), indicating the course of the qanats, can be observed from space, and their circular archaeological traces can also be seen in very high resolution imagery in Google Earth. The small size of the TQSs, vast search regions, and degraded features make manually extracting them from remote sensing images difficult and costly. This paper proposes an automated TQS extraction method that applies mathematical morphological processing before an edge-detection module is used in the circular Hough transform approach. The accuracy assessment criteria for the proposed method include: (i) extraction percentage (E = 95.9%), branch factor (B = 0) and quality percentage (Q = 95.9%) in Site 1; and (ii) extraction percentage (E = 83.4%), branch factor (B = 0.058) and quality percentage (Q = 79.5%) in Site 2. Compared with the standard circular Hough transform, the quality percentages (Q) of our proposed method improved to 95.9% and 79.5% from 86.3% and 65.8% in test sites 1 and 2, respectively. The results demonstrate that wide-area discovery and mapping can be performed much more effectively with our proposed method.
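
The pre-processing idea, morphological cleanup before circular-Hough voting, can be sketched as below; the kernel size and Hough parameters are placeholders, not the tuned values from the paper.

```python
import cv2
import numpy as np

tile = cv2.imread("qanat_tile.png", cv2.IMREAD_GRAYSCALE)

# Morphological opening suppresses small bright clutter before the ring-shaped
# shaft mounds are searched for (structuring element size is an assumption).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(tile, cv2.MORPH_OPEN, kernel)

# Circular Hough transform over the cleaned tile (edges are found internally).
tqs = cv2.HoughCircles(cleaned, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                       param1=90, param2=20, minRadius=3, maxRadius=15)
print(0 if tqs is None else len(tqs[0]), "shaft-top candidates")
```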

  18. Backlund transformations as canonical transformations

    International Nuclear Information System (INIS)

    Villani, A.; Zimerman, A.H.

    1977-01-01

    Toda and Wadati, as well as Kodama and Wadati, have shown that the Backlund transformations for the exponential lattice equation, the sine-Gordon equation, the K-dV (Korteweg-de Vries) equation and the modified K-dV equation are canonical transformations. It is shown that the Backlund transformations for the Boussinesq equation, for a generalized K-dV equation, for a model equation for shallow water waves and for the nonlinear Schroedinger equation are also canonical transformations.

  19. AUTOMATIC GLOBAL REGISTRATION BETWEEN AIRBORNE LIDAR DATA AND REMOTE SENSING IMAGE BASED ON STRAIGHT LINE FEATURES

    Directory of Open Access Journals (Sweden)

    Z. Q. Liu

    2018-04-01

    An automatic global registration approach for point clouds and remote sensing images based on straight-line features is proposed, which is insensitive to rotation and scale transformations. First, building ridge lines and contour lines in the point clouds are automatically detected as registration primitives by integrating region growing and topology identification. Second, the collinearity condition equation is selected as the registration transformation function, based on a rotation matrix described by a unit quaternion. The similarity measure is established according to the distance between corresponding straight-line features from the point clouds and the image in the same reference coordinate system. Finally, an iterative Hough transform is adopted to simultaneously estimate the parameters and obtain correspondences between registration primitives. Experimental results prove that the proposed method is valid and that the spectral information is useful for subsequent classification processing.

  20. Shoreline change after 12 years of tsunami in Banda Aceh, Indonesia: a multi-resolution, multi-temporal satellite data and GIS approach

    Science.gov (United States)

    Sugianto, S.; Heriansyah; Darusman; Rusdi, M.; Karim, A.

    2018-04-01

    The Indian Ocean tsunami of 26 December 2004 caused severe damage to some shorelines in Banda Aceh City, Indonesia. The impact can be traced back using remote sensing data combined with GIS. The approach incorporates image processing to analyze the extent of shoreline change with multi-temporal data 12 years after the tsunami. This study uses multi-resolution, multi-temporal QuickBird and IKONOS satellite images to demarcate the Banda Aceh shoreline before and after the tsunami. The research demonstrates a significant change to the shoreline in the form of abrasion between 2004 and 2005, ranging from a few meters to hundreds of meters. Between 2004 and 2011 the shoreline did not return to its pre-tsunami state, which is considered a post-tsunami impact; the abrasion ranges from 18.3 to 194.93 meters. Further, the change over 2009-2011 shows slow change of the Banda Aceh shoreline, considered unrelated to the tsunami, e.g., abrasion caused by ocean waves eroding the coast and, in specific areas, accretion caused by sediment carried by river flow into the sea near the shoreline of the study area.

  1. New Resolution Strategy for Multi-scale Reaction Waves using Time Operator Splitting and Space Adaptive Multiresolution: Application to Human Ischemic Stroke*

    Directory of Open Access Journals (Sweden)

    Louvet Violaine

    2011-12-01

    Full Text Available We tackle the numerical simulation of reaction-diffusion equations modeling multi-scale reaction waves. This type of problem induces peculiar difficulties and potentially large stiffness, which stem from the broad spectrum of temporal scales in the nonlinear chemical source term as well as from the presence of large spatial gradients in the reactive fronts, which are spatially very localized. A new resolution strategy was recently introduced that combines an efficient time operator splitting with high-order dedicated time integration methods and space-adaptive multiresolution. Based on recent theoretical studies in numerical analysis, such a strategy leads to a splitting time step which is restricted neither by the fastest scales in the source term nor by stability limits related to the diffusion problem, but only by the physics of the phenomenon. In this paper, the efficiency of the method is evaluated through 2D and 3D numerical simulations of a human ischemic stroke model, conducted on a simplified brain geometry, for which a simple parallelization strategy for shared memory architectures was implemented in order to reduce computing costs related to “detailed chemistry” features of the model.
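
    To make the splitting idea concrete, below is a minimal Strang-splitting sketch for a scalar reaction-diffusion equation u_t = D*u_xx + f(u) on a periodic 1D grid, in NumPy. The logistic reaction f(u) = u(1 - u), the exact spectral diffusion step, and all constants are simple stand-ins; the paper couples the splitting with dedicated stiff integrators and space-adaptive multiresolution, neither of which is shown here.

```python
# Strang splitting: half reaction step, full diffusion step, half reaction.
import numpy as np

n, L, D, dt = 256, 1.0, 1e-3, 1e-3
x = np.linspace(0, L, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)            # localized initial front
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers

def react(u, h):
    # Exact solution of u' = u (1 - u) over a step h (logistic growth).
    return u / (u + (1.0 - u) * np.exp(-h))

for _ in range(1000):
    u = react(u, dt / 2)                                          # R(dt/2)
    u = np.fft.ifft(np.exp(-D * k**2 * dt) * np.fft.fft(u)).real  # D(dt)
    u = react(u, dt / 2)                                          # R(dt/2)
```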

  2. Artificial intelligence for networks recognition in remote sensing images

    Science.gov (United States)

    Gilliot, Jean-Marc; Amat, Jean-Louis

    1993-12-01

    We describe here a knowledge-based system, NEXSYS (Network EXtraction SYStem), which was designed for the recognition of communication networks in SPOT satellite images. NEXSYS is a frame-based system and uses a cooperative and distributed structure based on a blackboard architecture. Communication networks in SPOT images are composed of thin linear segments. Segments are extracted using mathematical morphology and a Hough transform. An intermediate image representation composed of geometric primitives is obtained. An expert module is then able to process the segments at the symbolic level, attempting to recognize networks.
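
    A rough sketch of the segment-extraction stage, assuming OpenCV: a Canny edge detector is used here as a stand-in for the morphological extraction, and the probabilistic Hough transform returns line segments that would serve as the geometric primitives passed to the symbolic expert module. File name and thresholds are illustrative.

```python
# Extract thin linear segments as (x1, y1, x2, y2) geometric primitives.
import cv2
import numpy as np

band = cv2.imread("spot_band.png", cv2.IMREAD_GRAYSCALE)  # assumed input
edges = cv2.Canny(band, 50, 150)  # stand-in for morphological extraction

segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=20, maxLineGap=5)
```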

  3. Use of image-processing tools for texture analysis of high-energy X-ray synchrotron data

    DEFF Research Database (Denmark)

    Fisker, Rune; Poulsen, Henning Friis; Schou, Jørgen

    1998-01-01

    , it is found that a full alignment of the two-dimensional detector used in many cases is impractical and that data-sets are often partially subject to geometric restrictions. Estimating the parameters of the traces of the Debye-Scherrer cones on the detector therefore becomes a concern. Moreover...... on a combination of a circular Hough transform and nonlinear least-squares fitting. Using the estimated ellipses the background is subtracted and the intensity along the Debye-Scherrer cones is integrated by a combined fit of the local diffraction pattern. The corresponding algorithms are presented together...

  4. Distributed Data Logging and Intelligent Control Strategies for a Scaled Autonomous Vehicle

    OpenAIRE

    Tilman Happek; Uwe Lang; Torben Bockmeier; Dimitrji Neubauer; Alexander Kuznietsov

    2016-01-01

    In this paper we present an autonomous car with distributed data processing. The car is controlled by a multitude of independent sensors. For lane detection, a camera is used, which detects the lane markings with a Hough transform. Once the camera detects these, one of them is computed as the lane for the car to follow. This lane is verified by the other sensors of the car. These sensors check the route for obstructions, or allow the car to scan a parking space and to park on the roadside if t...

  5. MADNESS applied to density functional theory in chemistry and nuclear physics

    International Nuclear Information System (INIS)

    Fann, G I; Harrison, R J; Beylkin, G; Jia, J; Hartman-Baker, R; Shelton, W A; Sugiki, S

    2007-01-01

    We describe some recent mathematical results in constructing computational methods that lead to the development of fast and accurate multiresolution numerical methods for solving quantum chemistry and nuclear physics problems based on Density Functional Theory (DFT). Using low separation rank representations of functions and operators in conjunction with representations in multiwavelet bases, we developed a multiscale solution method for integral and differential equations and integral transforms. The Poisson equation, the Schroedinger equation, and the projector on the divergence-free functions provide important examples with a wide range of applications in computational chemistry, nuclear physics, computational electromagnetics and fluid dynamics. We have implemented this approach along with adaptive representations of operators and functions in the multiwavelet basis and low separation rank (LSR) approximation of operators and functions. These methods have been realized and implemented in a software package called Multiresolution Adaptive Numerical Evaluation for Scientific Simulation (MADNESS)

  6. Recognition of power quality events by using multiwavelet-based neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Kaewarsa, Suriya; Attakitmongcol, Kitti; Kulworawanichpong, Thanatchai [School of Electrical Engineering, Suranaree University of Technology, 111 University Avenue, Muang District, Nakhon Ratchasima 30000 (Thailand)

    2008-05-15

    Recognition of power quality events by analyzing voltage and current waveform disturbances is a very important task for power system monitoring. This paper presents a novel approach to the recognition of power quality disturbances using the multiwavelet transform and neural networks. The proposed method employs the multiwavelet transform with multiresolution signal decomposition techniques, working together with multiple neural networks that use a learning vector quantization network as a powerful classifier. Tests on various transient events, such as voltage sag, swell, interruption, notching, impulsive transients, and harmonic distortion, show that the classifier can detect and classify different power quality signal types efficiently. (author)
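
    As a hedged illustration of the multiresolution front end, the sketch below uses PyWavelets with an ordinary scalar wavelet (db4) in place of the paper's multiwavelet, and computes per-band energies of a synthetic voltage-sag signal as a plausible feature vector for a downstream classifier; the sampling rate and wavelet choice are assumptions.

```python
# Multiresolution decomposition of a disturbed voltage waveform.
import numpy as np
import pywt

fs = 3200                                 # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1.0 / fs)
v = np.sin(2 * np.pi * 50 * t)            # 50 Hz fundamental
v[200:400] *= 0.6                         # synthetic voltage sag

coeffs = pywt.wavedec(v, "db4", level=5)  # [cA5, cD5, cD4, ..., cD1]
features = [float(np.sum(c ** 2)) for c in coeffs]  # per-band energies
```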

  7. Signal and image representation in combined spaces

    CERN Document Server

    Zeevi, Yehoshua; Chui, Charles K

    1997-01-01

    This volume explains how the recent advances in wavelet analysis provide new means for multiresolution analysis and describes its wide array of powerful tools. The book covers variations of the windowed Fourier transform, constructions of special waveforms suitable for specific tasks, the use of redundant representations in reconstruction and enhancement, applications of efficient numerical compression as a tool for fast numerical analysis, and approximation properties of various waveforms in different contexts.

  8. Multi-slice ultrasound image calibration of an intelligent skin-marker for soft tissue artefact compensation.

    Science.gov (United States)

    Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N

    2017-09-06

    In this paper, a novel multi-slice ultrasound (US) image calibration of an intelligent skin-marker used for soft tissue artefact compensation is proposed to align and orient image slices in an exact H-shaped pattern. Multi-slice calibration is complex; in the proposed method, however, a phantom-based visual alignment followed by transform-parameter estimation greatly reduces the complexity while providing sufficient accuracy. In this approach, the Hough transform (HT) is used to further enhance the image features which originate from the feature-enhancing elements integrated into the physical phantom model, thus reducing feature detection uncertainty. In this framework, slice-by-slice image alignment and calibration are carried out, which makes manual operation easy and convenient.

  9. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    Directory of Open Access Journals (Sweden)

    Chih-Lung Lin

    2015-12-01

    Full Text Available In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared to other methods.
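
    A hedged sketch of step (2), assuming PyWavelets: both registered ROIs are decomposed with a 2D wavelet transform and their coefficients merged band by band. A max-absolute rule is used here as a simple stand-in for the paper's hybrid fusing rule, and the input arrays are placeholders.

```python
# Wavelet-domain fusion of two registered images, band by band.
import numpy as np
import pywt

def fuse(a, b):
    # Keep whichever coefficient has the larger magnitude (stand-in rule).
    return np.where(np.abs(a) >= np.abs(b), a, b)

palm = np.random.rand(128, 128)   # placeholder for the palmprint ROI
vein = np.random.rand(128, 128)   # placeholder for the vein-pattern ROI

cp = pywt.wavedec2(palm, "haar", level=2)
cv_ = pywt.wavedec2(vein, "haar", level=2)

fused = [fuse(cp[0], cv_[0])]     # approximation band
for (ph, pv, pd), (vh, vv, vd) in zip(cp[1:], cv_[1:]):
    fused.append((fuse(ph, vh), fuse(pv, vv), fuse(pd, vd)))

fused_img = pywt.waverec2(fused, "haar")
```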

  10. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    Science.gov (United States)

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. This demonstrates the validity and excellent performance of our proposed method compared to other methods.

  11. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    Science.gov (United States)

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. This demonstrates the validity and excellent performance of our proposed method compared to other methods. PMID:26703596

  12. Transformer Protection Using the Wavelet Transform

    OpenAIRE

    ÖZGÖNENEL, Okan; ÖNBİLGİN, Güven; KOCAMAN, Çağrı

    2014-01-01

    This paper introduces a novel power transformer protection algorithm. Power system signals such as current and voltage have traditionally been analysed by the Fast Fourier Transform. This paper aims to prove that the Wavelet Transform is a reliable and computationally efficient tool for distinguishing between inrush currents and fault currents. The simulated results presented clearly show that the proposed technique for power transformer protection facilitates the a...

  13. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Noise, optical distortions, and binning were applied to the patterns prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size were indexed with a 0.8° mean orientation error when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°. The same patterns, but 60 × 60 pixels in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with 0.2° accuracy.

  14. The DATCON system of the Belle II experiment. Tracking and data reduction

    Energy Technology Data Exchange (ETDEWEB)

    Wessel, Christian; Dingfelder, Jochen; Marinas, Carlos; Deschamps, Bruno [Universitaet Bonn (Germany). Physikalisches Institut

    2016-07-01

    The SuperKEKB e{sup +}e{sup -} accelerator at KEK in Japan will have a luminosity a factor of 40 higher than that of its predecessor KEKB. The Belle II detector at SuperKEKB will contain a two-layer pixel detector at radii of 1.421 and 2.179 cm from the interaction point, based on the DEPFET (DEpleted P-channel Field Effect Transistor) technology. It is surrounded by four layers of strip detectors. Due to the high collision rate, the data rate of the pixel detector needs to be drastically reduced by an online data reduction system. The DATCON (Data Acquisition Tracking and Concentrator Online Node) system performs track reconstruction in the SVD (Silicon Vertex Detector) and extrapolates to the PXD (PiXel Detector) to calculate Regions of Interest (ROIs) and keep only the hits inside them. The track reconstruction algorithm is based on a Hough transform, which reduces track finding to finding intersection points in the Hough parameter space. In this talk the employed algorithm for fast online track reconstruction on FPGA, the ROI finding, and the performance of the data reduction are presented.
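
    Conceptually, Hough-based track finding lets every detector hit vote along a curve in parameter space, and accumulator maxima mark track candidates. The toy NumPy sketch below uses the standard (theta, rho) straight-line parameterization with invented hit coordinates; the actual DATCON algorithm and its FPGA implementation differ.

```python
# Each hit (x, y) votes along rho = x cos(theta) + y sin(theta).
import numpy as np

hits = np.array([[1.4, 0.9], [2.2, 1.5], [3.9, 2.6]])  # toy hits, cm

thetas = np.linspace(0, np.pi, 180, endpoint=False)
rho_max, n_rho = 5.0, 200
acc = np.zeros((len(thetas), n_rho), dtype=int)        # Hough accumulator

for x, y in hits:
    rho = x * np.cos(thetas) + y * np.sin(thetas)      # voting curve
    idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
    ok = (idx >= 0) & (idx < n_rho)
    acc[np.arange(len(thetas))[ok], idx[ok]] += 1

i, j = np.unravel_index(acc.argmax(), acc.shape)       # best track candidate
```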

  15. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    Science.gov (United States)

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coluston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  16. A 4.5 km resolution Arctic Ocean simulation with the global multi-resolution model FESOM 1.4

    Science.gov (United States)

    Wang, Qiang; Wekerle, Claudia; Danilov, Sergey; Wang, Xuezhu; Jung, Thomas

    2018-04-01

    In the framework of developing a global modeling system which can facilitate modeling studies on Arctic Ocean and high- to midlatitude linkage, we evaluate the Arctic Ocean simulated by the multi-resolution Finite Element Sea ice-Ocean Model (FESOM). To explore the value of using high horizontal resolution for Arctic Ocean modeling, we use two global meshes differing in the horizontal resolution only in the Arctic Ocean (24 km vs. 4.5 km). The high resolution significantly improves the model's representation of the Arctic Ocean. The most pronounced improvement is in the Arctic intermediate layer, in terms of both Atlantic Water (AW) mean state and variability. The deepening and thickening bias of the AW layer, a common issue found in coarse-resolution simulations, is significantly alleviated by using higher resolution. The topographic steering of the AW is stronger and the seasonal and interannual temperature variability along the ocean bottom topography is enhanced in the high-resolution simulation. The high resolution also improves the ocean surface circulation, mainly through a better representation of the narrow straits in the Canadian Arctic Archipelago (CAA). The representation of CAA throughflow not only influences the release of water masses through the other gateways but also the circulation pathways inside the Arctic Ocean. However, the mean state and variability of Arctic freshwater content and the variability of freshwater transport through the Arctic gateways appear not to be very sensitive to the increase in resolution employed here. By highlighting the issues that are independent of model resolution, we note that other efforts, including the improvement of parameterizations, are still required.

  17. A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    H. Ju

    2012-07-01

    Full Text Available Bridge areas present difficulties for orthophoto generation, and to avoid “collapsed” bridges in the orthoimage, operator assistance is required to create the precise DBM (Digital Bridge Model), which is subsequently used for the orthoimage generation. In this paper, a new approach to DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed. No precise exterior orientation of the aerial image is required for the DBM generation. First, a coarse DBM is produced from LiDAR data. Then, a robust co-registration between LiDAR intensity and the aerial image using the orientation constraint is performed. The coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris corners, PDF (Probability Density Function) feature descriptor mean-shift matching, and RANSAC (RANdom Sample Consensus) as main components. After that, the bridge ROI (Region Of Interest) from the LiDAR data domain is projected to the aerial image domain as the ROI in the aerial image. Hough transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st-order polynomial is used, whereas for a curved bridge a 2nd-order polynomial is used to fit the endpoints of the Hough linear features. The last step is the transformation of the smooth bridge boundaries from the aerial image back to the LiDAR data domain and their merging with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM, which can be further merged with the DTM (Digital Terrain Model) derived from LiDAR data to obtain a precise DSM (Digital Surface Model). Such a precise DSM can be used to improve the orthophoto product quality.

  18. The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    Energy Technology Data Exchange (ETDEWEB)

    Ng, J; Kingsbury, N G [Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)

    2004-02-06

    This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar

  19. Wavelet analysis as a tool to characteriseand remove environmental noisefrom self-potential time series

    OpenAIRE

    Chianese, D.; Colangelo, G.; D'Emilio, M.; Lanfredi, M.; Lapenna, V.; Ragosta, M.; Macchiato, M. F.

    2004-01-01

    Multiresolution wavelet analysis of self-potential signals and rainfall levels is performed for extracting fluctuations in electrical signals, which might be addressed to meteorological variability. In the time-scale domain of the wavelet transform, rain data are used as markers to single out those wavelet coefficients of the electric signal which can be considered relevant to the environmental disturbance. Then these coefficients are filtered out and the signal is recovered by anti...

  20. Wavelet analysis for nonstationary signals

    International Nuclear Information System (INIS)

    Penha, Rosani Maria Libardi da

    1999-01-01

    Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because the results provide only the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT is a method of adapting Fourier spectral analysis to nonstationary applications in the time-frequency domain. Its main limitation is a single fixed resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suitable for nonstationary signals, which overcomes the STFT's drawbacks by providing multiresolution frequency analysis and time localization in a single time-scale representation. The multiple frequency resolutions are obtained by scaling (dilation/compression) the wavelet function. A comparison of the conventional Fourier transform, the STFT, and the wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal, and a rotating machine vibration signal. A Hanning window was used for the STFT analysis. Daubechies and harmonic wavelets were used for the continuous, discrete, and multiresolution wavelet analyses. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms proved to be highly efficient tools to detect
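
    The contrast between the two analyses can be reproduced in a few lines, assuming SciPy and PyWavelets: the STFT uses one fixed window length for the whole signal, while the CWT scales the wavelet to trade time against frequency resolution. The test signal, window length, and scale grid are illustrative.

```python
# Fixed-resolution STFT versus multiresolution CWT on a nonstationary signal.
import numpy as np
import pywt
from scipy import signal

fs = 1000
t = np.arange(0, 2, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)
x[1000:] = np.sin(2 * np.pi * 120 * t[1000:])   # frequency jump at t = 1 s

f, tt, Z = signal.stft(x, fs=fs, nperseg=256)   # one window length for all

scales = np.arange(1, 64)                       # multiresolution analysis
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1.0 / fs)
```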

  1. Electrical transformer handbook

    Energy Technology Data Exchange (ETDEWEB)

    Hurst, R.W.; Horne, D. (eds.)

    2005-07-01

    This handbook is a valuable user guide intended for electrical engineering and maintenance personnel, electrical contractors and electrical engineering students. It provides current information on techniques and technologies that can help extend the life of transformers. It discusses transformer testing, monitoring, design, commissioning, retrofitting and other elements involved in keeping electrical transformers in safe and efficient operation. It demonstrates how a power transformer can be put to use and common problems faced by owners. In addition to covering control techniques, testing and maintenance procedures, this handbook covers the power transformer; control electrical power transformer; electrical power transformer; electrical theory transformer; used electrical transformer; down electrical step transformer; electrical manufacturer transformer; electrical picture transformer; electrical transformer work; electrical surplus transformer; current transformer; step down transformer; voltage transformer; step up transformer; isolation transformer; low voltage transformer; toroidal transformer; high voltage transformer; and control power transformer. The handbook includes articles from leading experts on overcurrent protection of transformers; ventilated dry-type transformers; metered load factors for low-voltage, and dry-type transformers in buildings. The maintenance of both dry-type or oil-filled transformers was discussed with reference to sealing, gaskets, oils, moisture and testing. The adoption of dynamic load practices was also discussed along with the reclamation or recycling of used lube oil, transformer dielectric fluids and aged solid insulation. A buyer's guide and directory of transformer manufacturers and suppliers was also included. refs., tabs., figs.

  2. Probabilistic risk assessment on maritime spent nuclear fuel transportation (Part II: Ship collision probability)

    International Nuclear Information System (INIS)

    Christian, Robby; Kang, Hyun Gook

    2017-01-01

    This paper proposes a methodology to assess and reduce the risks of maritime spent nuclear fuel transportation with a probabilistic approach. Event trees detailing the progression of collisions leading to transport cask damage were constructed. Parallel and crossing collision probabilities were formulated based on the Poisson distribution. Automatic Identification System (AIS) data were processed with the Hough transform algorithm to estimate possible intersections between the shipment route and the marine traffic. Monte Carlo simulations were run to compute collision probabilities and impact energies at each intersection. Possible safety improvement measures through a proper selection of operational transport parameters were investigated. These parameters include the shipment route, the ship's cruise velocity, the number of transport casks carried in a shipment, and the casks' stowage configuration and loading order on board the ship. A shipment case study is presented. Waters with high collision probabilities were identified. An effective range of cruising velocity to reduce collision risks was determined, and the number of casks in a shipment and the stowage method which gave low cask damage frequencies were obtained. The proposed methodology was successful in quantifying ship collision and cask damage frequencies, and it was effective in assisting decision making to minimize risks in maritime spent nuclear fuel transportation. - Highlights: • Proposes a probabilistic framework for the safety of spent nuclear fuel transportation by sea. • Develops a marine traffic simulation model using the Generalized Hough Transform (GHT) algorithm. • Presents a transportation case study on South Korean waters. • Outlines a single-vessel risk reduction method by optimizing transport parameters.
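
    As a minimal numeric sketch of the Poisson collision model: if candidate encounters along a route arrive at rate lam per transit, the probability of at least one collision is 1 - exp(-lam). The rate below is an invented illustration, not a value from the paper, and the Monte Carlo check only mirrors the general simulation approach.

```python
import numpy as np

lam = 2.3e-4                          # assumed encounter rate per transit
p_collision = 1.0 - np.exp(-lam)      # P(N >= 1) for N ~ Poisson(lam)

rng = np.random.default_rng(0)        # simple Monte Carlo cross-check
p_mc = np.mean(rng.poisson(lam, 1_000_000) >= 1)
```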

  3. New algorithm for detecting smaller retinal blood vessels in fundus images

    Science.gov (United States)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose the various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing: the resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: after applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: the new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
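
    The preprocessing chain maps naturally onto OpenCV, as in the hedged sketch below; the mask size 11 follows the abstract, while the high-pass construction (image minus box blur), the Otsu threshold, and the median kernel size are assumptions, and the ring-mask AND and Hough reintegration steps are omitted.

```python
# Green band -> high-pass -> stretch -> median -> threshold -> NOT.
import cv2
import numpy as np

bgr = cv2.imread("drive_fundus.png")              # assumed DRIVE image
green = bgr[:, :, 1]                              # green-band extraction

blur = cv2.blur(green.astype(np.float32), (11, 11))
highpass = green - blur                           # spatial high-pass, mask 11
stretched = cv2.normalize(highpass, None, 0, 255,
                          cv2.NORM_MINMAX).astype(np.uint8)  # histogram stretch

smooth = cv2.medianBlur(stretched, 3)             # mitigate noise
_, binary = cv2.threshold(smooth, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
vessels = cv2.bitwise_not(binary)                 # NOT: vessels become white
```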

  4. A relation connecting scale transformation, Galilean transformation and Baecklund transformation for the nonlinear Schroedinger equation

    International Nuclear Information System (INIS)

    Steudel, H.

    1980-01-01

    It is shown that the two-parameter manifold of Baecklund transformations known for the nonlinear Schroedinger equation can be generated from one Baecklund transformation with specified parameters by use of scale transformation and Galilean transformation. (orig.)
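
    For concreteness, under the common focusing-NLS normalization $i\psi_t + \psi_{xx} + 2|\psi|^2\psi = 0$ (the paper's conventions may differ), the scale and Galilean transformations referred to above take the form:

```latex
\begin{align*}
  \text{scale:}\quad    & \psi(x,t) \;\longmapsto\; \lambda\,\psi(\lambda x,\, \lambda^{2} t),\\
  \text{Galilean:}\quad & \psi(x,t) \;\longmapsto\;
      \psi(x - vt,\, t)\, e^{\, i\left(\tfrac{v}{2}\,x - \tfrac{v^{2}}{4}\,t\right)} .
\end{align*}
```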

  5. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    Science.gov (United States)

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of the iris, which is proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing, in which the iris segmentation is performed. The accuracy of an iris biometric system critically depends on the segmentation stage. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the segmentation so that it performs much more efficiently than conventional iris segmentation methods.

  6. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern recognition and track fitting, artificial retina and Hough transform methods have been introduced in the field, and these have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation of the retina algorithm based on a Floating-Point core. Detailed measurements with this algorithm are presented. Retina performance and the capabilities of the FPGA are discussed, along with perspectives for further optimization and applications.

  7. TRANSFORMATION

    Energy Technology Data Exchange (ETDEWEB)

    LACKS,S.A.

    2003-10-09

    Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. In Streptococcus pneumoniae such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture. It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but initially only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40).

  8. Classification of displacive transformations: what is a martensitic transformation?

    International Nuclear Information System (INIS)

    Christian, J.W.; Olson, G.B.; Cohen, M.

    1995-01-01

    The displacive transformation classification proposed at ICOMAT 79 is reviewed in light of recent progress in mechanistic understanding. Issues considered include distinctions between shuffle transformation vs. self-accommodating shear, dilatation vs. shear-dominant transformation, and nucleated vs. continuous transformation. (orig.)

  9. Distinctive transforming genes in x-ray-transformed mammalian cells

    International Nuclear Information System (INIS)

    Borek, C.; Ong, A.; Mason, H.

    1987-01-01

    DNAs from hamster embryo cells and mouse C3H/10T1/2 cells transformed in vitro by x-irradiation into malignant cells transmit the radiation transformation phenotype by producing transformed colonies (transfectants) in two mouse recipient lines, the NIH 3T3 and C3H/10T1/2 cells, and in a rat cell line, the Rat-2 cells. DNAs from unirradiated cells or irradiated and visibly untransformed cells do not produce transformed colonies. The transfectants grow in agar and form tumors in nude mice. Treatment of the DNAs with restriction endonucleases prior to transfection indicates that the same transforming gene (oncogene) is present in each of the transformed mouse cells and is the same in each of the transformed hamster cells. Southern blot analysis of 3T3 or Rat-2 transfectants carrying oncogenes from radiation-transformed C3H/10T1/2 or hamster cells indicates that the oncogenes responsible for the transformation of 3T3 cells are not the Ki-ras, Ha-ras, or N-ras genes, nor are they neu, trk, raf, abl, or fms. The work demonstrates that DNAs from mammalian cells transformed into malignancy by direct exposure in vitro to radiation contain genetic sequences with detectable transforming activity in three recipient cell lines. The results provide evidence that DNA is the target of radiation carcinogenesis induced at a cellular level in vitro. The experiments indicate that malignant radiogenic transformation in vitro of hamster embryo and mouse C3H/10T1/2 cells involves the activation of unique non-ras transforming genes, which heretofore have not been described.

  10. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    Science.gov (United States)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increase in SAR sensors, high resolution images can be acquired that contain more target structure information, such as more spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method operates on values in the APT domain. First, the image is mapped to the new transform domain by the algorithm. Second, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Third, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is then processed with the Niblack algorithm to obtain a binary wake image. Finally, a normalized Hough transform (NHT) is used to detect wakes in the binary image as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm performs well in ship and ship wake detection.
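
    For orientation, a toy one-dimensional cell-averaging CFAR is sketched below in NumPy: each cell is compared against a threshold scaled from the average of its training cells, with guard cells excluded. The paper's detector instead operates on APT-domain values with its own window geometry, so this is an analogy only.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=4.0):
    """Flag cells exceeding scale * local noise estimate (CA-CFAR)."""
    hits = np.zeros_like(x, dtype=bool)
    half = guard + train
    for i in range(half, len(x) - half):
        # Training cells on both sides, skipping the guard cells.
        window = np.r_[x[i - half:i - guard], x[i + guard + 1:i + half + 1]]
        hits[i] = x[i] > scale * window.mean()
    return hits
```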

  11. Multiscale image contrast amplification (MUSICA)

    Science.gov (United States)

    Vuylsteke, Pieter; Schoeters, Emile P.

    1994-05-01

    This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremity, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
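
    The same idea can be mimicked with a Laplacian pyramid in OpenCV, as in the rough sketch below: detail coefficients are boosted with a power-law nonlinearity before reconstruction. MUSICA's actual basis functions and amplification curve are not given in this abstract, so the gain g, exponent p, and level count are assumptions.

```python
# Multiscale decomposition, nonlinear detail amplification, reconstruction.
import cv2
import numpy as np

img = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

levels, g, p = 4, 1.0, 0.7
pyr = [img]
for _ in range(levels):
    pyr.append(cv2.pyrDown(pyr[-1]))

out = pyr[-1]
for lo in reversed(pyr[:-1]):
    up = cv2.pyrUp(out, dstsize=(lo.shape[1], lo.shape[0]))
    detail = lo - up                                    # band-pass detail
    detail = g * np.sign(detail) * np.abs(detail) ** p  # boost small details
    out = up + detail

out = np.clip(out, 0, 255).astype(np.uint8)
```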

  12. The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    International Nuclear Information System (INIS)

    Ng, J; Kingsbury, N G

    2004-01-01

    This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar

  13. Adaption of optical Fresnel transform to optical Wigner transform

    International Nuclear Information System (INIS)

    Lv Cuihong; Fan Hongyi

    2010-01-01

    Motivated by the algorithmic isomorphism between the rotation of the Wigner distribution function (WDF) and the αth fractional Fourier transform, we show that the optical Fresnel transform performed on the input through an ABCD system makes the output naturally adapted to the associated Wigner transform, i.e. there exists an algorithmic isomorphism between the ABCD transformation of the WDF and the optical Fresnel transform. We prove this adaptation in the language of operators. Both the single-mode and the two-mode Fresnel operators, as the image of the classical Fresnel transform, are introduced in our discussion, while the two-mode Wigner operator in the entangled state representation is introduced to fit the two-mode Fresnel operator.

  14. Elastic tracking versus neural network tracking for very high multiplicity problems

    International Nuclear Information System (INIS)

    Harlander, M.; Gyulassy, M.

    1991-04-01

    A new Elastic Tracking (ET) algorithm is proposed for finding tracks in very high multiplicity and noisy environments. It is based on a dynamical reinterpretation and generalization of the Radon transform and is related to elastic net algorithms for geometrical optimization. ET performs an adaptive nonlinear fit to noisy data with a variable number of tracks. Its numerical implementation is more efficient than that of the traditional Radon or Hough transform method because it avoids binning of phase space and the costly search for valid minima. Spurious local minima are avoided in ET by introducing a time-dependent effective potential. The method is shown to be very robust to noise and measurement error, and it extends tracking capabilities to much higher track densities than is possible via local road finding or even the novel Denby-Peterson neural network tracking algorithms. 12 refs., 2 figs

  15. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.

  16. Wavelet analysis as a tool to characteriseand remove environmental noisefrom self-potential time series

    Directory of Open Access Journals (Sweden)

    M. Ragosta

    2004-06-01

    Full Text Available Multiresolution wavelet analysis of self-potential signals and rainfall levels is performed to extract fluctuations in the electrical signals which might be attributed to meteorological variability. In the time-scale domain of the wavelet transform, rain data are used as markers to single out those wavelet coefficients of the electric signal which can be considered relevant to the environmental disturbance. These coefficients are then filtered out and the signal is recovered by anti-transforming the retained coefficients. Such a methodological approach might be applied to characterise unwanted environmental noise. It can also be considered a practical technique for removing noise that can hamper the correct assessment and use of electrical techniques for the monitoring of geophysical phenomena.
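
    A hedged sketch of the procedure, assuming PyWavelets: both series are decomposed, coefficients coinciding with rainy intervals are zeroed, and the signal is anti-transformed from the retained coefficients. The file names and the marker rule (any nonzero rain coefficient) are placeholders for the authors' criteria.

```python
# Decompose, blank rain-marked coefficients, reconstruct.
import numpy as np
import pywt

sp = np.loadtxt("self_potential.txt")    # hypothetical SP time series
rain = np.loadtxt("rainfall.txt")        # rainfall levels, same length

coeffs = pywt.wavedec(sp, "db4", level=6)
markers = pywt.wavedec(rain, "db4", level=6)

# Keep only coefficients whose rain-marker counterpart is near zero.
cleaned = [c * (np.abs(m) < 1e-3) for c, m in zip(coeffs, markers)]
recovered = pywt.waverec(cleaned, "db4")
```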

  17. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and on the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas, we apply a simple Hough transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough transform, the detection system can also be implemented on ordinary field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisier radar images, more data from different GPR systems are needed as input for the learning algorithm.
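
    A simple Hough transform for hyperbolas can be phrased as voting for apex candidates, as in the sketch below: for an assumed propagation velocity v, each edge point (x, t) is consistent with apexes (x0, t0) satisfying t^2 = t0^2 + (2 (x - x0) / v)^2. Grids, units, and the fixed-velocity simplification are illustrative; the paper also recovers velocity.

```python
import numpy as np

def hough_hyperbola(points, x0_grid, t0_grid, v):
    """Accumulate votes for hyperbola apexes (x0, t0) at fixed velocity v."""
    acc = np.zeros((len(x0_grid), len(t0_grid)), dtype=int)
    for x, t in points:
        for i, x0 in enumerate(x0_grid):
            t0_sq = t ** 2 - (2.0 * (x - x0) / v) ** 2
            if t0_sq <= 0:
                continue                    # no real apex for this x0
            j = np.argmin(np.abs(t0_grid - np.sqrt(t0_sq)))
            acc[i, j] += 1
    return acc                              # maxima mark apex candidates
```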

  18. Discrete fourier transform (DFT) analysis for applications using iterative transform methods

    Science.gov (United States)

    Dean, Bruce H. (Inventor)

    2012-01-01

    According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.
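
    In the spirit of this abstract, the NumPy sketch below pre-multiplies the data before the DFT inside a Gerchberg-Saxton-style iteration with a magnitude constraint; the quadratic-phase pre-transform term and the measurement placeholder are assumptions, not the patent's specific algorithm.

```python
import numpy as np

n = 256
estimate = np.random.rand(n) + 1j * np.random.rand(n)  # placeholder field
k = np.arange(n)
pre = np.exp(1j * np.pi * k ** 2 / n)     # assumed pre-transformation array
measured_mag = np.ones(n)                 # placeholder measured magnitudes

for _ in range(10):                       # iterative transform loop
    spectrum = np.fft.fft(pre * estimate)                      # pre-multiply, then DFT
    spectrum = measured_mag * np.exp(1j * np.angle(spectrum))  # apply constraint
    estimate = np.conj(pre) * np.fft.ifft(spectrum)            # back-transform
```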

  19. Image processing for safety assessment in civil engineering.

    Science.gov (United States)

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.

  20. Study of system for segmentation of images and elaboration of algorithms for three dimensional scene reconstruction

    International Nuclear Information System (INIS)

    Bufacchi, A.; Tripi, A.

    1995-09-01

    The aim of this paper is to present a series of methodologies to recognize and obtain a three-dimensional reconstruction of an indoor architectural scene, using a gray-level image obtained with a TV camera. In the first part of the work, a series of methods for finding edges effectively is critically compared, yielding a binary image; the application of the Hough transform to this binary image to find the straight lines in the original image is then discussed. In the second part, an algorithm for finding the vanishing points in such an image is presented.

  1. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
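
    The comparison setup can be sketched with scikit-learn, assuming PCA for dimensionality reduction and a common train/test split; MSTAR chip loading and the preprocessing variants are application-specific and replaced by random placeholders here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 1024)        # placeholder for flattened image chips
y = np.random.randint(0, 3, 200)     # placeholder class labels
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Same reduced feature space, four classifiers, one accuracy score each.
for clf in (DecisionTreeClassifier(), SVC(), AdaBoostClassifier(),
            RandomForestClassifier()):
    model = make_pipeline(PCA(n_components=50), clf)
    print(type(clf).__name__, model.fit(Xtr, ytr).score(Xte, yte))
```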

  2. ROBÔ AUTÔNOMO MÓVEL SEGUIDOR DE SÍMBOLO

    Directory of Open Access Journals (Sweden)

    Juliano Nishizima

    2016-03-01

    Full Text Available This article describes the development of a symbol-following autonomous mobile robot of the AGV (Autonomous Guided Vehicle) type. The robot is composed of two parts: hardware and software. The hardware consists of a local computer that processes the images sent from a mobile camera running the Android operating system, and an Arduino responsible for reading all sensors and controlling the actuators. The software consists of computer vision algorithms using the OpenCV (Open Source Computer Vision) library, applying algorithms such as the Hough transform, Gaussian smoothing, and the inRange operation to identify the symbol to be followed by the robot.

  3. Transformative Learning

    Science.gov (United States)

    Wang, Victor C. X.; Cranton, Patricia

    2011-01-01

    The theory of transformative learning has been explored by different theorists and scholars. However, few scholars have attempted to compare transformative learning with Confucianism or with andragogy. The authors of this article address these comparisons to develop new and different insights…

  4. Wavelet Filter Banks for Super-Resolution SAR Imaging

    Science.gov (United States)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric features of these methods, their resolution limitations, and their observation time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.

  5. Negotiated Grammar Transformation

    NARCIS (Netherlands)

    V. Zaytsev (Vadim)

    2012-01-01

    In this paper, we study controlled adaptability of metamodel transformations. We consider one of the most rigid metamodel transformation formalisms — automated grammar transformation with operator suites, where a transformation script is built in such a way that it is essentially meant

  6. Diffusionless phase transformations

    International Nuclear Information System (INIS)

    Vejman, K.M.

    1987-01-01

    Diffusionless phase transformations in metals and alloys, in which atoms are displaced by less than the interatomic distances and the relative correspondence of neighbouring atoms is preserved, are considered. Special attention is paid to the mechanism of martensitic transformations. The phenomenological crystallographic theory of martensitic transformations is presented. Two types of martensitic transformations, differing from the energy viewpoint, are pointed out - thermoelastic and non-thermoelastic ones - which are characterized by transformation hysteresis and by the ways in which the reverse martensite-to-initial-phase transformation is realized. Mechanical effects in the martensitic transformations are analyzed. The problem of diffusionless formation of ω-phases and the effect of impurities and vacancies on the process are briefly discussed. The role of charge-density waves in phase transformations of the second type (transition of the initial phase into an incommensurate one) and of the first type (transition of an incommensurate phase into a commensurate one) is considered

  7. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks.

    Science.gov (United States)

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P

    2017-01-07

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging: more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors, so a trade-off between accuracy and speed is required. This paper aims to achieve both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour-selection process, which relies on a colour-segmentation method (Delta E) combined with the CHT algorithm to find the proper colour for the tracked target; the target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image-averaging filter, are used to obtain clear and steady images. The paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
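
    The CRCHT idea of confining the circle search to a window around the previous detection can be sketched as below; the function, window size, and Hough parameters are illustrative assumptions rather than the paper's implementation:

    ```python
    import cv2
    import numpy as np

    def track_in_window(gray, last_xy, half=80):
        """Run the circular Hough transform only inside a search window
        centered on the previous detection, trading field of view for speed."""
        h, w = gray.shape
        x0 = max(int(last_xy[0]) - half, 0)
        y0 = max(int(last_xy[1]) - half, 0)
        roi = gray[y0:min(y0 + 2 * half, h), x0:min(x0 + 2 * half, w)]

        circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                   param1=100, param2=25, minRadius=8, maxRadius=60)
        if circles is None:
            return None  # caller can fall back to a full-frame search
        cx, cy, r = circles[0, 0]
        return (x0 + cx, y0 + cy, r)  # map back to full-image coordinates
    ```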

  8. Power transformers - Part 11: Dry-type transformers

    CERN Document Server

    International Electrotechnical Commission. Geneva

    2004-01-01

    Applies to dry-type power transformers (including auto-transformers) having values of highest voltage for equipment up to and including 36 kV and at least one winding operating at greater than 1,1 kV. Applies to all construction technologies.

  9. Visualizing Transformation

    DEFF Research Database (Denmark)

    Pedersen, Pia

    2012-01-01

    Transformation, defined as the step of extracting, arranging and simplifying data into visual form (M. Neurath, 1974), was developed in connection with ISOTYPE (International System Of TYpographic Picture Education) and might well be the most important legacy of Isotype to the field of graphic design. Recently transformation has attracted renewed interest because of the book The Transformer written by Robin Kinross and Marie Neurath. My on-going research project, summarized in this paper, identifies and depicts the essential principles of data visualization underlying the process of transformation with reference to Marie Neurath's sketches on the Bilston Project. The material has been collected at the Otto and Marie Neurath Collection housed at the University of Reading, UK. By using data visualization as a research method to look directly into the process of transformation, the project...

  10. Generalized Staeckel transform and reciprocal transformations for finite-dimensional integrable systems

    Energy Technology Data Exchange (ETDEWEB)

    Sergyeyev, Artur [Mathematical Institute, Silesian University in Opava, Na RybnIcku 1, 746 01 Opava (Czech Republic); Blaszak, Maciej [Institute of Physics, A Mickiewicz University, Umultowska 85, 61-614 Poznan (Poland)], E-mail: Artur.Sergyeyev@math.slu.cz, E-mail: blaszakm@amu.edu.pl

    2008-03-14

    We present a multiparameter generalization of the Staeckel transform (the latter is also known as the coupling-constant metamorphosis) and show that under certain conditions this generalized Staeckel transform preserves Liouville integrability, noncommutative integrability and superintegrability. The corresponding transformation for the equations of motion proves to be nothing but a reciprocal transformation of a special form, and we investigate the properties of this reciprocal transformation. Finally, we show that the Hamiltonians of the systems possessing separation curves of apparently very different form can be related through a suitably chosen generalized Staeckel transform.

  11. Performance of nonsynchronous noncommensurate impedance transformers in comparison to tapered line transformers

    DEFF Research Database (Denmark)

    Kim, Kseniya; Zhurbenko, Vitaliy; Johansen, Tom Keinicke

    2012-01-01

    to a traditional tapered line impedance transformer. The increase in bandwidth of nonsynchronous noncommensurate impedance transformers typically leads to a shortening of the transformer length, which makes the transformer attractive for applications where a wide operating band and high transformation ratios...

  12. Denoising of MR images using FREBAS collaborative filtering

    International Nuclear Information System (INIS)

    Ito, Satoshi; Hizume, Masayuki; Yamada, Yoshifumi

    2011-01-01

    We propose a novel image denoising strategy based on the correlation in the FREBAS transformed domain. FREBAS transform is a kind of multi-resolution image analysis which consists of two different Fresnel transforms. It can decompose images into down-scaled images of the same size with a different frequency bandwidth. Since these decomposed images have similar distributions for the same directions from the center of the FREBAS domain, even when the FREBAS signal is hidden by noise in the case of a low-signal-to-noise ratio (SNR) image, the signal distribution can be estimated using the distribution of the FREBAS signal located near the position of interest. We have developed a collaborative Wiener filter in the FREBAS transformed domain which implements collaboration of the standard deviation of the position of interest and that of analogous positions. The experimental results demonstrated that the proposed algorithm improves the SNR in terms of both the total SNR and the SNR at the edges of images. (author)

  13. Adaptive multiscale processing for contrast enhancement

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.

    1993-07-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
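
    Coefficient modification in an overcomplete wavelet space can be sketched with PyWavelets' shift-invariant stationary wavelet transform; the wavelet, level count, and constant gain are illustrative stand-ins for the paper's non-linear weight functions:

    ```python
    import numpy as np
    import pywt

    def enhance(image, levels=3, gain=2.0):
        """Boost detail coefficients level by level, then reconstruct.
        Image sides must be divisible by 2**levels for swt2."""
        coeffs = pywt.swt2(image.astype(float), "db2", level=levels)
        boosted = []
        for approx, (ch, cv, cd) in coeffs:
            # Amplify multiscale edges (detail bands); keep the approximation.
            boosted.append((approx, (gain * ch, gain * cv, gain * cd)))
        return pywt.iswt2(boosted, "db2")

    out = enhance(np.random.rand(256, 256))  # stand-in for a mammogram patch
    ```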

  14. Wavelet basics

    CERN Document Server

    Chan, Y T

    1995-01-01

    Since the study of wavelets is a relatively new area, with much of the research coming from mathematicians, most of the literature uses terminology, concepts and proofs that may, at times, be difficult and intimidating for the engineer. Wavelet Basics has therefore been written as an introductory book for scientists and engineers. The mathematical presentation has been kept simple, the concepts being presented in elaborate detail in a terminology that engineers will find familiar. Difficult ideas are illustrated with examples which will also aid in the development of an intuitive insight. Chapter 1 reviews the basics of signal transformation and discusses the concepts of duals and frames. Chapter 2 introduces the wavelet transform, contrasts it with the short-time Fourier transform and clarifies the names of the different types of wavelet transforms. Chapter 3 links multiresolution analysis, orthonormal wavelets and the design of digital filters. Chapter 4 gives a tour d'horizon of topics of current interest: wave...

  15. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    Science.gov (United States)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar

  16. On the windowed Fourier transform as an interpolation of the Gabor transform

    NARCIS (Netherlands)

    Bastiaans, M.J.; Procházka, A.; Uhlíř, J.; Sovka, P.

    1997-01-01

    The windowed Fourier transform and its sampled version - the Gabor transform - are introduced. With the help of Gabor's signal expansion, an interpolation function is derived with which the windowed Fourier transform can be constructed from the Gabor transform. Using the Zak transform, it is shown

  17. Natural plasmid transformation in a high-frequency-of-transformation marine Vibrio strain

    International Nuclear Information System (INIS)

    Frischer, M.E.; Thurmond, J.M.; Paul, J.H.

    1990-01-01

    The estuarine bacterium Vibrio strain DI-9 has been shown to be naturally transformable with both broad-host-range plasmid multimers and homologous chromosomal DNA at average frequencies of 3.5 × 10^-9 and 3.4 × 10^-7 transformants per recipient, respectively. Growth of plasmid transformants in nonselective medium resulted in cured strains that transformed 6 to 42,857 times more frequently than the parental strain, depending on the type of transforming DNA. These high-frequency-of-transformation (HfT) strains were transformed at frequencies ranging from 1.1 × 10^-8 to 1.3 × 10^-4 transformants per recipient with plasmid DNA and at an average frequency of 8.3 × 10^-5 transformants per recipient with homologous chromosomal DNA. The highest transformation frequencies were observed by using multimers of an R1162 derivative carrying the transposon Tn5 (pQSR50). Probing of total DNA preparations from one of the cured strains demonstrated that no plasmid DNA remained in the cured strains which may have provided homology to the transforming DNA. All transformants and cured strains could be differentiated from the parental strains by colony morphology. DNA binding studies indicated that late-log-phase HfT strains bound [3H]bacteriophage lambda DNA 2.1 times more rapidly than the parental strain. These results suggest that the original plasmid transformation event of strain DI-9 was the result of uptake and expression of plasmid DNA by a competent mutant (HfT strain). Additionally, it was found that a strain of Vibrio parahaemolyticus, USFS 3420, could be naturally transformed with plasmid DNA. Natural plasmid transformation by high-transforming mutants may be a means of plasmid acquisition by natural aquatic bacterial populations

  18. Approximating the Analytic Fourier Transform with the Discrete Fourier Transform

    OpenAIRE

    Axelrod, Jeremy

    2015-01-01

    The Fourier transform is approximated over a finite domain using a Riemann sum. This Riemann sum is then expressed in terms of the discrete Fourier transform, which allows the sum to be computed with a fast Fourier transform algorithm more rapidly than via a direct matrix multiplication. Advantages and limitations of using this method to approximate the Fourier transform are discussed, and prototypical MATLAB codes implementing the method are presented.
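
    The reduction from Riemann sum to FFT is compact enough to show directly (in NumPy rather than the MATLAB codes the paper provides; the grid choices are illustrative): with samples x_n = x0 + n*dx, the sum dx·Σ f(x_n) e^{-ik x_n} equals a phase-shifted, scaled FFT at the frequencies k = 2π·fftfreq(n, dx).

    ```python
    import numpy as np

    def approx_ft(f, x0, dx, n):
        """Approximate F(k) = integral f(x) exp(-i k x) dx by a Riemann sum,
        evaluated via the FFT at the frequencies k = 2*pi*fftfreq(n, dx)."""
        x = x0 + dx * np.arange(n)
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        # Riemann sum = dx * FFT, with a phase factor accounting for the offset x0.
        return k, dx * np.exp(-1j * k * x0) * np.fft.fft(f(x))

    # Sanity check against the known transform of a Gaussian:
    # exp(-x^2/2)  ->  sqrt(2*pi) * exp(-k^2/2)
    k, F = approx_ft(lambda x: np.exp(-x**2 / 2), x0=-20.0, dx=0.05, n=800)
    exact = np.sqrt(2 * np.pi) * np.exp(-k**2 / 2)
    print(np.max(np.abs(F - exact)))  # small when the grid resolves and contains f
    ```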

  19. TRANSFORMER

    Science.gov (United States)

    Baker, W.R.

    1959-08-25

    Transformers of a type adapted for use with extreme high power vacuum tubes where current requirements may be of the order of 2,000 to 200,000 amperes are described. The transformer casing has the form of a re-entrant section being extended through an opening in one end of the cylinder to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and an economical construction.

  20. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mărăscu, V.; Dinescu, G. [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania); Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Chiţescu, I. [Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei Street, Bucharest (Romania); Barna, V. [Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B., E-mail: mitub@infim.ro [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania)

    2016-03-25

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Then an adaptive thresholding method is applied to obtain binary images. The next step consists in automatic identification of the polystyrene bead dimensions, using the Hough transform algorithm, according to the bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the two-dimensional Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with surface features regularly distributed upon SEM examination.
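
    The measurement chain of this record (adaptive thresholding, Hough-based bead sizing, 2-D FFT of the binary image) can be sketched as follows; the file name, thresholds, and radius bounds are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    sem = cv2.imread("sem_beads.png", cv2.IMREAD_GRAYSCALE)  # illustrative path

    # Adaptive thresholding yields the binary image of the bead layer.
    binary = cv2.adaptiveThreshold(sem, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 51, 2)

    # The circular Hough transform identifies each bead and its radius in pixels.
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                               param1=100, param2=20, minRadius=5, maxRadius=40)
    radii = circles[0, :, 2] if circles is not None else np.array([])
    if radii.size:
        print("beads:", radii.size, "mean radius (px):", radii.mean())

    # |FFT|^2 of the binary image; sharp peaks indicate a regular arrangement.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(binary.astype(float)))) ** 2
    ```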

  1. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    International Nuclear Information System (INIS)

    Mărăscu, V.; Dinescu, G.; Chiţescu, I.; Barna, V.; Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B.

    2016-01-01

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Then an adaptive thresholding method is applied to obtain binary images. The next step consists in automatic identification of the polystyrene bead dimensions, using the Hough transform algorithm, according to the bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the two-dimensional Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with surface features regularly distributed upon SEM examination.

  2. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    One of the key steps in an iris recognition system is accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows of a captured eye image. This paper presents a novel iris segmentation scheme which initially utilizes the orientation matching transform to outline the outer and inner iris boundaries. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelids, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
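
    Delogne-Kåsa-style algebraic circle fitting reduces to a linear least-squares problem: writing the circle as x² + y² = 2ax + 2by + c, the center (a, b) and the radius follow from a single solve. A minimal sketch (not the paper's implementation):

    ```python
    import numpy as np

    def kasa_circle_fit(x, y):
        """Fit a circle to boundary points by linear least squares.
        Model: x^2 + y^2 = 2*a*x + 2*b*y + c, with center (a, b)
        and radius sqrt(c + a^2 + b^2)."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return (a, b), np.sqrt(c + a * a + b * b)

    # Noisy points on a circle of radius 3 centered at (1, 2).
    t = np.linspace(0, 2 * np.pi, 100)
    x = 1 + 3 * np.cos(t) + 0.01 * np.random.randn(t.size)
    y = 2 + 3 * np.sin(t) + 0.01 * np.random.randn(t.size)
    print(kasa_circle_fit(x, y))  # approximately ((1, 2), 3)
    ```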

  3. On Hurwitz transformations

    International Nuclear Information System (INIS)

    Kibler, M.; Hage Hassan, M.

    1991-04-01

    A bibliography on the Hurwitz transformations is given. We deal here, in some detail, with two particular Hurwitz transformations, viz. the R^4 → R^3 Kustaanheimo-Stiefel transformation and its R^8 → R^5 compact extension. These transformations are derived in the context of the Fock-Bargmann-Schwinger calculus, with special emphasis on angular momentum theory.

  4. Transformation

    DEFF Research Database (Denmark)

    Bock, Lars Nicolai

    2011-01-01

    The article discusses the word "transformation", taking as its starting point partly how the word is used in architectural terminology and partly the word's potential content and suitability within that same terminology.

  5. Event reconstruction in the RICH detector of the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Adamczewski, J.; Becker, K.-H.; Belogurov, S.; Boldyreva, N.; Chernogorov, A.; Deveaux, C.; Dobyrn, V.; Dürr, M.; Eom, J.; Eschke, J.; Höhne, C.; Kampert, K.-H.; Kleipa, V.; Kochenda, L.; Kolb, B.; Kopfer, J.; Kravtsov, P.; Lebedev, S.; Lebedeva, E.; Leonova, E.

    2014-01-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility will investigate the QCD phase diagram at high net-baryon densities and moderate temperatures. One of the key signatures will be di-leptons emitted from the hot and dense phase in heavy-ion collisions. When measuring di-electrons, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detectors (TRD). In order to access the foreseen rare probes, the detector and the data acquisition have to handle interaction rates of up to 10 MHz. Therefore, the development of fast and efficient event reconstruction algorithms is an important and challenging task in CBM. In this contribution, event reconstruction and electron identification algorithms in the RICH detector are presented. So far they have been developed on simulated data but could already be tested on real data from a RICH prototype testbeam experiment at the CERN-PS. Efficient and fast ring recognition algorithms in the CBM-RICH are based on the Hough transform method. Due to optical distortions of the rings, an ellipse fitting algorithm was elaborated to improve the ring radius resolution. An efficient algorithm based on an artificial neural network was implemented for electron identification in the RICH. All algorithms were significantly optimized to achieve maximum speed and minimum memory consumption. - Highlights: • The Ring Imaging Cherenkov detector will serve for electron identification in CBM. • We present an efficient ring recognition algorithm based on the Hough transform method. • The developed algorithms were significantly optimized to achieve maximum speed-up. • The electron identification algorithm in the RICH is based on an artificial neural network. • The developed algorithms were successfully tested on real data from the RICH prototype

  6. Event reconstruction in the RICH detector of the CBM experiment at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Adamczewski, J. [GSI Darmstadt (Germany); Becker, K.-H. [University Wuppertal (Germany); Belogurov, S. [ITEP Moscow (Russian Federation); Boldyreva, N. [PNPI Gatchina (Russian Federation); Chernogorov, A. [ITEP Moscow (Russian Federation); Deveaux, C. [University Gießen (Germany); Dobyrn, V. [PNPI Gatchina (Russian Federation); Dürr, M. [University Gießen (Germany); Eom, J. [Pusan National University (Korea, Republic of); Eschke, J. [GSI Darmstadt (Germany); Höhne, C. [University Gießen (Germany); Kampert, K.-H. [University Wuppertal (Germany); Kleipa, V. [GSI Darmstadt (Germany); Kochenda, L. [PNPI Gatchina (Russian Federation); Kolb, B. [GSI Darmstadt (Germany); Kopfer, J. [University Wuppertal (Germany); Kravtsov, P. [PNPI Gatchina (Russian Federation); Lebedev, S., E-mail: s.lebedev@gsi.de [University Gießen (Germany); Lebedeva, E. [University Gießen (Germany); Leonova, E. [PNPI Gatchina (Russian Federation); and others

    2014-12-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility will investigate the QCD phase diagram at high net-baryon densities and moderate temperatures. One of the key signatures will be di-leptons emitted from the hot and dense phase in heavy-ion collisions. When measuring di-electrons, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detectors (TRD). In order to access the foreseen rare probes, the detector and the data acquisition have to handle interaction rates of up to 10 MHz. Therefore, the development of fast and efficient event reconstruction algorithms is an important and challenging task in CBM. In this contribution, event reconstruction and electron identification algorithms in the RICH detector are presented. So far they have been developed on simulated data but could already be tested on real data from a RICH prototype testbeam experiment at the CERN-PS. Efficient and fast ring recognition algorithms in the CBM-RICH are based on the Hough transform method. Due to optical distortions of the rings, an ellipse fitting algorithm was elaborated to improve the ring radius resolution. An efficient algorithm based on an artificial neural network was implemented for electron identification in the RICH. All algorithms were significantly optimized to achieve maximum speed and minimum memory consumption. - Highlights: • The Ring Imaging Cherenkov detector will serve for electron identification in CBM. • We present an efficient ring recognition algorithm based on the Hough transform method. • The developed algorithms were significantly optimized to achieve maximum speed-up. • The electron identification algorithm in the RICH is based on an artificial neural network. • The developed algorithms were successfully tested on real data from the RICH prototype.

  7. The J and P transformer book a practical technology of the power transformer

    CERN Document Server

    Franklin, Arthur Charles

    1983-01-01

    The J&P Transformer Book, 11th Edition deals with the design, installation, and maintenance of transformers. The book contains technical information, tables, calculations, diagrams, and illustrations based on information supplied by transformer manufacturers and related industries. It reviews fundamental transformer principles, the magnetic circuit, the characteristics of, and general types of transformers. The text contains tables showing the information that should be given to the transformer manufacturer to be used as a basis in preparing quotations. Transformer designs include three import

  8. Transforming the Way We Teach Function Transformations

    Science.gov (United States)

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…

  9. The continous Legendre transform, its inverse transform, and applications

    Directory of Open Access Journals (Sweden)

    P. L. Butzer

    1980-01-01

    This paper is concerned with the continuous Legendre transform, derived from the classical discrete Legendre transform by replacing the Legendre polynomial Pk(x) by the function Pλ(x) with λ real. Another approach to T.M. MacRobert's inversion formula is found; for this purpose an inverse Legendre transform, mapping L1(ℝ+) into L2(−1,1), is defined. Its inversion in turn is naturally achieved by the continuous Legendre transform. One application is devoted to the Shannon sampling theorem in the Legendre frame together with a new type of error estimate. The other deals with a new representation of Legendre functions giving information about their behaviour near the point x=−1.

  10. Auditory ERB like admissible wavelet packet features for TIMIT phoneme recognition

    Directory of Open Access Journals (Sweden)

    P.K. Sahu

    2014-09-01

    In recent years the wavelet transform has been found to be an effective tool for time-frequency analysis. The wavelet transform has been used for feature extraction in speech recognition applications and has proved to be an effective technique for unvoiced phoneme classification. In this paper a new filter structure using admissible wavelet packets is analyzed for English phoneme recognition. These filters have the benefit of a frequency band spacing similar to the auditory Equivalent Rectangular Bandwidth (ERB) scale, whose central frequencies are equally distributed along the frequency response of the human cochlea. A new set of features is derived using the multi-resolution capabilities of the wavelet packet transform and found to be better than conventional features for unvoiced phoneme problems. Some of the noise types from the NOISEX-92 database have been used to prepare an artificial noisy database to test the robustness of the wavelet-based features.
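
    Wavelet-packet features of this general flavour can be sketched with PyWavelets; the wavelet, depth, and log-energy feature below are illustrative assumptions, not the paper's ERB-matched filter design:

    ```python
    import numpy as np
    import pywt

    def wp_band_energies(signal, wavelet="db4", level=5):
        """Decompose a frame with the wavelet packet transform and return
        log-energies of the terminal subbands as a feature vector."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        # 'natural' order lists the 2**level leaf nodes left to right.
        leaves = wp.get_level(level, order="natural")
        energies = [np.sum(node.data**2) for node in leaves]
        return np.log(np.asarray(energies) + 1e-12)

    frame = np.random.randn(512)           # stand-in for a speech frame
    print(wp_band_energies(frame).shape)   # (32,) subband features
    ```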

  11. Wavelet analysis

    CERN Document Server

    Cheng, Lizhi; Luo, Yong; Chen, Bo

    2014-01-01

    This book can be divided into two parts, i.e., fundamental wavelet transform theory and methods, and some important applications of the wavelet transform. In the first part, as preliminary knowledge, Fourier analysis, inner product spaces, the characteristics of Haar functions, and concepts of multi-resolution analysis are introduced, followed by a description of how to construct wavelet functions, both multi-band and multi-wavelet, and finally the design of integer wavelets via lifting schemes and its application to integer transform algorithms. In the second part, many applications are discussed in the field of image and signal processing by introducing other wavelet variants such as complex wavelets, ridgelets, and curvelets. Important application examples include image compression, image denoising/restoration, image enhancement, digital watermarking, numerical solution of partial differential equations, and solving ill-conditioned Toeplitz systems. The book is intended for senior undergraduate stude...

  12. Hadamard Transforms

    CERN Document Server

    Agaian, Sos; Egiazarian, Karen; Astola, Jaakko

    2011-01-01

    The Hadamard matrix and Hadamard transform are fundamental problem-solving tools in a wide spectrum of scientific disciplines and technologies, such as communication systems, signal and image processing (signal representation, coding, filtering, recognition, and watermarking), digital logic (Boolean function analysis and synthesis), and fault-tolerant system design. Hadamard Transforms intends to bring together different topics concerning current developments in Hadamard matrices, transforms, and their applications. Each chapter begins with the basics of the theory, progresses to more advanced
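
    As a small, self-contained illustration of the transform itself (a generic fast Walsh-Hadamard transform, not material from the book):

    ```python
    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform (Hadamard order).
        Length must be a power of two; O(n log n) butterflies."""
        a = np.asarray(a, dtype=float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
    print(fwht(x))                  # transform coefficients
    print(fwht(fwht(x)) / len(x))   # H*H = n*I, so this recovers x
    ```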

  13. Generalized Fourier transforms classes

    DEFF Research Database (Denmark)

    Berntsen, Svend; Møller, Steen

    2002-01-01

    The Fourier class of integral transforms with kernels $B(\omega r)$ has by definition inverse transforms with kernel $B(-\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms is introduced. From the general theory it follows that integral transforms with kernels which are products of a Bessel and a Hankel function, or which are of a certain general hypergeometric type, have inverse transforms of the same structure.

  14. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    Science.gov (United States)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the horse larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform method is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach has better performance in extracting the targeted contours of the equine larynx than the results of using only the gPb-OWT-UCM method.

  15. Neural networks, cellular automata, and robust approach applications for vertex localization in the opera target tracker detector

    International Nuclear Information System (INIS)

    Dmitrievskij, S.G.; Gornushkin, Yu.A.; Ososkov, G.A.

    2005-01-01

    A neural-network (NN) approach to neutrino interaction vertex reconstruction in the OPERA experiment with the help of the Target Tracker (TT) detector is described. A feed-forward NN with the standard back-propagation option is used. The minimization of the network's energy functional is performed by the method of conjugate gradients. Data preprocessing by means of a cellular automaton algorithm is performed. The Hough transform is applied for muon track determination, and the robust fitting method is used for shower axis reconstruction. A comparison of the proposed approach with earlier studies, based on the use of the neural network package SNNS, shows similar performance. Further development of the approach is underway.

  16. Distributed Data Logging and Intelligent Control Strategies for a Scaled Autonomous Vehicle

    Directory of Open Access Journals (Sweden)

    Tilman Happek

    2016-04-01

    In this paper we present an autonomous car with distributed data processing. The car is controlled by a multitude of independent sensors. For lane detection, a camera is used which detects the lane marks with a Hough transformation. Once the camera has detected the lane marks, the lane for the car to follow is calculated from them. This lane is verified by the other sensors of the car. These sensors check the route for obstructions and allow the car to scan a parking space and to park on the roadside if the gap is large enough. The car is built at a scale of 1:10 and shows excellent results on a test track.

  17. Artificial intelligence tools for pattern recognition

    Science.gov (United States)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a system for pattern recognition that combines the problem-solving power of genetic algorithms with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 brands of tires. The images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features. The number of lines obtained is 449. The genetic algorithm reduces the number of features to ten suitable lines, which thus yield 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette-wheel selection. For reproduction, we applied one-point, two-point and uniform crossover.

  18. Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2015-01-01

    Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....

  19. Nonsynchronous Noncommensurate Impedance Transformers

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Kim, K

    2012-01-01

    Nonsynchronous noncommensurate impedance transformers consist of a combination of two types of transmission lines: transmission lines with a characteristic impedance equal to the impedance of the source, and transmission lines with a characteristic impedance equal to the load impedance. The practical advantage of such transformers is that they can be constructed using sections of transmission lines with a limited variety of characteristic impedances. These transformers also provide comparatively compact size in applications where a wide transformation ratio is required. This paper presents the data...... matrix approach, experimentally verified by synthesizing a 12-section nonsynchronous noncommensurate impedance transformer. The measured characteristics of the transformer are compared to the characteristics of a conventional tapered line transformer.

  20. Transformative environmental governance

    Science.gov (United States)

    Chaffin, Brian C.; Garmestani, Ahjond S.; Gunderson, Lance H.; Harm Benson, Melinda; Angeler, David G.; Arnold, Craig Anthony (Tony); Cosens, Barbara; Kundis Craig, Robin; Ruhl, J.B.; Allen, Craig R.

    2016-01-01

    Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to alternative, more desirable, or more functional regimes by altering the structures and processes that define the system. Transformative governance is rooted in ecological theories to explain cross-scale dynamics in complex systems, as well as social theories of change, innovation, and technological transformation. Similar to adaptive governance, transformative governance involves a broad set of governance components, but requires additional capacity to foster new social-ecological regimes including increased risk tolerance, significant systemic investment, and restructured economies and power relations. Transformative governance has the potential to actively respond to regime shifts triggered by climate change, and thus future research should focus on identifying system drivers and leading indicators associated with social-ecological thresholds.

  1. Heat transfer comparison of nanofluid filled transformer and traditional oil-immersed transformer

    Science.gov (United States)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    Dispersing nanoparticles with high thermal conductivity into transformer oil is an innovative approach to improving the thermal performance of traditional oil-immersed transformers. This mixture, also known as a nanofluid, has shown its potential for practical application through experimental measurements. This paper presents a comparison of a nanofluid-filled transformer and a traditional oil-immersed transformer in terms of their computational fluid dynamics (CFD) solutions, from the perspective of optimal design. The thermal performance of transformers with the same parameters except for the coolant is compared. A further comparison of heat transfer is then made after minimizing the oil volume and the maximum temperature rise of these two transformers. An adaptive multi-objective optimization method is employed to tackle this optimization problem.

  2. The convolution transform

    CERN Document Server

    Hirschman, Isidore Isaac

    2005-01-01

    In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and

  3. The transformativity approach

    DEFF Research Database (Denmark)

    Holm, Isak Winkel; Lauta, Kristian Cedervall

    2017-01-01

    During the last five to ten years, a considerable body of research has begun to explore how disasters, real and imagined, trigger social transformations. Even if the contributions to this research stem from a multitude of academic disciplines, we argue in the article, they constitute an identifiable and promising approach for future disaster research. We suggest naming it the transformativity approach. Whereas the vulnerability approach explores the social causation of disasters, the transformativity approach reverses the direction of the gaze and investigates the social transformation brought about by disasters. Put simply, the concept of vulnerability is about the upstream causes of disaster and the concept of transformativity about the downstream effects. By discussing three recent contributions (by the historian Greg Bankoff, the legal sociologist Michelle Dauber

  4. Defense Business Transformation

    Science.gov (United States)

    2009-12-01

    Defense Business Transformation, by Jacques S. Gansler and William Lucyshyn, The Center for Technology and National... Part One: DoD Business Transformation

  5. Generalized Fourier transforms classes

    DEFF Research Database (Denmark)

    Berntsen, Svend; Møller, Steen

    2002-01-01

    The Fourier class of integral transforms with kernels $B(\omega r)$ has by definition inverse transforms with kernel $B(-\omega r)$. The space of such transforms is explicitly constructed. A slightly more general class of generalized Fourier transforms is introduced. From the general theory

  6. Three dimensional canonical transformations

    International Nuclear Information System (INIS)

    Tegmen, A.

    2010-01-01

    A generic construction of canonical transformations is given in three-dimensional phase spaces on which a Nambu bracket is imposed. First, canonical transformations are defined as based on cannonade transformations. Second, it is shown that the determination of the generating functions, and of the transformation itself for a given generating function, is possible by solving the corresponding Pfaffian differential equations. Generating functions are introduced and all of them are listed. Infinitesimal canonical transformations are also discussed as a complementary subject. Finally, it is shown that decomposition of canonical transformations is also possible in three-dimensional phase spaces, as in the usual two-dimensional ones.

  7. Improving Your Data Transformations: Applying the Box-Cox Transformation

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2010-10-01

    Many of us in the social sciences deal with data that do not conform to assumptions of normality and/or homoscedasticity/homogeneity of variance. Some research has shown that parametric tests (e.g., multiple regression, ANOVA) can be robust to modest violations of these assumptions. Yet the reality is that almost all analyses (even nonparametric tests) benefit from improved normality of variables, particularly where substantial non-normality is present. While many are familiar with select traditional transformations (e.g., square root, log, inverse) for improving normality, the Box-Cox transformation (Box & Cox, 1964) represents a family of power transformations that incorporates and extends the traditional options to help researchers easily find the optimal normalizing transformation for each variable. As such, Box-Cox represents a potential best practice where normalizing data or equalizing variance is desired. This paper briefly presents an overview of traditional normalizing transformations and how Box-Cox incorporates, extends, and improves on these traditional approaches to normalizing data. Examples of applications are presented, and details of how to automate and use this technique in SPSS and SAS are included.
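
    In practice the optimal power parameter can be estimated by maximum likelihood, as in this SciPy sketch on synthetic right-skewed data (the paper's own examples use SPSS and SAS):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # right-skewed, positive data

    # boxcox() finds the lambda that maximizes the log-likelihood of normality.
    transformed, lam = stats.boxcox(x)
    print(f"optimal lambda: {lam:.3f}")
    print(f"skewness before: {stats.skew(x):.2f}, after: {stats.skew(transformed):.2f}")
    ```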

  8. Transformation of UML models to CSP : a case study for graph transformation tools

    NARCIS (Netherlands)

    Varró, D.; Asztalos, M.; Bisztray, D.; Boronat, A.; Dang, D.; Geiß, R.; Greenyer, J.; Van Gorp, P.M.E.; Kniemeyer, O.; Narayanan, A.; Rencis, E.; Weinell, E.; Schürr, A.; Nagl, M.; Zündorf, A.

    2008-01-01

    Graph transformation provides an intuitive mechanism for capturing model transformations. In the current paper, we investigate and compare various graph transformation tools using a compact practical model transformation case study carried out as part of the AGTIVE 2007 Tool Contest [22]. The aim of

  9. A Method for the Monthly Electricity Demand Forecasting in Colombia based on Wavelet Analysis and a Nonlinear Autoregressive Model

    Directory of Open Access Journals (Sweden)

    Cristhian Moreno-Chaparro

    2011-12-01

    This paper proposes a monthly electricity demand forecasting method for the National Interconnected System (SIN) of Colombia. The method preprocesses the time series using Multiresolution Analysis (MRA) with the Discrete Wavelet Transform (DWT); a study for the selection of the mother wavelet and its order, as well as the decomposition level, was carried out. Given that the original series follows a non-linear behaviour, a neural nonlinear autoregressive (NAR) model was used. The prediction was obtained by adding the forecast of the trend to the estimate obtained from the residual series, combined with further components extracted in preprocessing. A bibliographic review of studies conducted internationally and in Colombia is included, in addition to references to investigations made with the wavelet transform applied to electric energy prediction and studies reporting the use of NAR in prediction.

  10. Image encryption with chaotic map and Arnold transform in the gyrator transform domains

    Science.gov (United States)

    Sang, Jun; Luo, Hongling; Zhao, Jun; Alam, Mohammad S.; Cai, Bin

    2017-05-01

    An image encryption method combining a chaotic map and the Arnold transform in the gyrator transform domains was proposed. Firstly, the original secret image is XOR-ed with a random binary sequence generated by a logistic map. Then, the gyrator transform is performed. Finally, the amplitude and phase of the gyrator transform are permutated by the Arnold transform. The decryption procedure is the inverse operation of encryption. The secret keys used in the proposed method include the control parameter and the initial value of the logistic map, the rotation angle of the gyrator transform, and the transform number of the Arnold transform. Therefore, the key space is large, while the key data volume is small. A numerical simulation was conducted to demonstrate the effectiveness of the proposed method, and a security analysis was performed in terms of the histogram of the encrypted image, the sensitivity to the secret keys, decryption upon ciphertext loss, and resistance to the chosen-plaintext attack.
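
    Two of the method's three ingredients, the logistic-map keystream and the Arnold permutation, are easy to sketch in NumPy (the gyrator transform itself is omitted here); the key values are illustrative:

    ```python
    import numpy as np

    def logistic_keystream(r, x0, shape):
        """Iterate the logistic map x <- r*x*(1-x) and threshold it
        into a binary keystream for the XOR step."""
        n = int(np.prod(shape))
        x, bits = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            bits[i] = 1 if x > 0.5 else 0
        return bits.reshape(shape)

    def arnold(img, iterations=1):
        """Arnold transform of an N x N array: (x, y) -> (x + y, x + 2y) mod N."""
        n = img.shape[0]
        out = img
        for _ in range(iterations):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            scrambled = np.empty_like(out)
            scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    secret = np.random.randint(0, 2, size=(64, 64)).astype(np.uint8)
    masked = secret ^ logistic_keystream(r=3.99, x0=0.4, shape=secret.shape)
    cipher = arnold(masked, iterations=5)
    ```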

  11. Relation between catalyst-assisted transformation and multiple-copy transformation for bipartite pure states

    International Nuclear Information System (INIS)

    Feng Yuan; Duan Runyao; Ying Mingsheng

    2006-01-01

    We show that in some cases, catalyst-assisted entanglement transformation cannot be implemented by multiple-copy transformation for pure states. This fact, together with the result we obtained in R. Y. Duan, Y. Feng, X. Li, and M. S. Ying, Phys. Rev. A 71, 042319 (2005), namely that the latter can be completely implemented by the former, indicates that catalyst-assisted transformation is strictly more powerful than multiple-copy transformation. For the purely probabilistic setting we find, however, these two kinds of transformations are geometrically equivalent in the sense that the sets of pure states that can be converted into a given pure state with maximal probabilities not less than a given value have the same closure, regardless of whether catalyst-assisted transformation or multiple-copy transformation is used

  12. Disc piezoelectric ceramic transformers.

    Science.gov (United States)

    Erhart, Jiří; Půlpán, Petr; Doleček, Roman; Psota, Pavel; Lédl, Vít

    2013-08-01

    In this contribution, we present our study on disc-shaped and homogeneously poled piezoelectric ceramic transformers working in planar-extensional vibration modes. Transformers are designed with electrodes divided into wedge, axisymmetrical ring-dot, moonie, smile, or yin-yang segments. Transformation ratio, efficiency, and input and output impedances were measured for low-power signals. Transformer efficiency and transformation ratio were measured as a function of frequency and impedance load in the secondary circuit. Optimum impedance for the maximum efficiency has been found. Maximum efficiency and no-load transformation ratio can reach almost 100% and 52 for the fundamental resonance of ring-dot transformers and 98% and 67 for the second resonance of 2-segment wedge transformers. Maximum efficiency was reached at optimum impedance, which is in the range from 500 Ω to 10 kΩ, depending on the electrode pattern and size. Fundamental vibration mode and its overtones were further studied using frequency-modulated digital holographic interferometry and by the finite element method. Complementary information has been obtained by the infrared camera visualization of surface temperature profiles at higher driving power.

  13. RetroTransformDB: A Dataset of Generic Transforms for Retrosynthetic Analysis

    Directory of Open Access Journals (Sweden)

    Svetlana Avramova

    2018-04-01

    Presently, software tools for retrosynthetic analysis are widely used by organic, medicinal, and computational chemists. Rule-based systems extensively use collections of retro-reactions (transforms). While there are many public datasets with reactions in the synthetic direction (usually non-generic reactions), there are no publicly available databases with generic reactions in a computer-readable format which can be used for the purposes of retrosynthetic analysis. Here we present RetroTransformDB, a dataset of transforms compiled and coded in SMIRKS line notation by us. The collection comprises more than 100 records, each including the reaction name, the SMIRKS linear notation, the functional group to be obtained, and the transform type classification. All SMIRKS transforms were tested syntactically, semantically, and from a chemical point of view in different software platforms. The overall dataset design and the retrosynthetic fitness were analyzed and curated by organic chemistry experts. The RetroTransformDB dataset may be used by open-source and commercial software packages, as well as chemoinformatics tools.
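
    SMIRKS-encoded transforms of this kind can be applied with, for example, RDKit; the retro-esterification below is an illustrative transform written for this sketch, not an entry from RetroTransformDB:

    ```python
    from rdkit import Chem
    from rdkit.Chem import AllChem

    # Illustrative retro-transform in SMIRKS: ester -> carboxylic acid + alcohol.
    retro = AllChem.ReactionFromSmarts(
        "[C:1](=[O:2])[O:3][C:4]>>[C:1](=[O:2])[OH].[OH][C:4]"
    )

    target = Chem.MolFromSmiles("CC(=O)OCC")  # ethyl acetate
    for precursors in retro.RunReactants((target,)):
        # Each product tuple is one proposed set of precursors.
        print(".".join(Chem.MolToSmiles(m) for m in precursors))
    ```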

  14. Study of allotropic transformations in plutonium; Etude des transformations allotropiques du plutonium

    Energy Technology Data Exchange (ETDEWEB)

    Spriet, B [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1966-06-01

    The allotropic transformations in plutonium have been studied by different methods: metallography, dilatometry, thermal analysis, resistivity measurements, and examination with a hot-stage microscope. In order to study the importance of purity, purification processes such as zone melting and electro-diffusion have been developed. The characteristics of the α ⇌ β transformation can be explained in terms of the influence of internal stresses on the transition temperature and on the transformation kinetics. Some particular characteristics of the δ → γ, γ → α, δ ⇌ ε, β ⇌ γ and δ → α transformations are also given. (author)

  15. TRANSFORMATIONAL LEADERSHIP – THE ART OF SUCCESSFULLY MANAGING TRANSFORMATIONAL ORGANIZATIONAL CHANGES

    Directory of Open Access Journals (Sweden)

    Ionel Scaunasu

    2012-12-01

    Companies today face new waves of challenges as they strive to remain competitive in a rapidly changing market. Transformational leadership is a key strategic approach to successfully managing transformational organizational change, the art of steering the business through the turbulence of the beginning of the 21st century. It represents a new quality of leadership that goes beyond so-called transactional management. In this sense, successfully managing transformational organizational change requires the key people in an organization (its managers) to develop the skill sets and attributes that characterize so-called transformational leaders.

  16. Transformer design principles with applications to core-form power transformers

    CERN Document Server

    Del Vecchio, Robert M; Feeney, Mary-Ellen F

    2001-01-01

    Transformer Design Principles presents the theory of transformer operation and the methods and techniques of designing them. It emphasizes the physical principles and mathematical tools for simulating transformer behavior, including modern computer techniques. The scope of the book includes types of construction, circuit analysis, mechanical aspects of design, high voltage insulation requirements, and cooling design. The authors also address test procedures and reliability methods to assure successful design and discuss the economic analysis of designs. Summarizing material currently scattered

  17. Fractional finite Fourier transform.

    Science.gov (United States)

    Khare, Kedar; George, Nicholas

    2004-07-01

    We show that a fractional version of the finite Fourier transform may be defined by using prolate spheroidal wave functions of order zero. The transform is linear and additive in its index and asymptotically goes over to Namias's definition of the fractional Fourier transform. As a special case of this definition, it is shown that the finite Fourier transform may be inverted by using information over a finite range of frequencies in Fourier space, the inversion being sensitive to noise. Numerical illustrations for both forward (fractional) and inverse finite transforms are provided.

  18. Annual Variation and Global Structures of The DE3 Tide

    International Nuclear Information System (INIS)

    Ze-Yu, Chen; Da-Ren, Lu

    2008-01-01

    The SABER/TIMED temperatures taken in 2002–2006 are used to delineate the tidal signals in the middle and upper atmosphere. A Hough mode decomposition is then applied to the DE3 tide, and the overall features of its seasonal variation and its complete global structure are obtained. The results show that the tide is most prominent at 110 km, with a maximal amplitude of > 9 K, and exhibits significant seasonal variation, with its maximum amplitude always occurring in July of each year. The Hough mode decomposition reveals that the tide is composed primarily of two leading propagating Hough modes, the (−3,3) and the (−3,4) modes, and is thus equatorially trapped. Estimates of the mean amplitudes of the Hough modes show that the (−3,3) and (−3,4) modes exhibit maxima at 110 km and 90 km, respectively. The (−3,3) mode plays the predominant role in shaping the global latitude-height structure of the tide, e.g., the vertical scale of > 50 km at the equator, and its annual course. Significant influence of the (−3,4) mode is found below 90 km, where the tide exhibits an antisymmetric structure about the equator; meanwhile, the tide at northern tropical latitudes exhibits a smaller vertical wavelength of about 30 km. (geophysics, astronomy, and astrophysics)
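
    Computationally, a Hough mode decomposition is a least-squares projection of the observed amplitude profile onto latitude-dependent basis functions. The sketch below illustrates only that fitting step; the Gaussian-shaped modes are hypothetical stand-ins, since the true (−3,3) and (−3,4) Hough functions must be obtained by solving Laplace's tidal equation.

```python
import numpy as np

lat = np.linspace(-90.0, 90.0, 181)                  # latitude grid (degrees)
# Hypothetical stand-ins for the symmetric (-3,3) and antisymmetric
# (-3,4) modes; real Hough functions come from Laplace's tidal equation.
sym = np.exp(-(lat / 25.0) ** 2)
antisym = (lat / 25.0) * np.exp(-(lat / 25.0) ** 2)

rng = np.random.default_rng(0)
observed = 7.0 * sym + 2.5 * antisym + rng.normal(0.0, 0.3, lat.size)

B = np.column_stack([sym, antisym])                  # basis matrix
amps, *_ = np.linalg.lstsq(B, observed, rcond=None)  # modal amplitudes (K)
print(amps)
```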

  19. Using the transformer oil-based nanofluid for cooling of power distribution transformer

    OpenAIRE

    Mushtaq Ismael Hasan

    2017-01-01

    The thermal behavior of an electrical distribution transformer has been studied numerically, including the effect of the surrounding air temperature. A 250 kVA distribution transformer is chosen as the study model and examined over a temperature range covering the weather conditions of hot regions. Transformer oil-based nanofluids were used as the cooling medium instead of pure transformer oil. Four types of solid particles (Cu, Al2O3, TiO2 and SiC) were used to compose nanofluids with volume fractions (1%, 3%, 5%, 7%, and ...

  20. [A preliminary research on multi-source medical image fusion].

    Science.gov (United States)

    Kang, Yuanyuan; Li, Bin; Tian, Lianfang; Mao, Zongyuan

    2009-04-01

    Multi-modal medical image fusion has important value in clinical diagnosis and treatment. In this paper, multi-resolution analysis based on the Daubechies 9/7 biorthogonal wavelet transform is introduced for anatomical and functional image fusion; a new fusion algorithm combining local standard deviation and energy as the texture measure is then presented. Finally, a set of quantitative evaluation criteria is given. Experiments show that both anatomical and metabolic information can be obtained effectively, and that both edge and texture features are preserved successfully. The presented algorithm is more effective than the traditional algorithms.
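
    The fusion step can be sketched in a few lines with PyWavelets, whose 'bior4.4' filter bank is the 9/7 biorthogonal wavelet. The max-magnitude selection rule used here is a crude stand-in for the paper's combined local standard deviation and energy texture measure.

```python
import numpy as np
import pywt

def fuse_images(anatomical, functional, wavelet='bior4.4', level=3):
    """Fuse two co-registered grayscale images in the wavelet domain."""
    ca = pywt.wavedec2(anatomical, wavelet, level=level)
    cf = pywt.wavedec2(functional, wavelet, level=level)
    fused = [(ca[0] + cf[0]) / 2.0]       # approximation band: average
    for a_bands, f_bands in zip(ca[1:], cf[1:]):
        # Detail bands: keep the coefficient with the larger magnitude
        # (stand-in for the local standard deviation / energy measure).
        fused.append(tuple(np.where(np.abs(a) >= np.abs(f), a, f)
                           for a, f in zip(a_bands, f_bands)))
    return pywt.waverec2(fused, wavelet)
```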

  1. Search for critical phenomena in Pb - Pb collisions

    CERN Document Server

    Kopytine, Mikhail L.; Boggild, H.; Boissevain, J.; Conin, L.; Dodd, J.; Erazmus, B.; Esumi, S.; Fabjan, C.W.; Ferenc, D.; Fields, D.E.; Franz, A.; Gaardhoje, J.J.; Hansen, A.G.; Hansen, O.; Hardtke, D.; Van Hecke, H.; Holzer, E.B.; Humanic, T.J.; Hummel, P.; Jacak, B.V.; Jayanti, R.; Kaimi, K.; Kaneta, M.; Kohama, T.; Leltchouk, M.; Ljubicic, A., Jr.; Lorstad, B.; Maeda, N.; Martin, L.; Medvedev, A.; Murray, M.; Ohnishi, H.; Paic, G.; Pandey, S.U.; Piuz, F.; Pluta, J.; Polychronakos, V.; Potekhin, M.; Poulard, G.; Reichhold, D.; Sakaguchi, A.; Schmidt-Sorensen, J.; Simon-Gillo, J.; Sondheim, W.; Sugitate, T.; Sullivan, J.P.; Sumi, Y.; Willis, W.J.; Wolf, K.L.; Xu, N.; Zachary, D.S.; Kopytine, Mikhail

    2001-01-01

    NA44 uses a 512-channel Si pad array covering 1.5 < η < 3.3 to study charged hadron production in Pb+Pb collisions at the CERN SPS. We apply a multiresolution analysis, based on a Discrete Wavelet Transformation, to probe the texture of particle distributions event-by-event, by simultaneous localization of features in space and scale. Scanning a broad range of multiplicities, we look for a possible critical behaviour in the power spectra of local density fluctuations. The data are compared with detailed simulations of detector response, using heavy ion event generators, and with a reference sample created via event mixing.
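
    The texture analysis amounts to transforming each event's binned particle density with a discrete wavelet transform and accumulating the power, i.e. the mean squared detail coefficient, scale by scale. A minimal sketch with PyWavelets, using Poisson noise as a stand-in for one event's pad occupancies:

```python
import numpy as np
import pywt

def dwt_power_spectrum(density, wavelet='haar'):
    """Mean squared detail coefficient at each scale (coarse to fine)."""
    coeffs = pywt.wavedec(density, wavelet)
    return [float(np.mean(d ** 2)) for d in coeffs[1:]]  # skip approximation

rng = np.random.default_rng(3)
event = rng.poisson(lam=20.0, size=512).astype(float)   # synthetic event
print(dwt_power_spectrum(event))
```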

  2. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    Science.gov (United States)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10s of cms) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the potential to be able to overlay different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs and in the case of HiRISE these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen scientist validation of such data mining processed results. An example of the iMars website will be shown along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  3. Quantized Bogoliubov transformations

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1984-01-01

    The boson mapping of single fermion operators in a situation dominated by the pairing force gives rise to a transformation that can be considered a quantized version of the Bogoliubov transformation. This transformation can also be obtained as an exact special case of operators constructed from an approximate treatment of particle number projection, suggesting a method of obtaining the boson mapping in cases more complicated than that of pairing force domination

  4. Transformation on Abandonment

    DEFF Research Database (Denmark)

    Krag, Mo Michelsen Stochholm

    2016-01-01

    This paper outlines a research project on the increasing quantity of abandoned houses in the depopulating rural villages, and it reports on how an attempt is made to establish a counter-practice of radical preservation based on a series of full-scale transformations of abandoned buildings. The aim of the transformations is to reveal and preserve material and immaterial values such as aspects of cultural heritage, local narratives, and building density. The responses of local people are used as a feedback mechanism and considered an important impact indicator. Eleven transformations of varying strategies have … houses. Transformation prototypes are tested as present manifestations in rural villages as an alternative way to preserve buildings as well as memories.

  5. Entropy of Baker's Transformation

    Institute of Scientific and Technical Information of China (English)

    栾长福

    2003-01-01

    Four theorems about four different kinds of entropies for Baker's transformation are presented. The Kolmogorov entropy of Baker's transformation is sensitive to the initial flips over time. The topological entropy of Baker's transformation is found to be log k. The conditions for a state of Baker's transformation to be forbidden are also derived. The relations among the Shannon, Kolmogorov, topological and Boltzmann entropies are discussed in detail.
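
    As a concrete illustration, the binary baker's transformation stretches the unit square horizontally and stacks the two halves; its topological entropy is log 2, the k = 2 case of the log k result quoted above. A minimal sketch:

```python
def baker(x, y):
    """One iteration of the binary baker's transformation on [0,1)^2."""
    if x < 0.5:
        return 2.0 * x, 0.5 * y              # left half: stretch, compress
    return 2.0 * x - 1.0, 0.5 * y + 0.5      # right half: stack on top

# Each step shifts the binary expansion of x by one digit, which is
# why entropy accumulates at a rate of log 2 per step.
pt = (0.3, 0.7)
for _ in range(5):
    pt = baker(*pt)
print(pt)
```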

  6. Kinetics of first order phase transformation in metals and alloys. Isothermal evolution in martensite transformation

    International Nuclear Information System (INIS)

    Iwasaki, Hiroshi; Ohshima, Ken-ichi

    2011-01-01

    The 11th lecture on microstructures and fluctuations in solids reports on the martensitic phase transformation of alkali metals and alloys. The martensitic transformation is a diffusionless first-order phase transformation. Martensitic transformations are classified into two types with respect to kinetics: isothermal and athermal. The former depends on both temperature and time, whereas the latter depends solely on temperature. An isothermal transformation does not have a definite transformation start temperature but occurs after some finite incubation time during isothermal holding. The isothermal martensitic transformation changes to an athermal one under high magnetic field, and the reverse change occurs under the application of hydrostatic pressure; the former phenomenon was observed in Fe-Ni-Mn and Fe-Ni-Cr alloys, and the latter in Fe-3.1at%Ni-0.5at%Mn alloys. Athermal transformations were observed in Li and Na metals at 73 and 36 K, respectively. A neutron diffraction study has been performed on single crystals of metallic Na. On cooling the virgin sample, the incubation time to transform from the bcc structure to the low-temperature structure (9R structure) is found to be more than 2 h at 38 K, 2 K above the transformation temperature of 36 K. The full width at half maximum of the Bragg reflection suddenly increased, owing to deformation introduced by the nucleation of the low-temperature structure. In relation to this deformation, strong extra diffuse scattering (Huang scattering) was observed around the Bragg reflection in addition to thermal diffuse scattering. The kinetics of the martensitic transformation in In-Tl alloys has been studied by X-ray and neutron diffraction methods; a characteristic incubation time appeared at fixed temperatures above Ms, the normal martensitic transformation start temperature. (author)

  7. Transformers analysis, design, and measurement

    CERN Document Server

    Lopez-Fernandez, Xose M; Turowski, Janusz

    2012-01-01

    This book focuses on contemporary economic, design, diagnostics, and maintenance aspects of power, instrument, and high frequency transformers, which are critical to the designers of transformer stations. The text covers such topics as shell-type and superconducting transformers as well as coreless PCB and planar transformers. It emphasizes challenges and strategies in transformer design and illustrates the importance of economics in transformer management by reviewing life cycle cost design and the use of decision methods to manage risk.

  8. Conduction-coupled Tesla transformer.

    Science.gov (United States)

    Reed, J L

    2015-03-01

    A proof-of-principle Tesla transformer circuit is introduced. The new transformer exhibits the high voltage-high power output signal of shock-excited transformers. The circuit, with specification of proper circuit element values, is capable of obtaining extreme oscillatory voltages. The primary and secondary portions of the circuit communicate solely by conduction. The destructive arcing between the primary and secondary inductors in electromagnetically coupled transformers is ubiquitous. Flashover is eliminated in the new transformer as the high-voltage inductors do not interpenetrate and so do not possess an annular volume of electric field. The inductors are remote from one another. The high voltage secondary inductor is isolated in space, except for a base feed conductor, and obtains earth by its self-capacitance to the surroundings. Governing equations, for the ideal case of no damping, are developed from first principles. Experimental, theoretical, and circuit simulator data are presented for the new transformer. Commercial high-temperature superconductors are discussed as a means to eliminate the counter-intuitive damping due to small primary inductances in both the electromagnetic-coupled and new conduction-coupled transformers.

  9. Integral transformations applied to image encryption

    International Nuclear Information System (INIS)

    Vilardy, Juan M.; Torres, Cesar O.; Perez, Ronal

    2017-01-01

    In this paper we consider the application of integral transformations to image encryption through optical systems; a mathematical algorithm for digital image encryption using the fractional Fourier transform (FrFT) and random phase masks (RPMs) is implemented on the Matlab platform. The FrFT can be related to other integral transforms, such as the Fourier transform, sine and cosine transforms, radial Hilbert transform, fractional sine transform, fractional cosine transform, fractional Hartley transform, fractional wavelet transform and gyrator transform, among others. The encryption scheme is based on the use of the FrFT, the joint transform correlator and two RPMs, which provide security and robustness to the implemented security system. One of the RPMs used during encryption-decryption and the fractional order of the FrFT are the keys that improve security and make the system more resistant to security attacks. (paper)
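
    The structure of the scheme can be illustrated with double random phase encoding, here written with the ordinary Fourier transform (the order-one special case of the FrFT; in the paper the fractional order itself acts as an additional key). A minimal NumPy sketch:

```python
import numpy as np

def drpe_encrypt(img, mask1, mask2):
    """Double random phase encoding: mask in the input plane, FFT,
    mask in the Fourier plane, inverse FFT -> complex ciphertext."""
    field = img * np.exp(2j * np.pi * mask1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * mask2))

def drpe_decrypt(cipher, mask1, mask2):
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * mask2)
    return np.abs(np.fft.ifft2(spectrum) * np.exp(-2j * np.pi * mask1))

rng = np.random.default_rng(42)
img = rng.random((128, 128))                      # stand-in image
m1, m2 = rng.random(img.shape), rng.random(img.shape)
assert np.allclose(drpe_decrypt(drpe_encrypt(img, m1, m2), m1, m2), img)
```

    Decrypting with a wrong Fourier-plane mask yields noise-like output, which is what gives the masks (and, in the fractional case, the transform order) their role as keys.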

  10. Thin-Film Power Transformers

    Science.gov (United States)

    Katti, Romney R.

    1995-01-01

    Transformer core made of thin layers of insulating material interspersed with thin layers of ferromagnetic material. Flux-linking conductors made of thinner nonferromagnetic-conductor/insulator multilayers wrapped around core. Transformers have geometric features finer than those of transformers made in customary way by machining and mechanical pressing. In addition, some thin-film materials exhibit magnetic-flux-carrying capabilities superior to those of customary bulk transformer materials. Suitable for low-cost, high-yield mass production.

  11. Transformation language integration based on profiles and higher order transformations

    NARCIS (Netherlands)

    Van Gorp, P.M.E.; Keller, A.; Janssens, D.; Gaševic, D.; Lämmel, R.; Van Wyk, Eric

    2009-01-01

    For about two decades, researchers have been constructing tools for applying graph transformations on large model transformation case studies. Instead of incrementally extending a common core, these competitive tool builders have repeatedly reconstructed mechanisms that were already supported by

  12. Electrostatic shielding of transformers

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Francisco

    2017-11-28

    Toroidal transformers are currently used only in low-voltage applications. There is no published experience of toroidal transformer design at distribution-level voltages. Toroidal transformers are provided with electrostatic shielding to make high-voltage applications possible and to withstand the impulse test.

  13. Computerized detection of pneumothorax on digital chest radiographs

    International Nuclear Information System (INIS)

    Sanada, S.; Doi, K.; MacMahon, H.; Montner, S.M.

    1990-01-01

    This paper reports on pneumothoraces, clinically important abnormalities that usually appear as a subtle, fine line pattern on chest radiographs. We are developing a computer vision system for automated detection of pneumothorax to aid radiologists' diagnosis. Chest images were digitized with a 0.175-mm pixel size, yielding a 2,000 x 2,430 matrix size and 10 bits of gray scale. After identification of the lung regions, an edge detection filter was employed in the apical areas to enhance a pneumothorax pattern. Ribs were detected with a technique based on statistical analysis of edge gradients and their orientations. Points located on a curved line suggestive of a pneumothorax in this enhanced image were detected with a Hough transform.
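
    A rough modern equivalent of the enhancement-plus-Hough step can be sketched with OpenCV; the thresholds and the assumption of an already-cropped apical region of interest are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def detect_line_candidates(apex_roi):
    """Find fine-line candidates in an 8-bit grayscale apical crop.

    Canny stands in for the edge enhancement filter; HoughLines then
    votes for (rho, theta) line parameters. Thresholds are illustrative.
    """
    edges = cv2.Canny(apex_roi, 30, 90)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
    return [] if lines is None else [tuple(l) for l in lines[:, 0]]
```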

  14. A Container Horizontal Positioning Method with Image Sensors for Cranes in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    FU Yonghua

    2014-03-01

    Full Text Available Automation is a trend for large container terminals nowadays, and container positioning techniques are a key factor in the automating process. Vision-based positioning techniques are inexpensive and rather accurate in nature, although their performance under insufficient illumination remains in question. This paper proposes a vision-based procedure with image sensors to determine the position of a container in the horizontal plane. The points found by the edge detection operator are clustered, and only the peak points in the parameter space of the Hough transformation are selected, so that the effect of noise is greatly decreased. The effectiveness of our procedure is verified in experiments, in which the efficiency of the procedure is also investigated.

  15. Visual Servoing of Mobile Microrobot with Centralized Camera

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2018-01-01

    Full Text Available In this paper, a mechanism of visual servoing for a mobile microrobot with a centralized camera is developed, especially for the development of swarm AI applications. In the field of microrobotics the robots are very small and their movements are small as well. By replacing the various sensors that are needed with a single centralized vision sensor, we can eliminate many components and the need for calibration on every robot. A study and design for a visual servoing mobile microrobot has been developed. The system uses multi-object tracking and the Hough transform to identify the positions of the robots, and can control multiple robots at once with an accuracy of 5-6 pixels from the desired target.
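
    The circle detection step can be sketched with OpenCV's circular Hough transform; all parameter values below are illustrative and would need tuning to the camera and marker size actually used.

```python
import cv2

def find_markers(frame_gray):
    """Detect circular robot markers; returns (x, y, r) in pixels."""
    blurred = cv2.medianBlur(frame_gray, 5)      # suppress pixel noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=30, param1=100, param2=30,
                               minRadius=5, maxRadius=40)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in circles[0]]
```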

  16. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of their semantic information from onboard camera videos. The proposed method first detects the edges of lanes from the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometrical position of each lane, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
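
    A minimal version of the detect-then-fit stage, using OpenCV's probabilistic Hough transform; the simple slope filter below is an illustrative stand-in for the paper's gradient-direction edge selection.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_gray, min_slope=0.4):
    """Return probabilistic-Hough segments steep enough to be lanes."""
    edges = cv2.Canny(frame_gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                           minLineLength=40, maxLineGap=20)
    lanes = []
    for x1, y1, x2, y2 in ([] if segs is None else segs[:, 0]):
        if x2 != x1 and abs((y2 - y1) / (x2 - x1)) >= min_slope:
            lanes.append((x1, y1, x2, y2))   # discard near-horizontal clutter
    return lanes
```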

  17. Pattern recognition with vector hits

    International Nuclear Information System (INIS)

    Frühwirth, R

    2012-01-01

    Trackers at the future high-luminosity LHC, designed to have triggering capability, will feature layers of stacked modules with a small stack separation. This will allow the reconstruction of track stubs or vector hits with position and direction information, but lacking precise curvature information. This opens up new possibilities for track finding, online and offline. Two track finding methods, the Kalman filter and the convergent Hough transform are studied in this context. Results from a simplified fast simulation are presented. It is shown that the performance of the methods depends to a large extent on the size of the stack separation. We conclude that the detector design and the choice of the track finding algorithm(s) are strongly coupled and should proceed conjointly.

  18. Restoration of high-resolution AFM images captured with broken probes

    Science.gov (United States)

    Wang, Y. F.; Corrigan, D.; Forman, C.; Jarvis, S.; Kokaram, A.

    2012-03-01

    A type of artefact is induced by damage to the scanning probe when the Atomic Force Microscope (AFM) captures a material surface structure with nanoscale resolution. This artefact takes the form of a dramatic distortion rather than the traditional blurring artefacts. In practice, it is not easy to prevent damage to the scanning probe. However, by using natural image deblurring techniques from the image processing domain, a comparatively reliable estimate of the real sample surface structure can be generated. This paper introduces a novel Hough transform technique as well as a Bayesian deblurring algorithm to remove this type of artefact. The deblurring succeeds at removing the artefacts from the AFM images, and the details of the fibril surface topography are well preserved.
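
    The Bayesian deblurring stage can be approximated with Richardson-Lucy deconvolution (itself a Bayesian maximum-likelihood scheme) from scikit-image. The Gaussian kernel below is an assumed stand-in for the broken probe's point spread function, which in the paper is what the Hough-based stage helps to characterize.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def deblur_afm(image, probe_sigma=2.0, n_iter=30):
    """Richardson-Lucy restoration of an AFM image scaled to [0, 1],
    with a Gaussian stand-in for the probe's point spread function."""
    r = np.arange(-8, 9)
    g = np.exp(-(r ** 2) / (2.0 * probe_sigma ** 2))
    psf = np.outer(g, g)
    psf /= psf.sum()                             # normalize the kernel
    return richardson_lucy(image, psf, n_iter)   # 3rd arg: iteration count
```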

  19. Logarithmic Transformations in Regression: Do You Transform Back Correctly?

    Science.gov (United States)

    Dambolena, Ismael G.; Eriksen, Steven E.; Kopcso, David P.

    2009-01-01

    The logarithmic transformation is often used in regression analysis for a variety of purposes such as the linearization of a nonlinear relationship between two or more variables. We have noticed that when this transformation is applied to the response variable, the computation of the point estimate of the conditional mean of the original response…
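
    The pitfall in question: exponentiating the fitted log-scale mean estimates the conditional median, not the conditional mean; under normal errors the mean requires an extra factor exp(s²/2). A minimal NumPy illustration on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, 500)
y = np.exp(1.0 + 0.5 * np.log(x) + rng.normal(0.0, 0.4, x.size))

# Fit log(y) = b0 + b1 * log(x) + eps by ordinary least squares.
X = np.column_stack([np.ones_like(x), np.log(x)])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta
s2 = resid @ resid / (len(y) - 2)        # residual variance on the log scale

x0 = 5.0
naive = np.exp(beta[0] + beta[1] * np.log(x0))   # estimates the median: biased low
corrected = naive * np.exp(s2 / 2.0)             # lognormal mean correction
print(naive, corrected)
```

    With an error standard deviation of 0.4 on the log scale, the naive back-transform under-estimates the conditional mean by the factor exp(0.4^2 / 2) ≈ 1.083, i.e. roughly 8%.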

  20. A new perspective on relativistic transformation: formulation of the differential Lorentz transformation based on first principles

    International Nuclear Information System (INIS)

    Huang, Young-Sea

    2010-01-01

    The differential Lorentz transformation is formulated solely from the principle of relativity and the invariance of the speed of light. The differential Lorentz transformation transforms physical quantities, instead of space-time coordinates, to keep laws of nature form-invariant among inertial frames. The new relativistic transformation fulfills the principle of relativity, whereas the usual Lorentz transformation of space-time coordinates does not. Furthermore, the new relativistic transformation is compatible with quantum mechanics. The formulation herein provides theoretical foundations for the differential Lorentz transformation as the fundamental relativistic transformation.

  1. Correspondence between quantum-optical transform and classical-optical transform explored by developing Dirac's symbolic method

    Science.gov (United States)

    Fan, Hong-yi; Hu, Li-yun

    2012-06-01

    By virtue of the new technique of performing integration over Dirac's ket-bra operators, we explore the quantum optical versions of classical optical transformations such as the optical Fresnel transform, Hankel transform, fractional Fourier transform, Wigner transform, wavelet transform and Fresnel-Hadamard combinatorial transform, among others. In this way one may benefit the development of classical optics theory from research in quantum optics, or vice versa. We can not only find new quantum mechanical unitary operators that correspond to known optical transformations, deriving a new theorem for calculating the quantum tomogram of density operators, but also reveal new classical optical transformations. For example, we find the generalized Fresnel operator (GFO) to correspond to the generalized Fresnel transform (GFT) in classical optics. We derive the GFO's normal product form and its canonical coherent state representation and find that the GFO is the loyal representation of the symplectic group multiplication rule. We show that the GFT is just the transformation matrix element of the GFO in the coordinate representation, such that the composition of two successive GFTs is still a GFT. The ABCD rule of Gaussian beam propagation is demonstrated directly in the context of quantum optics. In particular, the introduction of quantum mechanical entangled state representations opens up a new area in finding new classical optical transformations. The complex wavelet transform and the condition on the mother wavelet are also studied in the context of quantum optics. Throughout our discussions, the coherent state, the entangled state representation of the two-mode squeezing operators and the technique of integration within an ordered product (IWOP) of operators are fully used. All these have confirmed Dirac's assertion: "...for a quantum dynamic system that has a classical analogue, unitary transformation in the quantum theory is the analogue of contact transformation in the classical theory".

  2. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    Directory of Open Access Journals (Sweden)

    Hamza Alzarok

    2017-01-01

    Full Text Available The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two requirements in real-world applications is very challenging, because more accurate tracking often requires longer processing times, while quicker responses are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, with image averaging filters, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the

  3. A vision-based automated guided vehicle system with marker recognition for indoor use.

    Science.gov (United States)

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer-grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers containing a capital letter or a triangle indicating direction. The markers are very easy to produce, manipulate, and maintain, and the marker information is used to guide the vehicle. We use hue and saturation values in the image to extract marker candidates. When a fiduciary marker of known size is detected by using a bird's-eye view and the Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used. The probability of a feature match is calculated using the distance transform, and the feature with the highest probability is selected as the captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary markers show that the proposed method is a solution for an indoor AGV system.
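
    The colour-based candidate extraction and the distance transform used for character matching can both be sketched with OpenCV; the HSV bounds below are illustrative placeholders, not the ranges tuned for the paper's markers.

```python
import cv2
import numpy as np

def marker_candidates(bgr, lo=(35, 80, 80), hi=(85, 255, 255)):
    """Threshold on hue/saturation, then compute a distance map.

    Comparing such distance maps against stored templates yields the
    feature-matching probability described in the paper.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
    return mask, dist
```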

  4. Glossary of terms used in image processing for radiologists

    International Nuclear Information System (INIS)

    Rolland, Y.; Ramee, A.; Morcet, N.; Collorec, R.; Bruno, A.; Haigron, P.

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, Spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing. (Authors). 41 figs

  5. Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network

    Science.gov (United States)

    Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu

    2018-03-01

    The operation of the power grid will inevitably be affected by the increasing scale of wind farms, owing to the inherent randomness and uncertainty of wind, so accurate wind speed forecasting is critical for stable grid operation. Traditional forecasting methods typically do not take the frequency characteristics of wind speed into account and cannot reflect the nature of changes in the wind speed signal, which results in low generalization ability of the model structure. An AdaBoost neural network, combined with a multi-resolution, multi-scale decomposition of the wind speed, is proposed for designing the model structure in order to improve the forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province demonstrates that the proposed strategy improves the robustness and accuracy of the forecast.
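
    The decomposition-then-forecast structure can be sketched as below. Scikit-learn's AdaBoostRegressor boosts decision trees rather than the neural networks used in the paper, so this is only a structural stand-in: each wavelet sub-band gets its own model, and the per-band forecasts are summed.

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostRegressor

def wavelet_components(series, wavelet='db4', level=3):
    """Per-scale components that sum back to the original series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(sel, wavelet)[:len(series)])
    return comps

def lagged(a, p=6):
    """Lag-matrix of the last p values and the aligned targets."""
    X = np.column_stack([a[i:len(a) - p + i] for i in range(p)])
    return X, a[p:]

def forecast_one_step(series, p=6):
    """Sum of one-step-ahead forecasts, one AdaBoost model per band."""
    pred = 0.0
    for comp in wavelet_components(series):
        X, y = lagged(comp, p)
        model = AdaBoostRegressor(random_state=0).fit(X[:-1], y[:-1])
        pred += model.predict(X[-1:])[0]
    return pred
```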

  6. Stepwise transformation behavior of the strain-induced martensitic transformation in a metastable stainless steel

    International Nuclear Information System (INIS)

    Hedstroem, Peter; Lienert, Ulrich; Almer, Jon; Oden, Magnus

    2007-01-01

    In situ high-energy X-ray diffraction during tensile loading has been used to investigate the evolution of lattice strains and the accompanying strain-induced martensitic transformation in cold-rolled sheets of a metastable stainless steel. At high applied strains the transformation to α-martensite occurs in stepwise bursts. These stepwise transformation events are correlated with stepwise increased lattice strains and peak broadening in the austenite phase. The stepwise transformation arises from growth of α-martensite embryos by autocatalytic transformation

  7. Optimal oscillation-center transformations

    International Nuclear Information System (INIS)

    Dewar, R.L.

    1984-08-01

    A variational principle is proposed for defining that canonical transformation, continuously connected with the identity transformation, which minimizes the residual, coordinate-dependent part of the new Hamiltonian. The principle is based on minimization of the mean-square generalized force. The transformation reduces to the action-angle transformation in that part of the phase space of an integrable system where the orbit topology is that of the unperturbed system, or on primary KAM surfaces. General arguments in favor of this definition are given, based on Galilean invariance, decay of the Fourier spectrum, and its ability to include external fields or inhomogeneous systems. The optimal oscillation-center transformation for the physical pendulum, or particle in a sinusoidal potential, is constructed

  8. High pressure phase transformations revisited

    Science.gov (United States)

    Levitas, Valery I.

    2018-04-01

    High pressure phase transformations play an important role in the search for new materials and material synthesis, as well as in geophysics. However, they are poorly characterized, and phase transformation pressure and pressure hysteresis vary drastically in experiments of different researchers, with different pressure transmitting media, and with different material suppliers. Here we review the current state, challenges in studying phase transformations under high pressure, and the possible ways in overcoming the challenges. This field is critically compared with fields of phase transformations under normal pressure in steels and shape memory alloys, as well as plastic deformation of materials. The main reason for the above mentioned discrepancy is the lack of understanding that there is a fundamental difference between pressure-induced transformations under hydrostatic conditions, stress-induced transformations under nonhydrostatic conditions below yield, and strain-induced transformations during plastic flow. Each of these types of transformations has different mechanisms and requires a completely different thermodynamic and kinetic description and experimental characterization. In comparison with other fields the following challenges are indicated for high pressure phase transformation: (a) initial and evolving microstructure is not included in characterization of transformations; (b) continuum theory is poorly developed; (c) heterogeneous stress and strain fields in experiments are not determined, which leads to confusing material transformational properties with a system behavior. Some ways to advance the field of high pressure phase transformations are suggested. The key points are: (a) to take into account plastic deformations and microstructure evolution during transformations; (b) to formulate phase transformation criteria and kinetic equations in terms of stress and plastic strain tensors (instead of pressure alone); (c) to develop multiscale continuum

  10. Transformer oil maintenance

    Energy Technology Data Exchange (ETDEWEB)

    White, J. [A.F. White Ltd., Brantford, ON (Canada)

    2002-08-01

    Proactive treatment is required in the case of transformer oil, since the oil degrades over time, which could result in failure of the transformer or costly repairs. A mineral-based oil is used for transformers because of its chemical properties and dielectric strength. Water and particulates are the main contaminants found in transformer oil, affecting the quality of the oil through reduced insulation. The formation of acid as the oil reacts with oxygen is called oxidation. It reduces the heat dissipation of the transformer, as the acid forms sludge which settles on the windings of the transformer. The first step in the preventive maintenance program associated with transformer oil is the testing of the oil. A baseline is established through initial testing, and subsequent annual testing identifies any changes. The minimal requirements are: (1) dielectric breakdown, a measure of the voltage conducted by the oil; (2) neutralization/acid number, which detects the level of acid present in the oil; (3) interfacial tension, which identifies the presence of polar compounds; (4) colour, which indicates quality, aging and the presence of contaminants; and (5) water, which decreases the dielectric breakdown voltage. The analysis of the gases present in the oil, accomplished through Dissolved Gas Analysis (DGA), is another useful tool in a maintenance program for determining possible faults such as arcing, corona or overheated connections. Remediation treatment includes upgrading the oil. Ideally, reclamation should be performed in the early stages of acid buildup, before sludging occurs. Onsite reclamation includes Fuller's earth processing and degasification, a process briefly described by the author.

  11. Coaxial pulse matching transformer

    International Nuclear Information System (INIS)

    Ledenev, V.V.; Khimenko, L.T.

    1986-01-01

    This paper describes a coaxial pulse matching transformer with comparatively simple design, increased mechanical strength, and low stray inductance. The transformer design makes it easy to change the turns ratio. The circuit of the device and an expression for the current multiplication factor are presented; experiments confirm the efficiency of the transformer. Apparatus with a coaxial transformer for producing high-power pulsed magnetic fields is designed (current pulses of 1-10 MA into a load and a natural frequency of 100 kHz)

  12. Generalized field-transforming metamaterials

    International Nuclear Information System (INIS)

    Tretyakov, Sergei A; Nefedov, Igor S; Alitalo, Pekka

    2008-01-01

    In this paper, we introduce a generalized concept of field-transforming metamaterials, which perform field transformations defined as linear relations between the original and transformed fields. These artificial media change the fields in a prescribed fashion in the volume occupied by the medium. We show which electromagnetic properties of the transforming medium are required. The coefficients of these linear functions can be arbitrary scalar functions of position and frequency, which makes the approach quite general and opens up the possibility of realizing various unusual devices.

  13. Transformers and motors

    CERN Document Server

    Shultz, George

    1991-01-01

    Transformers and Motors is an in-depth technical reference which was originally written for the National Joint Apprenticeship Training Committee to train apprentice and journeymen electricians. This book provides detailed information for equipment installation and covers equipment maintenance and repair. The book also includes troubleshooting and replacement guidelines, and it contains a minimum of theory and math. In this easy-to-understand, practical sourcebook, you'll discover: * explanations of the fundamental concepts of transformers and motors; * transformer connections and d

  14. The transformation techniques in path integration

    International Nuclear Information System (INIS)

    Inomata, A.

    1989-01-01

    In this paper general remarks are made concerning the time transformation techniques in path integration and their implementations. Time transformations may be divided into two classes: global (integrable) time transformations and local (nonintegrable) time transformations. Although a brief account of global time transformations is given, attention is focused on local transformations. First, time transformations in the classical Kepler problem are reviewed. Then, problems encountered in implementing a local time transformation in quantum mechanics are analyzed. Several propositions pertinent to the implementation of local time transformations, particularly those basic to the local time rescaling trick in a discretized path integral, are presented.

  15. Comparing relational model transformation technologies: implementing Query/View/Transformation with Triple Graph Grammars

    DEFF Research Database (Denmark)

    Greenyer, Joel; Kindler, Ekkart

    2010-01-01

    and for model-based software engineering approaches in general. QVT (Query/View/Transformation) is the transformation technology recently proposed for this purpose by the OMG. TGGs (Triple Graph Grammars) are another transformation technology proposed in the mid-nineties, used for example in the FUJABA CASE...

  16. Phase transformation and diffusion

    CERN Document Server

    Kale, G B; Dey, G K

    2008-01-01

    Given that the basic purpose of all research in materials science and technology is to tailor the properties of materials to suit specific applications, phase transformations are the natural key to the fine-tuning of the structural, mechanical and corrosion properties. A basic understanding of the kinetics and mechanisms of phase transformation is therefore of vital importance. Apart from a few cases involving crystallographic martensitic transformations, all phase transformations are mediated by diffusion. Thus, proper control and understanding of the process of diffusion during nucleation, g

  17. Transformation of Digital Ecosystems

    DEFF Research Database (Denmark)

    Henningsson, Stefan; Hedman, Jonas

    2014-01-01

    In digital ecosystems, the fusion relation between business and technology means that the decision about the technical compatibility of the offering is also the decision about how to position the firm relative to the coopetitive relations that characterize business ecosystems. In this article we develop the Digital Ecosystem Technology Transformation (DETT) framework for explaining technology-based transformation of digital ecosystems by integrating theories of business and technology ecosystems. The framework depicts ecosystem transformation as distributed and emergent from micro-, meso-, and macro-level coopetition. The DETT framework constitutes an alternative to the existing explanations of digital ecosystem transformation as the rational management of one central actor balancing ecosystem tensions. We illustrate the use of the framework by a case study of transformation in the digital payment ecosystem...

  18. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS

    Directory of Open Access Journals (Sweden)

    Мария Васильевна Комова

    2013-09-01

    Full Text Available Social transformation is considered as a process of qualitative change in a society, creating a new level of organization in all areas of life, in different social formations and in societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on understanding them as the consequence of information exchange processes in the society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis, and comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of the transformation of society as well. The information model of social transformations is based on the definition of a society's transformation as the change in the information that functions in the society's information space. The study of social transformations is the study of information flows circulating in the society and characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by factors representing the whole complex of material objects. The integrated information model of social transformations envisages the interaction of the following components: social memory, information space, and the social ideal. To determine the dynamics and intensity of social transformations the author uses the notions of "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold makes it possible to simulate the course of social development, to predict the

  19. Wavelets in scientific computing

    DEFF Research Database (Denmark)

    Nielsen, Ole Møller

    1998-01-01

    Part I deals with compactly supported wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, namely the fast wavelet transform (FWT). We give estimates for the approximation characteristics of wavelets and demonstrate how and why the FWT can be used as a front-end for efficient image compression schemes. Part II deals with vector-parallel implementations of several variants of the Fast Wavelet Transform. We develop an efficient and scalable parallel algorithm for the FWT and derive a model for its performance. Part III is an investigation of the potential for using the special properties of wavelets for solving partial differential equations numerically. Several approaches are identified and two of them are described in detail. The algorithms developed are applied to the nonlinear Schrödinger equation and Burgers' equation...

  20. From Fourier analysis to wavelets

    CERN Document Server

    Gomes, Jonas

    2015-01-01

    This text introduces the basic concepts of function spaces and operators, both from the continuous and discrete viewpoints.  Fourier and Window Fourier Transforms are introduced and used as a guide to arrive at the concept of Wavelet transform.  The fundamental aspects of multiresolution representation, and its importance to function discretization and to the construction of wavelets is also discussed. Emphasis is given on ideas and intuition, avoiding the heavy computations which are usually involved in the study of wavelets.  Readers should have a basic knowledge of linear algebra, calculus, and some familiarity with complex analysis.  Basic knowledge of signal and image processing is desirable. This text originated from a set of notes in Portuguese that the authors wrote for a wavelet course on the Brazilian Mathematical Colloquium in 1997 at IMPA, Rio de Janeiro.

  1. Wavelet Based Denoising for the Estimation of the State of Charge for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    2018-05-01

    Full Text Available In practical electric vehicle applications, noise in the original discharging/charging voltage (DCV) signals is inevitable; it comes from electromagnetic interference and the measurement noise of the sensors. To solve this problem, a Discrete Wavelet Transform (DWT) based state of charge (SOC) estimation method is proposed in this paper. Through a multi-resolution analysis, the original noisy DCV signals are decomposed into different frequency sub-bands. The desired de-noised DCV signals are then reconstructed by the inverse discrete wavelet transform, based on the SURE rule. With the de-noised DCV signal, the SOC and the parameters are obtained using the adaptive extended Kalman filter algorithm and the adaptive forgetting factor recursive least squares method. Simulation and experimental results show that the SOC estimation error is less than 1%, which indicates an effective improvement in SOC estimation accuracy.
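
    The sub-band thresholding step can be sketched with PyWavelets. For brevity this uses the universal (VisuShrink) threshold with a median-based noise estimate as a stand-in for the SURE rule applied in the paper.

```python
import numpy as np
import pywt

def denoise_dcv(signal, wavelet='db4', level=4):
    """Soft-threshold wavelet denoising of a 1-D DCV signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale, finest band
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]
```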

  2. Laplace transforms essentials

    CERN Document Server

    Shafii-Mousavi, Morteza

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Laplace Transforms includes the Laplace transform, the inverse Laplace transform, special functions and properties, applications to ordinary linear differential equations, Fourier tr

  3. Sustainable transformation

    DEFF Research Database (Denmark)

    Andersen, Nicolai Bo

    This paper is about sustainable transformation, with a particular focus on listed buildings. It is based on the notion that sustainability is not just a question of energy conditions, but also of the building being robust. Robust architecture means that the building can be maintained and rebuilt, that it can be adapted to changing functional needs, and that it has an architectural and cultural value. A specific proposal for a transformation that enhances the architectural qualities and building heritage values of an existing building forms the empirical material, which is discussed using different theoretical lenses. It is proposed that three parameters concerning the 'transformability' of the building can contribute to a more nuanced understanding of sustainable transformation: technical aspects, programmatic requirements and narrative value. It is proposed that the concept of 'sustainable...

  4. Identity transformation

    DEFF Research Database (Denmark)

    Neergaard, Helle; Robinson, Sarah; Jones, Sally

    This paper develops the concept of 'pedagogical nudging' and examines four interventions in an entrepreneurship classroom and the potential they have for student identity transformation. Pedagogical nudging is positioned as a tool which, in the hands of a reflective professional, ... assists students in straddling the divide between identities, the emotions and tensions this elicits, and can (iv) transform student understanding. The study incorporates ..., as well as the resources students have when they come to the classroom. It also incorporates perspectives from (ii) transformational learning and explores the concept of (iii) nudging from a pedagogical viewpoint, proposing it as an important tool in entrepreneurship education. We extend nudging theory into new territory: pedagogical nudging techniques may be able to unlock doors and bring our students beyond the unacknowledged...

  5. A high order multi-resolution solver for the Poisson equation with application to vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Spietz, Henrik Juul; Walther, Jens Honore

    A high order method is presented for solving the Poisson equation subject to mixed free-space and periodic boundary conditions by using fast Fourier transforms (FFT). The high order convergence is achieved by deriving mollified Green’s functions from a high order regularization function which...
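
    For the purely periodic case the FFT approach reduces to a few lines: transform the right-hand side, divide by -|k|^2, and transform back. The sketch below omits the paper's mollified Green's functions, which are what handle the mixed free-space/periodic conditions and give the high order.

```python
import numpy as np

def poisson_periodic(f, L=2.0 * np.pi):
    """Solve nabla^2 u = f on a periodic square of side L via FFT.

    f must have zero mean for a periodic solution to exist; the k = 0
    mode is set to zero, selecting the zero-mean solution.
    """
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx ** 2 + ky ** 2
    fh = np.fft.fft2(f)
    uh = np.zeros_like(fh)
    nz = k2 != 0
    uh[nz] = -fh[nz] / k2[nz]
    return np.real(np.fft.ifft2(uh))
```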

  6. Life cycle of transformer oil

    Directory of Open Access Journals (Sweden)

    Đurđević Ksenija R.

    2008-01-01

    Full Text Available The consumption of electric power is constantly increasing due to industrialization and population growth. This results in much more severe operating conditions for transformers, the most important electrical devices forming integral parts of power transmission and distribution systems. The designed operating life of the majority of transformers worldwide has already expired, which puts the increase of transformer reliability and the extension of operating life in the spotlight. Transformer oil plays a very important role in transformer operation, since it provides insulation and cooling, helps extinguish sparks and dissolves gases formed during oil degradation. In addition, it dissolves moisture and gases from the cellulose insulation and from the atmosphere it is exposed to. A further, no less important, function of transformer oil is diagnostic. It has been determined that examination and inspection of insulation oil provide 70% of the information on transformer condition, which can be divided into three main groups: dielectric condition, aged transformer condition and oil degradation condition. By inspecting and examining the oil in service it is possible to determine the condition of the insulation, the oil and the solid insulation (paper), as well as irregularities in transformer operation. All of the above-mentioned reasons and facts create the ground for the subject of this research, covering two stages of the transformer oil life cycle: (1) proactive maintenance and monitoring of transformer oils in the course of use, with reference to the influence of transformer oil condition on the condition of the paper insulation, as well as of the transformer itself; (2) regeneration of transformer oils for the purpose of extending the period of use, and the potential for revitalizing paper insulation by means of oil purification. The study highlights the advantages of oil-paper insulation revitalization over oil replacement. Besides economic, there are

  7. Distributed photovoltaic grid transformers

    CERN Document Server

    Shertukde, Hemchandra Madhusudan

    2014-01-01

    The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is

  8. Chemical Transformation Simulator

    Science.gov (United States)

    The Chemical Transformation Simulator (CTS) is a web-based, high-throughput screening tool that automates the calculation and collection of physicochemical properties for an organic chemical of interest and its predicted products resulting from transformations in environmental sy...

  9. Landskabets transformation

    DEFF Research Database (Denmark)

    Munck Petersen, Rikke

    2005-01-01

    Seminar presentation by researchers. Academic seminars at KA, spring 2005. Examines the transformation of the Danish landscape, both physically and in terms of attitudes, and how the PhD project's own process handles this transformation.

  10. A first level trigger approach for the CBM experiment

    International Nuclear Information System (INIS)

    Steinle, Christian Alexander

    2012-01-01

    The very demanding constraints that the CBM experiment places on its first level trigger rule out any conventional trigger, so a fast trigger algorithm, aimed at realization in reconfigurable hardware, had to be developed to fulfil all requirements of the experiment. It is based on the generalized Hough transform, which is already used in several other experiments. The approach is a global tracking method: all particle interaction points with the detector stations are mapped, by means of a defined formula, into a parameter space corresponding to the momentum of the particle tracks. This formula is developed specifically for the CBM environment and thus forms the core of the applied three-dimensional Hough transform. Since the main focus is on achieving the required data throughput, the complex formula evaluations are outsourced to precomputed look-up tables. This in turn makes it possible to fill the tables offline with any sufficiently precise method, for example fourth-order Runge-Kutta integration, without affecting the processing speed of the Hough transform. For algorithm simulation purposes the CBMROOT framework provides the module 'track', written in C++. This module includes many analyses for determining algorithm parameters, which can to some extent be executed automatically, as well as analyses measuring the quality of the algorithm and rating each of its partial steps individually. The milestone of a customizable level-one tracking algorithm that can be used without specific expert knowledge has thus been reached. Besides this, the investigated concepts are explicitly considered in the

  11. A first level trigger approach for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Steinle, Christian Alexander

    2012-07-01

    The very demanding constraints that the CBM experiment places on its first level trigger rule out any conventional trigger, so a fast trigger algorithm, aimed at realization in reconfigurable hardware, had to be developed to fulfil all requirements of the experiment. It is based on the generalized Hough transform, which is already used in several other experiments. The approach is a global tracking method: all particle interaction points with the detector stations are mapped, by means of a defined formula, into a parameter space corresponding to the momentum of the particle tracks. This formula is developed specifically for the CBM environment and thus forms the core of the applied three-dimensional Hough transform. Since the main focus is on achieving the required data throughput, the complex formula evaluations are outsourced to precomputed look-up tables. This in turn makes it possible to fill the tables offline with any sufficiently precise method, for example fourth-order Runge-Kutta integration, without affecting the processing speed of the Hough transform. For algorithm simulation purposes the CBMROOT framework provides the module 'track', written in C++. This module includes many analyses for determining algorithm parameters, which can to some extent be executed automatically, as well as analyses measuring the quality of the algorithm and rating each of its partial steps individually. The milestone of a customizable level-one tracking algorithm that can be used without specific expert knowledge has thus been reached. Besides this, the investigated concepts are explicitly
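
    The look-up-table idea described in this record can be sketched generically. The snippet below is a minimal straight-line Hough transform, not CBM's experiment-specific momentum-space formula: the trigonometric "formula results" are precomputed into tables once, after which each hit performs only table look-ups and accumulator increments, which is what makes the approach amenable to hardware.

    import numpy as np

    def hough_lines(points, n_theta=180, n_rho=256, rho_max=100.0):
        # Precompute the "formula" as look-up tables; any offline method
        # may fill them without slowing down the voting loop itself.
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        cos_lut, sin_lut = np.cos(thetas), np.sin(thetas)
        acc = np.zeros((n_theta, n_rho), dtype=np.int32)
        for x, y in points:
            rho = x*cos_lut + y*sin_lut   # rho for every theta at once
            bins = np.round((rho + rho_max)/(2*rho_max)*(n_rho - 1)).astype(int)
            valid = (bins >= 0) & (bins < n_rho)
            acc[np.arange(n_theta)[valid], bins[valid]] += 1
        return acc, thetas

    # Ten hits on the line y = 2x + 1 plus one noise hit.
    pts = [(x, 2*x + 1) for x in range(10)] + [(3.0, 40.0)]
    acc, thetas = hough_lines(pts)
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    print("peak votes:", acc[t, r], "at theta =", round(float(thetas[t]), 3), "rad")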

  12. Circular Hough Transform and Local Circularity Measure for Weight Estimation of a Graph-Cut based Wood Stack Measurement

    DEFF Research Database (Denmark)

    Galsgaard, Bo; Lundtoft, Dennis Holm; Nikolov, Ivan Adriyanov

    2015-01-01

    One of the time-consuming tasks in the timber industry is the manual measurement of features of wood stacks. Such features include, but are not limited to, the number of logs in a stack, the distribution of their diameters, and their volumes. Computer vision techniques have recently been used for solving this real-world industrial application. Such techniques face many challenges, as the task is usually performed in outdoor, uncontrolled environments. Furthermore, the logs can vary in texture and can be occluded by different obstacles. These all make the segmentation of the wood logs... The results are finally scaled and used to acquire the necessary wood stack measurements in real-world scale (in cm). The proposed system, which works automatically, has been tested on two different datasets containing real outdoor images of logs that vary in shape and size. The experimental results show...
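
    The circular detection step named in the title can be sketched with OpenCV. This is a hedged, minimal example: the file name, radius range, and thresholds are assumptions, and the record's graph-cut segmentation, local circularity measure, and cm-scale calibration are not reproduced.

    import cv2
    import numpy as np

    # Hypothetical input photo of a wood stack; substitute a real path.
    img = cv2.imread("wood_stack.jpg", cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise SystemExit("image not found")
    img = cv2.medianBlur(img, 5)   # suppress bark texture before edge detection

    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT,
        dp=1.2,          # inverse ratio of accumulator to image resolution
        minDist=20,      # minimum distance between detected log centres
        param1=100,      # upper Canny edge threshold used internally
        param2=40,       # accumulator vote threshold; lower finds more circles
        minRadius=10, maxRadius=80,
    )
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(img, (x, y), r, 255, 2)   # draw each detected log face
        print(f"detected {circles.shape[1]} candidate log faces")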

  13. Environmental cost of distribution transformer losses

    International Nuclear Information System (INIS)

    Georgilakis, Pavlos S.

    2011-01-01

    Improvements in the energy efficiency of electrical equipment reduce greenhouse gas (GHG) emissions and contribute to the protection of the environment. Moreover, as system investment and energy costs continue to increase, electric utilities are increasingly interested in installing energy-efficient transformers in their distribution networks. This paper analyzes the impact of the environmental cost of transformer losses on the economic evaluation of distribution transformers. This environmental cost comes from the cost of buying GHG emission credits for the emissions associated with supplying the transformer losses throughout the transformer lifetime. Application results on the Hellenic power system, for 21 transformer offers under 9 different scenarios, indicate that the environmental cost of transformer losses can reach on average 34% and 8% of the transformer purchase price for high-loss and medium-loss transformers, respectively. That is why it is important to incorporate the environmental cost of transformer losses into the economic evaluation of distribution transformers.
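
    The kind of evaluation the paper performs can be made concrete with a simplified total-owning-cost calculation. The sketch below is illustrative only: the purchase price, loss figures, load factor, emission factor, and credit price are all assumed values, not the paper's data, and discounting of future costs is omitted for brevity.

    # Simplified total owning cost of a distribution transformer,
    # including an environmental term for GHG emission credits.
    HOURS = 8760           # hours per year
    YEARS = 30             # assumed transformer lifetime
    ENERGY = 0.10          # EUR per kWh of losses (assumption)
    CO2_PRICE = 25.0       # EUR per tonne of CO2 credits (assumption)
    CO2_RATE = 0.85e-3     # tonnes CO2 per kWh supplied (assumption)

    def owning_cost(price_eur, no_load_kw, load_kw, load_factor=0.5):
        # No-load losses run constantly; load losses scale with the
        # square of the loading.
        energy_kwh = (no_load_kw + load_kw*load_factor**2)*HOURS*YEARS
        environmental = energy_kwh*CO2_RATE*CO2_PRICE
        return price_eur + energy_kwh*ENERGY + environmental, environmental

    total, env = owning_cost(price_eur=20000, no_load_kw=1.1, load_kw=10.5)
    print(f"total owning cost: {total:,.0f} EUR (environmental part: {env:,.0f} EUR)")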

  14. Transformative environmental governance

    Science.gov (United States)

    Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to ...

  15. Fourier transforms principles and applications

    CERN Document Server

    Hansen, Eric W

    2014-01-01

    Fourier Transforms: Principles and Applications explains transform methods and their applications to electrical systems, from circuits and antennas to signal processors, ably guiding readers from vector space concepts through the Discrete Fourier Transform (DFT), Fourier series, and Fourier transform to other related transform methods. Featuring chapter-end summaries of key results, over two hundred examples, four hundred homework problems, and a solutions manual, this book is perfect for graduate students in signal processing and communications as well as practicing engineers.

  16. Kinetics of phase transformations

    International Nuclear Information System (INIS)

    Thompson, M.O.; Aziz, M.J.; Stephenson, G.B.

    1992-01-01

    This volume contains papers presented at the Materials Research Society symposium on Kinetics of Phase Transformations, held in Boston, Massachusetts, from November 26-29, 1990. The symposium provided a forum for research results in an exceptionally broad and interdisciplinary field. Presentations covered nearly every major class of transformations, including solid-solid, liquid-solid, transport phenomena, and kinetics modeling. Papers involving amorphous Si, a dominant topic at the symposium, are collected in the first section, followed by sections on four major areas of transformation kinetics. The symposium opened with joint sessions on ion- and electron-beam-induced transformations, held in conjunction with the Surface Chemistry and Beam-Solid Interactions symposium. Subsequent sessions focused on ordering and nonlinear diffusion kinetics, solid state reactions and amorphization, kinetics and defects of amorphous silicon, and kinetics of melting and solidification. Seven internationally recognized invited speakers reviewed many of the important problems and recent results in these areas, including defects in amorphous Si, crystal-to-glass transformations, ordering kinetics, solid-state amorphization, computer modeling, and liquid/solid transformations.

  17. Dual-Frequency Impedance Transformer Using Coupled-Line For Ultra-High Transforming Ratio

    Directory of Open Access Journals (Sweden)

    R. K. Barik

    2017-12-01

    In this paper, a new type of dual-frequency impedance transformer is presented for ultra-high transforming ratios. The proposed configuration consists of a parallel coupled line, series transmission lines, and short-ended stubs. Even- and odd-mode analysis is applied to obtain the design equations and hence provide an accurate solution. Three examples of the dual-frequency transformer, with load impedances of 500, 1000, and 1500 Ω, are designed to study the matching capability and bandwidth. To prove the frequency agility of the proposed network, three prototypes of the dual-frequency impedance transformer with transforming ratios of 10, 20, and 30 are fabricated and tested. The measured return loss is greater than 15 dB at both operating frequencies for all prototypes, and the bandwidth exceeds 60 MHz in each frequency band. The measured return loss is found to be in good agreement with the circuit and full-wave simulations.
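
    The paper's coupled-line design equations are beyond a short sketch, but standard transmission-line formulas show the baseline it improves on. The snippet below evaluates a single quarter-wave section for a 10:1 transforming ratio; the design frequency f0 and the frequency grid are assumptions. It matches perfectly only at f0 and its odd harmonics, which is why matching two arbitrary frequencies at ultra-high ratios needs a richer network like the one proposed.

    import numpy as np

    def z_in(z_line, z_load, theta):
        # Input impedance of a lossless line of electrical length theta.
        t = np.tan(theta)
        return z_line*(z_load + 1j*z_line*t)/(z_line + 1j*z_load*t)

    def return_loss_db(z, z0=50.0):
        gamma = (z - z0)/(z + z0)
        return -20*np.log10(np.abs(gamma))

    z0, zl = 50.0, 500.0         # transforming ratio r = 10
    z1 = np.sqrt(z0*zl)          # quarter-wave section impedance
    f0 = 1.0e9                   # assumed design frequency
    for f in np.linspace(0.5e9, 3.5e9, 7):
        theta = np.pi/2*f/f0     # 90 degrees of electrical length at f0
        print(f"{f/1e9:.1f} GHz: RL = {return_loss_db(z_in(z1, zl, theta)):6.1f} dB")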

  18. Dependency Parsing with Transformed Feature

    Directory of Open Access Journals (Sweden)

    Fuxiang Wu

    2017-01-01

    Dependency parsing is an important subtask of natural language processing. In this paper, we propose an embedding-feature transforming method for graph-based parsing, transform-based parsing, which directly exploits the inner similarity of the features to extract information from all feature strings, including un-indexed strings, and to alleviate the feature sparsity problem. The model maps the extracted features to transformed features by applying a feature weight matrix, which consists of similarities between the feature strings. Since the matrix is usually rank-deficient because of similar feature strings, it can weaken the strength of the constraints. However, it is proven that the duplicate transformed features do not degrade the optimization algorithm, the margin infused relaxed algorithm, and this problem can be alleviated by reducing the number of the nearest transformed features of a feature. In addition, to further improve parsing accuracy, a fusion parser is introduced to integrate transformed and original features. Our experiments verify that both the transform-based and the fusion parser improve parsing accuracy compared with the corresponding feature-based parser.
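
    The transformed-feature idea, a weight matrix of string similarities applied to the raw feature vector, can be sketched with a toy similarity. Character-trigram Jaccard below is an assumed stand-in for whatever string similarity the paper actually uses, and the feature strings are invented for illustration.

    import numpy as np

    def char_ngrams(s, n=3):
        return {s[i:i+n] for i in range(len(s) - n + 1)}

    def similarity(a, b):
        # Jaccard similarity of character trigrams (assumed stand-in).
        A, B = char_ngrams(a), char_ngrams(b)
        return len(A & B)/len(A | B) if A | B else 0.0

    feats = ["pos=NN|head=VB", "pos=NNS|head=VB", "pos=JJ|head=NN"]
    # Feature weight matrix built from pairwise string similarities.
    W = np.array([[similarity(a, b) for b in feats] for a in feats])

    x = np.array([1.0, 0.0, 0.0])   # indicator vector: only feature 0 fired
    print(np.round(W @ x, 2))       # transformed feature also activates similar strings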

  19. CPS Transformation of Beta-Redexes

    DEFF Research Database (Denmark)

    Danvy, Olivier; Nielsen, Lasse

    2005-01-01

    The extra compaction of the most compacting CPS transformation in existence, which is due to Sabry and Felleisen, is generally attributed to (1) making continuations occur first in CPS terms and (2) classifying more redexes as administrative. We show that this extra compaction is actually independent of the relative positions of values and continuations and furthermore that it is solely due to a context-sensitive transformation of beta-redexes. We stage the more compact CPS transformation into a first-order uncurrying phase and a context-insensitive CPS transformation. We also define a context-insensitive CPS transformation that provides the extra compaction. This CPS transformation operates in one pass and is dependently typed.
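
    For context, the sketch below implements the classic call-by-value CPS transform (Plotkin-style) on a toy lambda calculus; the term encoding and variable names are assumptions. Running it on a single beta-redex shows the administrative redexes that compacting transformations like the one in this record are designed to eliminate at transformation time.

    import itertools

    fresh = (f"v{i}" for i in itertools.count())   # fresh-variable supply

    def cps(term, k):
        # Terms: ('var', x) | ('lam', x, body) | ('app', f, a); k is a term.
        tag = term[0]
        if tag == 'var':
            return ('app', k, term)                # [[x]] k = k x
        if tag == 'lam':
            _, x, body = term
            c = next(fresh)                        # [[\x.M]] k = k (\x.\c.[[M]] c)
            return ('app', k, ('lam', x, ('lam', c, cps(body, ('var', c)))))
        _, f, a = term                             # [[M N]] k = [[M]](\m.[[N]](\n. m n k))
        m, n = next(fresh), next(fresh)
        return cps(f, ('lam', m, cps(a, ('lam', n,
               ('app', ('app', ('var', m), ('var', n)), k)))))

    def show(t):
        if t[0] == 'var':
            return t[1]
        if t[0] == 'lam':
            return f"(\\{t[1]}. {show(t[2])})"
        return f"({show(t[1])} {show(t[2])})"

    # CPS-transforming one beta-redex (\x. x) y floods the output with
    # administrative redexes; compacting CPS transformations avoid them.
    redex = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
    print(show(cps(redex, ('var', 'halt'))))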