WorldWideScience

Sample records for texture compression self-adaptive Human Visual System vector quantizer GPU

  1. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results show the MLVQ to be significantly better than JPEG 2000 and recent VQ techniques for various test images.
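The Golomb coding of codebook indices that this record mentions can be illustrated with a small sketch. This is generic Golomb coding (unary quotient plus truncated-binary remainder), not necessarily the exact variant used in the MLVQ paper:

```python
def golomb_encode(n, m):
    """Golomb-encode non-negative integer n with parameter m.

    Emits a unary quotient terminated by '0', followed by a
    truncated-binary code for the remainder. For m a power of two
    this reduces to a Rice code with a plain binary remainder.
    """
    q, r = divmod(n, m)
    bits = "1" * q + "0"              # unary quotient, '0'-terminated
    b = m.bit_length()
    if m & (m - 1) == 0:              # power of two: fixed (b-1)-bit remainder
        bits += format(r, "0{}b".format(b - 1)) if b > 1 else ""
    else:                             # truncated binary for r in [0, m)
        cutoff = (1 << b) - m
        if r < cutoff:
            bits += format(r, "0{}b".format(b - 1))
        else:
            bits += format(r + cutoff, "0{}b".format(b))
    return bits
```

Small indices, which dominate when nearby codewords are reused, map to short codewords, which is what makes the scheme attractive for index streams.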

  2. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM

    2007-01-01

Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results show the MLVQ to be significantly better than JPEG 2000 and recent VQ techniques for various test images.

  3. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

Mostly, transforms are used for speech data compression, and these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
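The LBG algorithm named in this abstract can be sketched in a few lines: start from the global centroid, repeatedly split each codeword by a small perturbation, and run Lloyd iterations (nearest-codeword assignment, then centroid update). A minimal NumPy version, with speech framing and the KPE/FCG variants omitted:

```python
import numpy as np

def lbg_codebook(training, size, eps=1e-6, iters=50):
    """LBG / generalized Lloyd codebook design by binary splitting.

    training: (N, dim) array of training vectors.
    size: target codebook size (reached by repeated doubling).
    """
    codebook = training.mean(axis=0, keepdims=True)   # start at global centroid
    while len(codebook) < size:
        # split every codeword into a (1+eps)/(1-eps) pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # nearest-codeword assignment (squared Euclidean distance)
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            # centroid update; empty cells keep their old codeword
            for k in range(len(codebook)):
                pts = training[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(0)
    return codebook
```

On well-separated training data the two-codeword result lands near the cluster centroids, which is the behavior the Lloyd iterations guarantee locally.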

  4. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality.
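The quadtree partitioning step can be sketched as below. Here block variance stands in for the paper's local fractal dimension as the complexity measure — an assumption for illustration, not the authors' actual LFD computation:

```python
import numpy as np

def quadtree_blocks(img, min_size=2, max_size=8, thresh=25.0):
    """Variable-block-size quadtree partition of a square array.

    Assumes a square image whose side is a power of two. A block is
    split into four quadrants when it is larger than max_size, or when
    its variance (complexity proxy) exceeds thresh; recursion stops at
    min_size. Returns the leaf blocks as (y, x, size) tuples.
    """
    leaves = []
    def split(y, x, s):
        block = img[y:y + s, x:x + s]
        if s > min_size and (s > max_size or block.var() > thresh):
            h = s // 2
            for dy in (0, h):
                for dx in (0, h):
                    split(y + dy, x + dx, h)   # recurse into quadrants
        else:
            leaves.append((y, x, s))
    split(0, 0, img.shape[0])
    return leaves
```

A flat region stays as one large block, while a busy region (for example a checkerboard) is driven down to the minimum block size, so each leaf can be sent to a codebook trained for its block size.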

  5. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality.

  6. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.

  7. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design codebooks. During the encoding process, the correlation of the addresses is considered and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at about 1/2 to 1/3 of the bit rate of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
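The encode/decode cycle this record describes — nearest-codeword search on the encoder side, pure table lookup on the decoder side — is the core of every VQ scheme and fits in a few lines of NumPy (codebook training and block extraction are omitted here):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each input vector to the index of its nearest codeword (MSE).

    blocks: (N, dim) input vectors; codebook: (K, dim) codewords.
    Only the indices are transmitted, which is where compression comes from.
    """
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def vq_decode(indices, codebook):
    """Reconstruction is a table lookup: index -> representative vector."""
    return codebook[indices]
```

The decoder does no arithmetic at all, which is why VQ reconstruction is fast even when encoding is expensive.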

  8. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  9. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

Today, in the domain of digitized satellite imagery, the need for high-dimension images is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme, based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropy Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 systems available. Because, for technological reasons, real-time performance is not reached in every case (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropy coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr

  10. Combining nonlinear multiresolution system and vector quantization for still image compression

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Y.

    1993-12-17

It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
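The median-filter pyramid idea can be sketched in 1-D: use a 3-tap median as the nonlinear low-pass filter, decimate, and keep the residual against a crude upsampling as the detail signal. This is an illustrative, reconstruction-exact pyramid, not the paper's 2-D system, and the PCVQ coding of the details is omitted:

```python
import numpy as np

def median_pyramid(signal, levels):
    """Nonlinear (median-filter) pyramid decomposition of a 1-D signal.

    Each level: 3-tap median smoothing, decimation by 2, and a detail
    signal equal to the residual against nearest-neighbor upsampling.
    Because the upsampling is deterministic, reconstruction is exact.
    """
    details, cur = [], np.asarray(signal, float)
    for _ in range(levels):
        padded = np.pad(cur, 1, mode="edge")
        smooth = np.median(np.stack([padded[:-2], padded[1:-1], padded[2:]]), axis=0)
        coarse = smooth[::2]                    # decimate by 2
        up = np.repeat(coarse, 2)[: len(cur)]   # crude upsampling for the sketch
        details.append(cur - up)                # detail residual, small in flat regions
        cur = coarse
    return cur, details

def reconstruct(coarse, details):
    """Invert median_pyramid exactly: upsample and add back each detail."""
    cur = coarse
    for d in reversed(details):
        cur = np.repeat(cur, 2)[: len(d)] + d
    return cur
```

Unlike a linear low-pass filter, the median step keeps step edges sharp in the coarse signal, so the details concentrate near edges — the property the abstract relies on.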

  11. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...

  12. Subband directional vector quantization in radiological image compression

    Science.gov (United States)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  13. Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    Directory of Open Access Journals (Sweden)

    Z. Peric

    2012-04-01

Full Text Available In this paper, a high-dynamic-range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances and exceeds the quality obtained by scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations for the realization of high-quality signal compression.
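The semilogarithmic A-law compressor characteristic used as the radial characteristic above is the standard one: linear below the knee at |x| = 1/A, logarithmic above it. A direct transcription for normalized input |x| <= 1 (A = 87.6 is the usual telephony value, used here only as a default):

```python
import math

def a_law_compress(x, A=87.6):
    """Standard A-law compressor characteristic on normalized input |x| <= 1.

    Linear segment for |x| < 1/A, logarithmic segment above; the two
    branches meet continuously at |x| = 1/A with value 1/(1 + ln A).
    """
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)
```

Small amplitudes are expanded and large ones compressed, which is what keeps the SQNR nearly constant over a wide range of input variances.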

  14. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    Science.gov (United States)

    Jaggi, S.

    1993-01-01

A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
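The "difference-mapped" first stage of such a lossless back end can be illustrated as follows: first-order differencing of the index stream, then a zigzag map of the signed residuals onto non-negative integers so that a subsequent Huffman or shift coder sees mostly small symbols. This is a generic sketch of that idea, not the study's exact scheme:

```python
def difference_map(samples):
    """First-order differencing plus zigzag mapping to non-negative ints.

    The zigzag map interleaves signs: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4,
    so small-magnitude differences get small symbol values, which is
    what a variable-length coder downstream exploits.
    """
    out, prev = [], 0
    for s in samples:
        d = s - prev
        out.append(2 * d if d >= 0 else -2 * d - 1)
        prev = s
    return out
```

Slowly varying index streams, common in smooth image regions, collapse to runs of near-zero symbols after this mapping.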

  15. Adaptive Matrices for Color Texture Classification

    NARCIS (Netherlands)

    Bunte, Kerstin; Giotis, Ioannis; Petkov, Nicolai; Biehl, Michael; Real, P; DiazPernil, D; MolinaAbril, H; Berciano, A; Kropatsch, W

    2011-01-01

    In this paper we introduce an integrative approach towards color texture classification learned by a supervised framework. Our approach is based on the Generalized Learning Vector Quantization (GLVQ), extended by an adaptive distance measure which is defined in the Fourier domain and 2D Gabor

  16. Visual data mining for quantized spatial data

    Science.gov (United States)

    Braverman, Amy; Kahn, Brian

    2004-01-01

In previous papers we've shown how a well-known data compression algorithm called Entropy-constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.

  17. Bidirectional Texture Function Compression Based on Multi-Level Vector Quantization

    Czech Academy of Sciences Publication Activity Database

    Havran, V.; Filip, Jiří; Myszkowski, K.

    2010-01-01

    Roč. 29, č. 1 (2010), s. 175-190 ISSN 0167-7055 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:EC Marie Curie ERG(CZ) 239294 Institutional research plan: CEZ:AV0Z10750506 Keywords : bidirectional texture function * BRDF * compression * SSIM Subject RIV: BD - Theory of Information Impact factor: 1.455, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/filip-0338804.pdf

  18. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  19. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA(Trademark). The GPU implementation on a NVIDIA(Trademark) GeForce(Trademark) GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single-core Intel(Trademark) Xeon(Trademark) processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.
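The core structure of any adaptive predictive lossless coder of this family is: predict each sample from recent history, emit the residual, and update the predictor weights from the error. The sketch below uses a plain LMS update and is NOT the FL algorithm itself (whose predictor and update rule are specified in the CCSDS standard); it only illustrates that structure:

```python
def lms_predict(samples, order=3, mu=0.01):
    """Generic adaptive linear predictor (LMS) for residual coding.

    For each sample: predict from the last `order` samples, record the
    prediction error (the residual that would be entropy-coded), then
    nudge the weights in the direction that reduces the error.
    """
    w = [0.0] * order          # adaptive filter weights
    hist = [0.0] * order       # most-recent-first history
    residuals = []
    for s in samples:
        pred = sum(wi * hi for wi, hi in zip(w, hist))
        e = s - pred
        residuals.append(e)
        w = [wi + mu * e * hi for wi, hi in zip(w, hist)]  # LMS update
        hist = [s] + hist[:-1]
    return residuals
```

On predictable input the residuals shrink toward zero as the filter adapts, which is what makes the residual stream cheap to entropy-code.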

  20. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
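The linear-prediction-plus-residual-coding step for gridded heights can be sketched row by row, with a simple left/above average predictor standing in for the codec's actual predictor (an assumption for illustration). The decoder applies the same predictor, so the round trip is lossless:

```python
def predict_residuals(row_above, row):
    """Predict each integer height from its left and above neighbors
    (average predictor) and return the residuals to be entropy-coded.
    row_above is [] for the first row of a patch."""
    res = []
    for i, h in enumerate(row):
        left = row[i - 1] if i > 0 else 0
        above = row_above[i] if row_above else 0
        res.append(h - (left + above) // 2)
    return res

def decode_row(row_above, res):
    """Invert predict_residuals exactly by re-running the same predictor."""
    row = []
    for i, r in enumerate(res):
        left = row[i - 1] if i > 0 else 0
        above = row_above[i] if row_above else 0
        row.append(r + (left + above) // 2)
    return row
```

Because prediction uses only already-decoded neighbors, each patch can be decompressed independently from its own bitstream, which is the property the GPU decoder above exploits.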

  1. Adaptive Matrices and Filters for Color Texture Classification

    NARCIS (Netherlands)

    Giotis, Ioannis; Bunte, Kerstin; Petkov, Nicolai; Biehl, Michael

    In this paper we introduce an integrative approach towards color texture classification and recognition using a supervised learning framework. Our approach is based on Generalized Learning Vector Quantization (GLVQ), extended by an adaptive distance measure, which is defined in the Fourier domain,

  2. Experimental Investigation of Compression with Fixed-length Code Quantization for Convergent Access-Mobile Networks

    OpenAIRE

    L. Anet Neto; P. Chanclou; Z. Tayq; B. C. Zabada; F. Saliou; G. Simon

    2016-01-01

    We experimentally assess compression with scalar and vector quantization for fixed-mobile convergent networks. We show that four-dimensional vector quantization allows 73% compression compliant with 3GPP EVM recommendations for transmissions over 25 km SSMF with 1:16 split ratio.

  3. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  4. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  5. Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography.

    Science.gov (United States)

    Wenger, S; Ament, M; Guthe, S; Lorenz, D; Tillmann, A; Weiskopf, D; Magnor, M

    2012-12-01

    The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.

  6. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    Science.gov (United States)

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
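One of the "efficient nearest neighbor search techniques" commonly used to accelerate VQ encoding is the partial distance search: while accumulating a codeword's squared distance term by term, abandon it as soon as the partial sum already exceeds the best distance found so far. A minimal version (this names a standard technique; the abstract does not specify the paper's exact choice of search):

```python
def nearest_codeword(x, codebook):
    """Partial-distance search for the nearest codeword to vector x.

    Accumulates the squared Euclidean distance one component at a time
    and abandons a codeword early once the partial sum can no longer
    beat the best full distance seen so far.
    """
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:
                break          # early abandon: cannot beat current best
        else:
            best_i, best_d = i, d
    return best_i, best_d
```

The result is identical to a full search; only the amount of arithmetic changes, which is why such accelerations do not degrade reconstructed image quality.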

  7. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  8. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distances. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
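
The definition underlying the filter is compact: the vector median of a window is the color vector minimizing the summed distance to all the others. A minimal CPU reference sketch follows; the GPU version parallelizes this per pixel, and the window contents here are illustrative.

```python
# Vector median of a window of color vectors: pick the vector whose sum of
# Euclidean distances to all other vectors in the window is minimal.

def vector_median(window):
    """window: list of (r, g, b) tuples; returns the vector median."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(window, key=lambda v: sum(dist(v, w) for w in window))

# one impulse-noise pixel among three similar ones
window = [(10, 10, 10), (12, 11, 10), (11, 10, 12), (250, 0, 0)]
print(vector_median(window))  # the outlier is rejected
```

Unlike a per-channel median, the output is always one of the original pixels, which is why the filter preserves color correlation.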

  9. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  10. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    Science.gov (United States)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus, single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  11. Visibility of wavelet quantization noise

    Science.gov (United States)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r * 2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
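
As a small worked example of the frequency relation quoted above, f = r * 2^(-lambda): assuming a hypothetical display resolution of 32 pixels/degree, each additional DWT level halves the wavelet's spatial frequency.

```python
# Wavelet spatial frequency as a function of display resolution r
# (pixels/degree) and DWT level lam, per the relation in the abstract.
# The example resolution of 32 pixels/degree is an assumption, not a
# value from the paper.

def wavelet_spatial_frequency(r, lam):
    return r * 2.0 ** (-lam)

for lam in range(1, 6):
    print(lam, wavelet_spatial_frequency(32, lam))  # 16, 8, 4, 2, 1 cy/deg
```

Since detection thresholds rise rapidly with spatial frequency, coarser quantization can be tolerated at the finest (highest-frequency) levels.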

  12. Geometric quantization of vector bundles and the correspondence with deformation quantization

    International Nuclear Information System (INIS)

    Hawkins, E.

    2000-01-01

    I repeat my definition for quantization of a vector bundle. For the cases of the Toeplitz and geometric quantizations of a compact Kaehler manifold, I give a construction for quantizing any smooth vector bundle, which depends functorially on a choice of connection on the bundle. Using this, the classification of formal deformation quantizations, and the formal, algebraic index theorem, I give a simple proof as to which formal deformation quantization (modulo isomorphism) is derived from a given geometric quantization. (orig.)

  13. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  14. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
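
The adaptive uniform scalar quantization at the heart of both WSQ records reduces, per subband, to dividing by a step size and rounding. A toy sketch follows; the step size and coefficient values are placeholders, not values from the FBI standard.

```python
# Uniform scalar quantization of wavelet subband coefficients:
# forward step maps each coefficient to an integer index, the inverse
# step reconstructs an approximation on the quantization lattice.

def quantize(coeffs, step):
    return [int(round(c / step)) for c in coeffs]

def dequantize(indices, step):
    return [i * step for i in indices]

band = [0.4, -3.7, 12.2, 0.1, -0.6]
q = quantize(band, step=2.0)
print(q, dequantize(q, step=2.0))
```

Small coefficients collapse to index 0, which is where most of the compression in a wavelet coder comes from; the per-subband step sizes are what the standard adapts.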

  15. Compression-Based Tools for Navigation with an Image Database

    Directory of Open Access Journals (Sweden)

    Giovanni Motta

    2012-01-01

We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. These second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.

  16. Fast vector quantization using a Bat algorithm for image compression

    Directory of Open Access Journals (Sweden)

    Chiranjeevi Karri

    2016-06-01

Linde–Buzo–Gray (LBG), a traditional method of vector quantization (VQ), generates a locally optimal codebook, which results in a lower PSNR value. The performance of vector quantization depends on an appropriate codebook, so researchers have proposed optimization techniques for global codebook generation. Particle swarm optimization (PSO) and the Firefly algorithm (FA) generate efficient codebooks, but suffer from instability in convergence when particle velocity is high and from the non-availability of brighter fireflies in the search space, respectively. In this paper, we propose a new algorithm called BA-LBG which applies the Bat Algorithm to the initial solution of LBG. It produces an efficient codebook with less computational time and yields a very good PSNR due to its automatic zooming feature using the adjustable pulse emission rate and loudness of bats. From the results, we observed that BA-LBG has a higher PSNR compared to LBG, PSO-LBG, Quantum PSO-LBG, HBMO-LBG and FA-LBG, and its average convergence speed is 1.841 times faster than HBMO-LBG and FA-LBG, with no significant difference from PSO.
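
For reference, the classical LBG baseline that BA-LBG starts from alternates nearest-neighbor partitioning with centroid updates. A dependency-free sketch follows; the training data and initial codebook are made up for illustration.

```python
# Bare-bones LBG / K-means codebook design: repeatedly (1) assign each
# training vector to its nearest codeword and (2) move each codeword to
# the centroid of its assigned vectors. This converges to a locally
# optimal codebook, which is exactly the limitation the paper addresses.

def lbg(data, codebook, iters=10):
    for _ in range(iters):
        cells = {i: [] for i in range(len(codebook))}
        for x in data:                       # partition step
            i = min(range(len(codebook)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(x, codebook[k])))
            cells[i].append(x)
        for i, cell in cells.items():        # centroid update step
            if cell:
                codebook[i] = tuple(sum(c) / len(cell) for c in zip(*cell))
    return codebook

data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(lbg(data, [(0.0, 0.0), (5.0, 5.0)]))  # two cluster centroids
```

Metaheuristics such as the Bat Algorithm replace the fixed initial codebook with a population-based global search, which is how they escape LBG's local optima.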

  17. Real-Time GPU Implementation of Transverse Oscillation Vector Velocity Flow Imaging

    DEFF Research Database (Denmark)

    Bradway, David; Pihl, Michael Johannes; Krebs, Andreas

    2014-01-01

Rapid estimation of blood velocity and visualization of complex flow patterns are important for clinical use of diagnostic ultrasound. This paper presents real-time processing for two-dimensional (2-D) vector flow imaging which utilizes an off-the-shelf graphics processing unit (GPU). In this work...... vector flow acquisition takes 2.3 milliseconds on an Advanced Micro Devices Radeon HD 7850 GPU card. The detected velocities are accurate to within the precision limit of the output format of the display routine. Because this tool was developed as a module external to the scanner’s built...

  18. [Visual Texture Agnosia in Humans].

    Science.gov (United States)

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  19. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is implemented not by lifting the coefficients in the wavelet domain, but through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and offers VIsual Progressive (VIP) coding.

  20. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    Science.gov (United States)

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images, and also heighten medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new area that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.

  1. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    Science.gov (United States)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  2. Improved stability and performance from sigma-delta modulators using 1-bit vector quantization

    DEFF Research Database (Denmark)

    Risbo, Lars

    1993-01-01

A novel class of sigma-delta modulators is presented. The usual scalar 1-b quantizer in a sigma-delta modulator is replaced by a 1-b vector quantizer with an N-dimensional input state-vector from the linear feedback filter. Generally, the vector quantizer changes the nonlinear dynamics of the modulator, and a proper choice of vector quantizer can improve both system stability and coding performance. It is shown how to construct the vector quantizer in order to limit the excursions in state-space. The proposed method is demonstrated graphically for a simple second-order modulator...

  3. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
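
The generic model can be illustrated in a few lines: split the signal into blocks, transform each block, and quantize the transform coefficients. A trivial 2-point sum/difference transform stands in for the DCT to keep the sketch self-contained; the step size and data are illustrative.

```python
# Generic block-transform quantization: per block, transform then divide
# by a step size and round. The toy 2-point transform concentrates the
# block's energy in the "sum" coefficient, so the "difference" coefficient
# usually quantizes to 0 -- the entropy reduction the abstract describes.

def transform_quantize(signal, step=4.0):
    out = []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        s, d = a + b, a - b                  # toy 2-point transform
        out.append((int(round(s / step)), int(round(d / step))))
    return out

print(transform_quantize([10, 13, 80, 83, 5, 4]))
```

JPEG does the same thing with an 8x8 DCT and a perceptually designed quantization matrix instead of a single step size.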

  4. Accelerating three-dimensional FDTD calculations on GPU clusters for electromagnetic field simulation.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2012-01-01

Electromagnetic simulation with anatomically realistic computational human models using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapt three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than that on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform large-scale FDTD calculations because we were able to use GPU memory of over 100 GB.

  5. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

Combining segmentation with a lossy compression scheme is an efficient way to achieve both a high compression ratio and information preservation. Microcalcification in a mammogram is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.

  6. Vector-Quantization using Information Theoretic Concepts

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

    2005-01-01

The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self Organizing Map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the before mentioned can be derived. Unlike SOM and LBG this algorithm has a clear physical interpretation and relies on minimization of a well defined cost-function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact...

  7. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    Science.gov (United States)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information is becoming important to understand the earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks; we need to treat many parameters (e.g., positional information, origin time, magnitude, etc.) to efficiently display the seismic information. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various study fields. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing gives us a high-performance computing environment at a lower cost than before. Moreover, use of the GPU has an advantage for visualization of processed data, because the GPU is originally an architecture for graphics processing and the processed data is always stored in the video memory. Therefore, we can directly write drawing information to the VRAM on the video card by combining CUDA and a graphics API. In this study, we employ CUDA and OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, which enables high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system of hypocenter data.

  8. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    Science.gov (United States)

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  9. A visual perceptual descriptor with depth feature for image retrieval

    Science.gov (United States)

    Wang, Tianyang; Qin, Zhengrui

    2017-07-01

This paper proposes a visual perceptual descriptor (VPD) and a new approach to extract a perceptual depth feature for 2D image retrieval. The VPD mimics the human visual system, which can easily distinguish regions that have different textures, whereas for regions with similar textures, color features are needed for further differentiation. We apply the VPD to the gradient direction map of an image and capture texture-similar regions to generate a VPD map. We then impose the VPD map on a quantized color map and extract color features only from the overlapped regions. To reflect the nature of perceptual distance in a single 2D image, we propose and extract the perceptual depth feature by computing the nuclear norm of the sparse depth map of an image. The extracted color features and the perceptual depth feature are both incorporated into a feature vector; we utilize this vector to represent an image and measure similarity. We observe that the proposed VPD + depth method achieves a promising result, and extensive experiments prove that it outperforms other typical methods on 2D image retrieval.

  10. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  11. Reducing and filtering point clouds with enhanced vector quantization.

    Science.gov (United States)

    Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto

    2007-01-01

    Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface, in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to the classical soft VQ approaches. These are based on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from data distribution on the basis of the Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts from point clouds publicly available on the web are reported and discussed, in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
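
A much-simplified flavor of the hyperbox partitioning idea is plain voxel-grid reduction: bucket the points by voxel and keep one centroid per voxel, so the cost is linear in the number of points. EVQ itself is considerably more elaborate; the sketch below only conveys how spatial partitioning localizes the computation.

```python
# Voxel-grid point cloud reduction: partition space into cubic cells and
# replace the points of each cell by their centroid. One pass over the
# data, hence linear in the number of points N.

def voxel_reduce(points, voxel=1.0):
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # integer cell coordinates
        cells.setdefault(key, []).append(p)
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in cells.values()]

cloud = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.2), (5.1, 5.2, 5.0)]
print(voxel_reduce(cloud))  # two representative points remain
```

In EVQ the cell side is not a free parameter but is derived from the data distribution via Zador's criterion, leaving only the compression rate for the user to specify.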

  12. Optimal context quantization in lossless compression of image data sequences

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl

    2004-01-01

In image compression context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates to model high-order statistical dependency of the pixels and the problem of context dilution due to insufficient sample statistics of a given input image. We consider the problem of finding the optimal quantizer Q that quantizes the K-dimensional causal context C_t = (X_{t-t1}, X_{t-t2}, ..., X_{t-tK}) of a source symbol X_t into one of a set of conditioning states. The optimality of context quantization is defined to be the minimum static or minimum adaptive code length for a given data set. For a binary source alphabet an optimal context quantizer can be computed exactly by a fast dynamic programming algorithm. Faster approximate solutions are also proposed. In the case of an m-ary source alphabet...
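
The "minimum adaptive code length" criterion can be made concrete for a binary source: code each symbol adaptively within its conditioning state and sum the ideal code lengths. Below is a sketch using the Krichevsky-Trofimov estimator; the state assignments in the example are hypothetical outputs of a context quantizer Q, not from the paper.

```python
# Adaptive code length of a binary sequence given per-symbol conditioning
# states: each state keeps its own 0/1 counts, and every symbol costs
# -log2 of its KT-estimated probability. A good context quantizer groups
# contexts so that this total is minimized.

from math import log2

def adaptive_code_length(symbols, states):
    """symbols: list of 0/1; states: conditioning state per symbol."""
    counts = {}
    total = 0.0
    for s, q in zip(symbols, states):
        n0, n1 = counts.get(q, (0, 0))
        p = ([n0, n1][s] + 0.5) / (n0 + n1 + 1)   # Krichevsky-Trofimov
        total -= log2(p)
        counts[q] = (n0 + (s == 0), n1 + (s == 1))
    return total

bits = [0, 0, 0, 1, 1, 1]
print(adaptive_code_length(bits, [0] * 6))             # one state for all
print(adaptive_code_length(bits, [0, 0, 0, 1, 1, 1]))  # two informative states
```

Splitting the contexts into two states that actually separate the 0s from the 1s yields a shorter adaptive code length, which is exactly the quantity the optimal quantizer Q minimizes.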

  13. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away using feature selection is better than compressing the noisy and useful dimensions together using feature compression methods. To choose features, we propose an efficient importance sorting algorithm that covers both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
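The select-then-binarize pipeline can be sketched as follows. The importance score used here (class-mean separation over pooled standard deviation) is an illustrative stand-in for the paper's sorting criterion, not the authors' exact algorithm:

```python
import numpy as np

def select_and_binarize(X, y, k):
    """Keep the k most discriminative dimensions, then 1-bit quantize by sign."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    score = np.abs(mu0 - mu1) / (X.std(axis=0) + 1e-12)  # crude supervised importance
    keep = np.argsort(score)[::-1][:k]                   # top-k dimensions survive
    return (X[:, keep] > 0).astype(np.uint8), keep       # 1 bit per kept dimension
```

Noisy dimensions simply never make the top-k cut, rather than being folded into compressed codes.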

  14. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.
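The KLT stage described above amounts to an eigendecomposition of the interchannel covariance of the spectral vectors; a minimal numpy sketch under that assumption (channel count and data are illustrative, and the wavelet and quantization stages are omitted):

```python
import numpy as np

def klt_decorrelate(pixels):
    """KLT: project C-channel spectral vectors onto the eigenvectors of their
    covariance matrix, so the output channels are empirically uncorrelated."""
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)       # C x C interchannel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # strongest component first
    basis = eigvecs[:, order]
    return centered @ basis, basis, mean
```

After this rotation, most of the spectral energy concentrates in the first few components, which is what makes the subsequent scalar quantization efficient.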

  15. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications

  16. Phase transitions in vector quantization and neural gas

    NARCIS (Netherlands)

    Witoelar, Aree; Biehl, Michael

    The statistical physics of off-learning is applied to winner-takes-all (WTA) and rank-based vector quantization (VQ), including the neural gas (NG). The analysis is based on the limit of high training temperatures and the annealed approximation. The typical learning behavior is evaluated for systems

  17. A GPU-based mipmapping method for water surface visualization

    Science.gov (United States)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

    Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide expanse of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts their resolution with respect to the distance from the viewpoint and reduces the computing cost. The lighting effect is computed based on shadow mapping, Snell's law, and the Fresnel term. The render pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes water surfaces with good image quality at real-time frame rates.
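Mipmapping trades texture resolution against viewing distance by precomputing a pyramid of progressively downsampled textures. A minimal CPU-side sketch of mip-chain construction by repeated 2x2 box filtering (square power-of-two textures assumed; this is generic mipmapping, not the authors' GPU pipeline):

```python
import numpy as np

def build_mip_chain(texture):
    """Build a mipmap pyramid: each level averages 2x2 blocks of the previous one."""
    levels = [texture.astype(float)]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        # 2x2 box filter: average the four texels that collapse into one.
        levels.append((t[0::2, 0::2] + t[1::2, 0::2]
                       + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
    return levels
```

At render time, a distant surface fragment samples a coarse level instead of filtering many fine texels, which is where the cost saving comes from.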

  18. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize, and hence compress, the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
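The online vector quantization rule at the heart of QKLMS — merge a new input into the nearest existing center when it lies within a quantization radius, otherwise allocate a new center — can be sketched as follows. A Gaussian kernel is assumed, and all parameter values are illustrative:

```python
import numpy as np

def qklms(xs, ys, eta=0.5, sigma=1.0, eps_q=0.3):
    """Quantized kernel LMS: the quantization radius eps_q curbs network growth."""
    centers, alphas, preds = [], [], []
    for x, y in zip(xs, ys):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if centers:
            d2 = ((np.stack(centers) - x) ** 2).sum(axis=1)
            f = float(np.exp(-d2 / (2 * sigma ** 2)) @ np.array(alphas))
        else:
            d2, f = None, 0.0
        e = y - f
        preds.append(f)
        if d2 is not None and d2.min() <= eps_q ** 2:
            alphas[int(d2.argmin())] += eta * e   # "redundant" input: update nearest center
        else:
            centers.append(x)                     # novel input: grow the network
            alphas.append(eta * e)
    return centers, alphas, preds
```

Unlike sparsification, no sample is discarded: inputs that fall inside an existing quantization cell still contribute an LMS-style update to that cell's coefficient.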

  19. Vector potential quantization and the photon wave-particle representation

    International Nuclear Information System (INIS)

    Meis, C; Dahoo, P R

    2016-01-01

    The quantization procedure of the vector potential is enhanced at a single photon state revealing the possibility for a simultaneous representation of the wave-particle nature of the photon. Its relationship to the quantum vacuum results naturally. A vector potential amplitude operator is defined showing the parallelism with the Hamiltonian of a massless particle. It is further shown that the quantized vector potential satisfies both the wave propagation equation and a linear time-dependent Schrödinger-like equation. (paper)

  20. Binary Biometric Representation through Pairwise Adaptive Phase Quantization

    NARCIS (Netherlands)

    Chen, C.; Veldhuis, Raymond N.J.

    Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Quantization and coding is the straightforward way to extract binary representations

  1. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  2. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive three-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the seven spatio-temporally split subbands is partitioned using a quadtree technique and coded with lattice vector quantization (LVQ). Simulation results show a 0.1-4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 kbps). (author). 13 refs., 13 figs., 4 tabs.

  3. Modeling Human Aesthetic Perception of Visual Textures

    NARCIS (Netherlands)

    Thumfart, Stefan; Jacobs, Richard H. A. H.; Lughofer, Edwin; Eitzinger, Christian; Cornelissen, Frans W.; Groissboeck, Werner; Richter, Roland

    Texture is extensively used in areas such as product design and architecture to convey specific aesthetic information. Using the results of a psychological experiment, we model the relationship between computational texture features and aesthetic properties of visual textures. Contrary to previous

  4. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

    The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway, but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, which likely exist across the kingdom Animalia. Thus, establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  5. EP-based wavelet coefficient quantization for linear distortion ECG data compression.

    Science.gov (United States)

    Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan

    2014-07-01

    Reconstruction quality maintenance is essential in ECG data compression, because of the intended diagnostic use. Quantization schemes with nonlinear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of the multi-resolution levels. The stationary property implies that the multi-level quantization scales can be controlled with a single variable. This hypothesis leads to a simple design for linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied to alleviate the data-dependency effect. Using the ECG signals stored in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality control efficiency, and data-dependency influence. The experimental results show that the new EP-based quantization scheme obtains high compression performance and maintains linear distortion behavior efficiently. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    Directory of Open Access Journals (Sweden)

    B Vinoth Kumar

    2017-07-01

    The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm, and its design can therefore be viewed as an optimization problem. In the literature, Classical Differential Evolution (CDE) has been found to be a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed, and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, which is confirmed using a statistical hypothesis test (t-test).
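The multiple-trial-vector variant studied above can be sketched on a toy objective: for each target vector, several DE/rand/1/bin trials are generated and the best one competes in the usual greedy selection. The population size, F, CR, and the sphere objective are illustrative stand-ins for the paper's JPEG quantization-table fitness:

```python
import numpy as np

def de_multi_trial(f, dim, n_pop=20, n_trials=4, F=0.5, CR=0.9, iters=100, seed=1):
    """DE/rand/1/bin with n_trials trial vectors per target per generation."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n_pop, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(n_pop):
            others = [j for j in range(n_pop) if j != i]
            best_trial, best_val = None, np.inf
            for _ in range(n_trials):                 # multiple trial vectors
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = a + F * (b - c)
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True       # force at least one mutant gene
                trial = np.where(cross, mutant, pop[i])
                v = f(trial)
                if v < best_val:
                    best_trial, best_val = trial, v
            if best_val < fit[i]:                     # greedy selection
                pop[i], fit[i] = best_trial, best_val
    return pop[fit.argmin()], float(fit.min())
```

The extra trials cost more function evaluations per generation but, as the paper reports, buy faster convergence per generation.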

  7. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    Science.gov (United States)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  8. Parallel-Sequential Texture Analysis

    NARCIS (Netherlands)

    van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra

    2005-01-01

    Color induced texture analysis is explored, using two texture analysis techniques: the co-occurrence matrix and the color correlogram as well as color histograms. Several quantization schemes for six color spaces and the human-based 11 color quantization scheme have been applied. The VisTex texture

  9. A logarithmic quantization index modulation for perceptually better data hiding.

    Science.gov (United States)

    Kalantari, Nima Khademi; Ahadi, Seyed Mohammad

    2010-06-01

    In this paper, a novel arrangement of quantizer levels in the Quantization Index Modulation (QIM) method is proposed. Owing to the perceptual advantages of logarithmic quantization, and in order to solve the problems of a previous logarithmic quantization-based method, we use the compression function of the mu-law standard for quantization. In this regard, the host signal is first transformed into the logarithmic domain using the mu-law compression function. Then, the transformed data is quantized uniformly and the result is transformed back to the original domain using the inverse function. The scalar method is then extended to vector quantization: the magnitude of each host vector is quantized on the surface of hyperspheres with logarithmically spaced radii. The optimum parameter mu for both the scalar and vector cases is calculated according to the host signal distribution. Moreover, the inclusion of a secret key in the proposed method, similar to dither modulation in QIM, is introduced. The performance of the proposed method in both cases is analyzed, and the analytical derivations are verified through extensive simulations on artificial signals. The method is also simulated on real images, and its performance is compared with previous scalar and vector quantization-based methods. Results show that this method embeds a stronger watermark than conventional QIM and, as a result, performs better, while it does not suffer from the drawbacks of the previously proposed logarithmic quantization algorithm.
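The scalar version of this idea can be sketched in three steps: compress with the mu-law function, quantize uniformly on a lattice whose offset encodes the hidden bit (standard QIM), and expand back. The mu, step size, and sample values below are illustrative, not the paper's optimized parameters:

```python
import numpy as np

def mu_compress(x, mu=255.0, xmax=1.0):
    """mu-law compression: map the host sample into the logarithmic domain."""
    return xmax * np.sign(x) * np.log1p(mu * np.abs(x) / xmax) / np.log1p(mu)

def mu_expand(y, mu=255.0, xmax=1.0):
    """Inverse of mu_compress."""
    return xmax * np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu) / xmax) / mu

def qim_embed(x, bit, delta=0.05):
    """Embed one bit: quantize the compressed sample to a lattice shifted by bit*delta/2."""
    y = mu_compress(x)
    offset = bit * delta / 2.0
    yq = np.round((y - offset) / delta) * delta + offset
    return mu_expand(yq)

def qim_detect(xw, delta=0.05):
    """Decode: pick the lattice (offset 0 or delta/2) nearest to the compressed sample."""
    y = mu_compress(xw)
    d0 = np.abs(y - np.round(y / delta) * delta)
    d1 = np.abs(y - (np.round((y - delta / 2) / delta) * delta + delta / 2))
    return int(d1 < d0)
```

Because quantization happens after mu-law compression, the effective step in the signal domain grows with amplitude, which is the perceptual advantage the abstract refers to.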

  10. An adaptive tensor voting algorithm combined with texture spectrum

    Science.gov (United States)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain an adaptive scale parameter for the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm can create more significant and correct structures in the original image, in accordance with human visual perception. At the same time, the proposed method improves edge-extraction quality, efficiently reducing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature thresholding procedure, and the resulting image clearly and efficiently displays the faint crack signals submerged in the complicated background.

  11. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his or her voice. The voice is a robust biometric modality with a strong impact in several application areas. In this study, a new combined learning scheme is proposed, based on the Gaussian mixture model-universal background model (GMM-UBM) and learning vector quantization (LVQ), for automatic text-independent speaker identification. Feature vectors, constituted by Mel-frequency cepstral coefficients (MFCC) extracted from the speech signal, are used for training on the New England subset of the TIMIT database. The best results obtained on test data, using 36 MFCC features, were 90% for gender-independent speaker identification, 97% for male speakers, and 93% for female speakers.
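The LVQ component can be illustrated with the classic LVQ1 update rule: attract the winning prototype when its class matches the sample, repel it otherwise. The learning rate and the toy 2-D data are illustrative (the paper uses 36-dimensional MFCC vectors), and this is generic LVQ1 rather than the authors' exact combination with GMM-UBM:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, eta=0.1, epochs=30, seed=0):
    """LVQ1: move the nearest prototype toward same-class samples, away otherwise."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = ((P - X[i]) ** 2).sum(axis=1).argmin()   # winner prototype
            if proto_labels[j] == y[i]:
                P[j] += eta * (X[i] - P[j])              # attract
            else:
                P[j] -= eta * (X[i] - P[j])              # repel
    return P

def lvq1_predict(X, P, proto_labels):
    """Classify by the label of the nearest prototype."""
    return proto_labels[((X[:, None, :] - P[None, :, :]) ** 2).sum(-1).argmin(1)]
```

The prototypes act as a tiny, interpretable codebook of class representatives, which is the appeal of LVQ noted elsewhere in these records.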

  12. Fusion of deep learning architectures, multilayer feedforward networks and learning vector quantizers for deep classification learning

    NARCIS (Netherlands)

    Villmann, T.; Biehl, M.; Villmann, A.; Saralajew, S.

    2017-01-01

    The advantage of prototype based learning vector quantizers are the intuitive and simple model adaptation as well as the easy interpretability of the prototypes as class representatives for the class distribution to be learned. Although they frequently yield competitive performance and show robust

  13. Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2011-01-01

    Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 GPGPU boards. The performance of the multi-GPU implementation is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and slightly (approximately 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs can improve significantly as the number of GPUs increases.
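The FDTD update is a local stencil applied independently at every grid point, which is why it maps so well to GPUs. A minimal one-dimensional Yee-scheme sketch in plain numpy (grid size, step count, source, and normalized units are all illustrative; the paper's code is three-dimensional CUDA):

```python
import numpy as np

def fdtd_1d(n=200, steps=300, src=100):
    """1-D FDTD (Yee scheme, normalized units): E and H live on staggered grids."""
    ez = np.zeros(n)        # electric field at integer grid points
    hy = np.zeros(n - 1)    # magnetic field at half-integer points
    for t in range(steps):
        hy += 0.5 * (ez[1:] - ez[:-1])                 # H update (Courant number 0.5)
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])           # E update (ends held at 0)
        ez[src] += np.exp(-((t - 30) ** 2) / 100.0)    # soft Gaussian source
    return ez
```

Each vectorized line above becomes one embarrassingly parallel kernel launch on a GPU; multi-GPU versions split the grid into slabs and exchange the boundary planes each step.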

  14. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images have traditionally been decompressed on the CPU. However, this is too slow when the images become large, for example, 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques. The data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the two-dimensional (2D) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.

  15. Visualizing whole-brain DTI tractography with GPU-based Tuboids and LoD management.

    Science.gov (United States)

    Petrovic, Vid; Fallon, James; Kuester, Falko

    2007-01-01

    Diffusion Tensor Imaging (DTI) of the human brain, coupled with tractography techniques, enable the extraction of large-collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance.

  16. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation; Compression de donnees sismiques: bancs de filtres et transformees etendues, synthese et adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform, used together with a zero-tree type coding, for the first time in seismic applications. Classical wavelet transforms nevertheless yield a rather rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so-called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or de Queiroz et al.'s Generalized Lapped Orthogonal Transforms (GenLOT). We propose several optimization criteria to build 'extended transforms' adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type coding technique as used with wavelets. Both proposed algorithms provide exact compression rate choice, block-wise compression (in the case of extended transforms), and partial decompression for quality control or visualization. Performance is tested on a set of actual seismic data and evaluated for several quality measures. We also compare the algorithms to other seismic compression algorithms. (author)

  17. Introduction to Vector Field Visualization

    Science.gov (United States)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and the geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from some case studies will be used as part of the motivation.

  18. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables via a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.

  19. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, whereas the human visual system model and a fuzzy inference system help improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more tolerant of noise carry more information than less tolerant areas. To achieve security, the location of watermark embedding is kept secret and used as a key at watermark extraction time; for capacity, both singular values and singular vectors are involved in the watermark embedding process. As a result, four contradictory requirements (imperceptibility, robustness, security, and capacity) are achieved, as the results suggest. Both subjective and objective methods are used to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of imperceptibility, peak signal-to-noise ratio, the structural similarity index, visual information fidelity, and normalized color difference are used; for objective analysis of robustness, normalized correlation, bit error rate, normalized Hamming distance, and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance, as the results suggest.

  20. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  1. USING LEARNING VECTOR QUANTIZATION METHOD FOR AUTOMATED IDENTIFICATION OF MYCOBACTERIUM TUBERCULOSIS

    Directory of Open Access Journals (Sweden)

    Endah Purwanti

    2012-01-01

    Full Text Available In this paper, we develop an automated method for the detection of tubercle bacilli in clinical specimens, principally sputum. This investigation is the first attempt to automatically identify TB bacilli in sputum using image processing and learning vector quantization (LVQ) techniques. An evaluation of LVQ on a tuberculosis dataset shows an average accuracy of 91.33%.
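The basic LVQ1 training rule behind such classifiers can be sketched as follows (an illustrative sketch, not the authors' exact network or parameters):

```python
def lvq1_train(prototypes, labels, data, targets, lr=0.1, epochs=20):
    """LVQ1 update: pull the nearest prototype toward a same-class sample,
    push it away from a different-class sample."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, t in zip(data, targets):
            # Find the prototype nearest to the training sample.
            j = min(range(len(protos)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i], x)))
            sign = 1.0 if labels[j] == t else -1.0
            protos[j] = [p + sign * lr * (a - p) for p, a in zip(protos[j], x)]
    return protos

def lvq_classify(protos, labels, x):
    """Label of the nearest prototype."""
    j = min(range(len(protos)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i], x)))
    return labels[j]
```

In the bacilli-detection setting, `x` would be a feature vector extracted from a sputum image region and the targets would mark bacillus versus background.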

  2. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  3. Analysis of Vector Quantizers Using Transformed Codebooks with Application to Feedback-Based Multiple Antenna Systems

    Directory of Open Access Journals (Sweden)

    Bhaskar D. Rao

    2008-07-01

    Full Text Available Transformed codebooks are obtained by a transformation of a given codebook to best match the statistical environment at hand. The procedure, though suboptimal, has recently been suggested for feedback of channel state information (CSI in multiple antenna systems with correlated channels because of their simplicity and effectiveness. In this paper, we first consider the general distortion analysis of vector quantizers with transformed codebooks. Bounds on the average system distortion of this class of quantizers are provided. It exposes the effects of two kinds of suboptimality introduced by the transformed codebook, namely, the loss caused by suboptimal point density and the loss caused by mismatched Voronoi shape. We then focus our attention on the application of the proposed general framework to providing capacity analysis of a feedback-based MISO system over spatially correlated fading channels. In particular, with capacity loss as an objective function, upper and lower bounds on the average distortion of MISO systems with transformed codebooks are provided and compared to that of the optimal channel quantizers. The expressions are examined to provide interesting insights in the high and low SNR regime. Numerical and simulation results are presented which confirm the tightness of the distortion bounds.

  4. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  5. Quantized, piecewise linear filter network

    DEFF Research Database (Denmark)

    Sørensen, John Aasted

    1993-01-01

    A quantization based piecewise linear filter network is defined. A method for the training of this network based on local approximation in the input space is devised. The training is carried out by repeatedly alternating between vector quantization of the training set into quantization classes...... and equalization of the quantization classes linear filter mean square training errors. The equalization of the mean square training errors is carried out by adapting the boundaries between neighbor quantization classes such that the differences in mean square training errors are reduced...

  6. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  7. Automatic inspection of textured surfaces by support vector machines

    Science.gov (United States)

    Jahanbin, Sina; Bovik, Alan C.; Pérez, Eduardo; Nair, Dinesh

    2009-08-01

    Automatic inspection of manufactured products with natural looking textures is a challenging task. Products such as tiles, textile, leather, and lumber project image textures that cannot be modeled as periodic or otherwise regular; therefore, a stochastic modeling of local intensity distribution is required. An inspection system to replace human inspectors should be flexible in detecting flaws such as scratches, cracks, and stains occurring in various shapes and sizes that have never been seen before. A computer vision algorithm is proposed in this paper that extracts local statistical features from grey-level texture images decomposed with wavelet frames into subbands of various orientations and scales. The local features extracted are second order statistics derived from grey-level co-occurrence matrices. Subsequently, a support vector machine (SVM) classifier is trained to learn a general description of normal texture from defect-free samples. This algorithm is implemented in LabVIEW and is capable of processing natural texture images in real-time.
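The second-order statistics mentioned can be sketched directly (a plain grey-level co-occurrence matrix and three common Haralick-style statistics; the paper computes such features per wavelet subband, which this sketch omits):

```python
def glcm(image, dx, dy, levels):
    """Grey-level co-occurrence matrix for offset (dx, dy), normalized to sum 1."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def haralick(p):
    """A few second-order statistics derived from a normalized GLCM."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, energy, homogeneity
```

A one-class SVM trained on such feature vectors from defect-free samples then flags texture regions whose statistics fall outside the learned description of "normal".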

  8. Specialized Computer Systems for Environment Visualization

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields, as part of the tasks solved by virtual and augmented reality systems as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing, and graphically visualizing geographic data. Algorithmic, hardware, and software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to achieve high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing which stages can be realized on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, and that it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without additional optimization procedures. The acceleration is on average 11 and 54 times for the test GPUs.

  9. New techniques in 3D scalar and vector field visualization

    Energy Technology Data Exchange (ETDEWEB)

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.

  10. New techniques in 3D scalar and vector field visualization

    International Nuclear Information System (INIS)

    Max, N.; Crawfis, R.; Becker, B.

    1993-01-01

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.

  11. Tools for signal compression applications to speech and audio coding

    CERN Document Server

    Moreau, Nicolas

    2013-01-01

    This book presents tools and algorithms required to compress/uncompress signals such as speech and music. These algorithms are largely used in mobile phones, DVD players, HDTV sets, etc. In a first rather theoretical part, this book presents the standard tools used in compression systems: scalar and vector quantization, predictive quantization, transform quantization, entropy coding. In particular we show the consistency between these different tools. The second part explains how these tools are used in the latest speech and audio coders. The third part gives Matlab programs simulating t
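Two of the standard tools listed, scalar quantization and predictive quantization, combine into a closed-loop DPCM coder; a minimal sketch (illustrative function names, not code from the book):

```python
def dpcm_encode(samples, step):
    """Predictive (DPCM) coding: quantize the difference between each sample
    and the previous *reconstructed* sample with a uniform scalar quantizer."""
    indices, pred = [], 0.0
    for s in samples:
        idx = round((s - pred) / step)  # uniform mid-tread quantizer index
        indices.append(idx)
        pred += idx * step              # track the decoder-side reconstruction
    return indices

def dpcm_decode(indices, step):
    """Rebuild the signal by accumulating dequantized residuals."""
    out, pred = [], 0.0
    for idx in indices:
        pred += idx * step
        out.append(pred)
    return out
```

Because the encoder predicts from the decoder's reconstruction rather than the original signal, every sample's reconstruction error stays within half a quantization step and errors do not accumulate.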

  12. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    Science.gov (United States)

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2018-06-01

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  13. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian

    2013-10-01

    Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid and to render the grid via volume ray-casting. In large-scale applications such as astrophysics, however, the required grid resolution can easily exceed 10K samples per spatial dimension, letting resampling approaches appear unfeasible. In this paper we demonstrate that even in these extreme cases such approaches perform surprisingly well, both in terms of memory requirement and rendering performance. We resample the particle data to a multiresolution multiblock grid, where the resolution of the blocks is dictated by the particle distribution. From this structure we build an octree grid, and we then compress each block in the hierarchy at no visual loss using wavelet-based compression. Since decompression can be performed on the GPU, it can be integrated effectively into GPU-based out-of-core volume ray-casting. We compare our approach to the perspective grid approach which resamples at run-time into a view-aligned grid. We demonstrate considerably faster rendering times at high quality, at only a moderate memory increase compared to the raw particle set. © 2013 IEEE.

  14. Aesthetic Perception of Visual Textures: A Holistic Exploration using Texture Analysis, Psychological Experiment and Perception Modeling

    Directory of Open Access Journals (Sweden)

    Jianli eLiu

    2015-11-01

    Full Text Available Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we model the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect evaluators' aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen that are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the eight pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and nonlinear regression models for aesthetic prediction, taking dimensionality-reduced texture features as independent variables and the aesthetic properties of visual textures as dependent variables. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.

  15. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided against a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
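The GM-Algorithm itself is not described in enough detail here to reproduce, but the difference coding applied to adjacent vertex locations can be sketched generically (illustrative names; a real coder would follow this with arithmetic coding as described above):

```python
def delta_encode(vertices):
    """Store the first vertex absolutely, then coordinate-wise differences;
    consecutive mesh vertices tend to be close, so deltas are small and
    compress well under an entropy coder."""
    out = [vertices[0]]
    for prev, cur in zip(vertices, vertices[1:]):
        out.append(tuple(c - p for c, p in zip(cur, prev)))
    return out

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    verts = [deltas[0]]
    for d in deltas[1:]:
        verts.append(tuple(p + c for p, c in zip(verts[-1], d)))
    return verts
```

The round trip is lossless, which matches the paper's claim of no reduction in the number of reconstructed vertices and faces.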

  16. A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs

    Directory of Open Access Journals (Sweden)

    Guixia He

    2016-01-01

    Full Text Available Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR fully outperforms CSR-scalar, CSR-vector, and CSRMV and HYBMV from the vendor-tuned CUSPARSE library, and is comparable with a most recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR from a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, whether or not the communication between GPUs is considered, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.
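As background for the formats discussed, here is a scalar sketch of CSR storage and the SpMV loop it supports (each outer iteration is what CSR-scalar assigns to one GPU thread):

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x with A in compressed sparse row (CSR) form:
    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i,
    values[k] holds the entry at column col_idx[k]."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

The irregularity the abstract mentions is visible here: consecutive threads (rows) read `values` and `col_idx` at unrelated offsets, which is what defeats memory coalescing on a GPU.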

  17. Mixed quantization dimensions of self-similar measures

    International Nuclear Information System (INIS)

    Dai Meifeng; Wang Xiaoli; Chen Dandan

    2012-01-01

    Highlights: ► We define the mixed quantization dimension of finitely many measures. ► Formula of mixed quantization dimensions of self-similar measures is given. ► Illustrate the behavior of mixed quantization dimension as a function of order. - Abstract: Classical multifractal analysis studies the local scaling behaviors of a single measure. However recently mixed multifractal has generated interest. The purpose of this paper is some results about the mixed quantization dimensions of self-similar measures.

  18. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4 to 2 dB compared with the current state of the art, while maintaining low computational complexity.

  19. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  20. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle-advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective and, among the selection techniques, our informed selection provides the most accurate results, followed by multi-resolution selection, with the worst accuracy coming from random subsamples.

  1. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the overmerging problem by making use of rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).

  2. Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics

    Directory of Open Access Journals (Sweden)

    Carlos González-Gutiérrez

    2018-01-01

    Full Text Available Aberrations introduced by atmospheric turbulence in large telescopes are compensated using adaptive optics systems, where the use of deformable mirrors and multiple sensors relies on complex control systems. Recently, the development of larger-scale telescopes such as the E-ELT or TMT has created a computational challenge due to the increasing complexity of the new adaptive optics systems. The Complex Atmospheric Reconstructor based on Machine Learning (CARMEN) is an algorithm based on artificial neural networks, designed to compensate for atmospheric turbulence. In recent years, the use of GPUs has proved to be a great solution to speed up the training of neural networks, and different frameworks have been created to ease their development. The implementation of CARMEN in different multi-GPU frameworks is presented in this paper, along with its development in a language originally designed for GPUs, namely CUDA. This implementation offers the best response in all the presented cases, although the advantage of using more than one GPU appears only for large networks.

  3. Parallel Computer System for 3D Visualization Stereo on GPU

    Science.gov (United States)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on a modified ray tracing method developed by the authors for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on Graphics Processing Units/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration of the multi-threaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by newer GPUs with a large number of processing cores and multiprocessors, as well as an optimized configuration of the CUDA network.

  4. Adaptive Watermarking Scheme Using Biased Shift of Quantization Index

    Directory of Open Access Journals (Sweden)

    Young-Ho Seo

    2010-01-01

    Full Text Available We propose a watermark embedding and extracting method for blind watermarking. It uses the characteristics of a scalar quantizer to comply with the recommendations in the JPEG, MPEG series, or JPEG2000 standards. Our method embeds a watermark bit by shifting the corresponding frequency transform coefficient (the watermark position) to a quantization index determined by the value of the watermark bit, which prevents the watermark information from being lost during data compression. The watermark can be embedded within the quantization process itself, without an additional watermarking step, which means it can be performed at the same speed as the compression process. In the embedding process, a linear feedback shift register (LFSR) is used to hide the watermark information and the watermark positions. The experimental results showed that the proposed method provides sufficient robustness and imperceptibility, the major requirements for watermarking.
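One standard realization of this idea, shifting a transform coefficient to a quantization index chosen by the watermark bit, is quantization index modulation (QIM), where the parity of the index carries the bit. A minimal sketch (illustrative, without the LFSR position scrambling described above):

```python
def qim_embed(coeff, bit, step):
    """Embed one bit by snapping a coefficient to the nearest even (bit 0)
    or odd (bit 1) multiple of the quantization step."""
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1 if coeff >= q * step else -1  # move to the nearer valid index
    return q * step

def qim_extract(coeff, step):
    """Recover the bit from the parity of the nearest quantization index."""
    return round(coeff / step) % 2
```

Because the embedded coefficient already sits exactly on a quantization level, requantizing it during compression leaves the bit intact, and perturbations smaller than half a step are also survived.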

  5. On a gauge theory of the self-dual field and its quantization

    International Nuclear Information System (INIS)

    Srivastava, P.P.

    1990-01-01

    A gauge theory of self-dual fields is constructed by adding a Wess-Zumino term to the recently studied formulation based on a second-order scalar field lagrangian carrying with it an auxiliary vector field to take care of the self-duality constraint in a linear fashion. The two versions are quantized using the BRST formulation following the BFV procedure. No violation of microcausality occurs and the action of the ordinary scalar field may not be written as the sum of the actions of the self- and anti-self-dual fields. (orig.)

  6. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
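Coders of this family map small prediction residuals to short codewords without storing a code table; a Golomb-Rice code is the classic example (an illustrative sketch with k >= 1, not the flight coder's exact code set):

```python
def rice_encode(n, k):
    """Golomb-Rice code for a nonnegative integer n:
    quotient n >> k in unary (ones then a zero), remainder in k bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def rice_decode(bits, k):
    """Invert rice_encode for a single codeword."""
    q = bits.index("0")                # unary quotient ends at the first 0
    r = int(bits[q + 1:q + 1 + k], 2)  # k-bit remainder
    return (q << k) | r
```

Signed prediction residuals would first be mapped to nonnegative integers (e.g. by interleaving positives and negatives), and an adaptive coder picks k per block from the local statistics, which is the spirit of the mode selection described above.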

  7. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  8. Tree-structured vector quantization of CT chest scans: Image quality and diagnostic accuracy

    International Nuclear Information System (INIS)

    Cosman, P.C.; Tseng, C.; Gray, R.M.; Olshen, R.A.; Moses, L.E.; Davidson, H.C.; Bergin, C.J.; Riskin, E.A.

    1993-01-01

    The quality of lossy compressed images is often characterized by signal-to-noise ratios, informal tests of subjective quality, or receiver operating characteristic (ROC) curves that include subjective appraisals of the value of an image for a particular application. The authors believe that for medical applications, lossy compressed images should be judged by a more natural and fundamental aspect of relative image quality: their use in making accurate diagnoses. They apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The study is unlike previous studies of the effects of lossy compression in that it considers non-binary detection tasks, simulates actual diagnostic practice instead of using paired tests or confidence rankings, uses statistical methods that are more appropriate for non-binary clinical data than are the popular ROC curves, and uses low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. Their diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. For the image modality, compression algorithm, and diagnostic tasks they consider, the original 12 bit per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy.
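    Tree-structured VQ trades a small rate penalty for logarithmic search: the encoder descends a binary tree of test vectors instead of scanning the full codebook. The node layout below is hypothetical; the study's codebooks would be designed with the usual splitting and Lloyd iterations.

```python
def tsvq_encode(x, node):
    # Greedy descent through a binary codeword tree: at each internal
    # node follow the child whose test vector is nearer in squared
    # distance. The leaf holds the reproduction vector; the path bits
    # form the (variable-length) channel index.
    bits = []
    while 'left' in node:
        dl = sum((a - b) ** 2 for a, b in zip(x, node['left']['vec']))
        dr = sum((a - b) ** 2 for a, b in zip(x, node['right']['vec']))
        bit = 0 if dl <= dr else 1
        bits.append(bit)
        node = node['left'] if bit == 0 else node['right']
    return bits, node['vec']

# Unbalanced tree: leaves at different depths give a variable-rate code.
tree = {'left': {'vec': [0.0],
                 'left': {'vec': [-1.0]},
                 'right': {'vec': [0.5]}},
        'right': {'vec': [2.0]}}
```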

  10. Geometrical Modification of Learning Vector Quantization Method for Solving Classification Problems

    Directory of Open Access Journals (Sweden)

    Korhan GÜNEL

    2016-09-01

    Full Text Available In this paper, a geometrical scheme is presented to show how to overcome a problem encountered when the generalized delta learning rule is used within a competitive learning model. A theoretical methodology is introduced for describing the quantization of data via rotating prototype vectors on hyper-spheres. The proposed learning algorithm is tested and verified on different multidimensional datasets, including a binary-class dataset and two multiclass datasets from the UCI repository, and a multiclass dataset constructed by us. The proposed method is compared with several baseline learning vector quantization variants from the literature for all domains. A large number of experiments verify the performance of our proposed algorithm, with acceptable accuracy and macro F1 scores.
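    For reference, the baseline LVQ1 update that such variants modify attracts the winning prototype toward a correctly labelled sample and repels it from a mislabelled one. This is a textbook sketch; the paper's geometric version instead rotates unit prototypes on a hypersphere.

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    # One LVQ1 step: find the nearest prototype (squared distance),
    # then attract it to x if its class label matches y, else repel it.
    dists = [sum((p_i - x_i) ** 2 for p_i, x_i in zip(p, x))
             for p in prototypes]
    j = dists.index(min(dists))
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] = [p_i + sign * lr * (x_i - p_i)
                     for p_i, x_i in zip(prototypes[j], x)]
    return j
```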

  11. GPU-accelerated brain connectivity reconstruction and visualization in large-scale electron micrographs

    KAUST Repository

    Jeong, Wonki

    2011-01-01

    This chapter introduces a GPU-accelerated interactive, semiautomatic axon segmentation and visualization system. Two challenging problems have been addressed: the interactive 3D axon segmentation and the interactive 3D image filtering and rendering of implicit surfaces. The reconstruction of neural connections to understand the function of the brain is an emerging and active research area in neuroscience. With the advent of high-resolution scanning technologies, such as 3D light microscopy and electron microscopy (EM), reconstruction of complex 3D neural circuits from large volumes of neural tissues has become feasible. Among them, only EM data can provide sufficient resolution to identify synapses and to resolve extremely narrow neural processes. These high-resolution, large-scale datasets pose challenging problems, for example, how to process and manipulate large datasets to extract scientifically meaningful information using a compact representation in a reasonable processing time. The running time of the multiphase level set segmentation method has been measured on the CPU and GPU. The CPU version is implemented using the ITK image class and the ITK distance transform filter. The numerical part of the CPU implementation is similar to the GPU implementation for fair comparison. The main focus of this chapter is introducing the GPU algorithms and their implementation details, which are the core components of the interactive segmentation and visualization system. © 2011 NVIDIA Corporation and Wen-mei W. Hwu. Published by Elsevier Inc. All rights reserved.

  12. Fundamentals of Adaptive Intelligent Tutoring Systems for Self-Regulated Learning

    Science.gov (United States)

    2015-03-01

    ARL-SR-0318 ● MAR 2015. US Army Research Laboratory. Fundamentals of Adaptive Intelligent Tutoring Systems for Self-Regulated Learning, by Robert A Sottilare, Human Research and Engineering Directorate, ARL.

  13. Adaptive rate transmission for spectrum sharing system with quantized channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2011-03-01

    The capacity of a secondary link in spectrum sharing systems has been recently investigated in fading environments. In particular, the secondary transmitter is allowed to adapt its power and rate to maximize its capacity subject to the constraint of the maximum interference level allowed at the primary receiver. In most of the literature, it was assumed that estimates of the channel state information (CSI) of the secondary link and the interference level are made available at the secondary transmitter via infinite-resolution feedback links between the secondary/primary receivers and the secondary transmitter. However, the assumption of having infinite-resolution feedback links is not always practical as it requires an excessive amount of bandwidth. In this paper, we develop a framework for optimizing the performance of the secondary link in terms of the average spectral efficiency assuming quantized CSI available at the secondary transmitter. We develop a computationally efficient algorithm for optimally quantizing the CSI and finding the optimal power and rate employed at the cognitive transmitter for each quantized CSI level so as to maximize the average spectral efficiency. Our results give the number of CSI bits sufficient to achieve almost the maximum average spectral efficiency attained using full knowledge of the CSI for Rayleigh fading channels. © 2011 IEEE.
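    The gain from each added feedback bit can be explored with a toy model: quantize the SNR range into regions and transmit, within each region, at the rate its lower edge supports. This is a simplified sketch assuming exponentially distributed SNR (Rayleigh fading), not the paper's jointly optimized quantizer.

```python
import math

def avg_spectral_efficiency(thresholds, mean_snr):
    # Average spectral efficiency (bits/s/Hz) with quantized CSI:
    # region [t_i, t_{i+1}) is signalled at log2(1 + t_i), weighted by
    # its probability under an exponential SNR distribution.
    # Below thresholds[0] the transmitter stays silent (outage).
    ase = 0.0
    edges = list(thresholds) + [float('inf')]
    for lo, hi in zip(edges[:-1], edges[1:]):
        p_hi = 0.0 if hi == float('inf') else math.exp(-hi / mean_snr)
        p = math.exp(-lo / mean_snr) - p_hi
        ase += p * math.log2(1.0 + lo)
    return ase
```

    More thresholds (more feedback bits) monotonically increase the achievable average rate, which is the trade-off the paper quantifies.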

  14. Adaptive rate transmission for spectrum sharing system with quantized channel state information

    KAUST Repository

    Abdallah, Mohamed M.; Salem, Ahmed H.; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2011-01-01

    The capacity of a secondary link in spectrum sharing systems has been recently investigated in fading environments. In particular, the secondary transmitter is allowed to adapt its power and rate to maximize its capacity subject to the constraint of the maximum interference level allowed at the primary receiver. In most of the literature, it was assumed that estimates of the channel state information (CSI) of the secondary link and the interference level are made available at the secondary transmitter via infinite-resolution feedback links between the secondary/primary receivers and the secondary transmitter. However, the assumption of having infinite-resolution feedback links is not always practical as it requires an excessive amount of bandwidth. In this paper, we develop a framework for optimizing the performance of the secondary link in terms of the average spectral efficiency assuming quantized CSI available at the secondary transmitter. We develop a computationally efficient algorithm for optimally quantizing the CSI and finding the optimal power and rate employed at the cognitive transmitter for each quantized CSI level so as to maximize the average spectral efficiency. Our results give the number of CSI bits sufficient to achieve almost the maximum average spectral efficiency attained using full knowledge of the CSI for Rayleigh fading channels. © 2011 IEEE.

  15. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in the real-world problems often involves large matrix with arbitrary size. Therefore, parallelization is needed to speed up the calculation process that usually takes a long time. Graph partitioning techniques that have been discussed in the previous studies cannot be used to complete the parallelized calculation of matrix-vector multiplication with arbitrary size. This is due to the assumption of graph partitioning techniques that can only solve the square and symmetric matrix. Hypergraph partitioning techniques will overcome the shortcomings of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented by the GPU (graphics processing unit).
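    The computation being partitioned is a plain sparse matrix-vector product. In CSR form every row is an independent reduction, which is exactly what a CUDA kernel maps to one thread (or warp) per row; hypergraph partitioning then decides which rows and vector entries each device owns. A minimal CPU reference, independent of the partitioner:

```python
def csr_matvec(indptr, indices, data, x):
    # Sparse matrix-vector product y = A @ x with A in CSR form
    # (indptr: row offsets, indices: column ids, data: nonzeros).
    # Each row's dot product is independent of the others.
    y = []
    for r in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[r], indptr[r + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y
```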

  16. Direct Neural Conversion from Human Fibroblasts Using Self-Regulating and Nonintegrating Viral Vectors

    Directory of Open Access Journals (Sweden)

    Shong Lau

    2014-12-01

    Full Text Available Summary: Recent findings show that human fibroblasts can be directly programmed into functional neurons without passing via a proliferative stem cell intermediate. These findings open up the possibility of generating subtype-specific neurons of human origin for therapeutic use from fetal cells, from patients themselves, or from matched donors. In this study, we present an improved system for direct neural conversion of human fibroblasts. The neural reprogramming genes are regulated by the neuron-specific microRNA, miR-124, such that each cell turns off expression of the reprogramming genes once it has reached a stable neuronal fate. The regulated system can be combined with integrase-deficient vectors, providing a nonintegrative and self-regulated conversion system that avoids the problems associated with the integration of viral transgenes into the host genome. These modifications make the system suitable for clinical use and therefore represent a major step forward in the development of induced neurons for cell therapy. Lau et al. now use miRNA targeting to build a self-regulating neural conversion system. Combined with nonintegrating vectors, this system can efficiently drive conversion of human fibroblasts into functional induced neurons (iNs) suitable for clinical applications.

  17. Compression of Human Motion Animation Using the Reduction of Interjoint Correlation

    Directory of Open Access Journals (Sweden)

    Shiyu Li

    2008-01-01

    Full Text Available We propose two compression methods for human motion in 3D space, based on forward and inverse kinematics. In a motion chain, the movement of each joint is represented by a series of vector signals in 3D space. In general, specific types of joints such as end effectors often require higher precision than other joints in, for example, CG animation and robot manipulation. The first method, which combines the wavelet transform and forward kinematics, enables users to reconstruct the end effectors more precisely. Moreover, progressive decoding can be realized. Quantization distortion in a parent joint affects each child joint in turn and accumulates at the end effector. To address this problem and to control the movement of the whole body, we further propose a prediction method based on inverse kinematics. This method achieves efficient compression with a higher compression ratio and higher quality of the motion data. By comparing with conventional methods, we demonstrate the advantage of ours on typical motions.
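    The error-accumulation argument rests on forward kinematics: each joint's position is the running sum of link vectors whose orientations depend on every ancestor angle, so quantization error in a parent angle displaces all descendants. A 2-D planar sketch of that accumulation (the paper works with full 3-D chains):

```python
import math

def forward_kinematics(lengths, angles):
    # 2-D chain: accumulate joint angles along the chain and sum link
    # vectors to get each joint's position. An error in an early angle
    # rotates every later link, so distortion grows toward the end
    # effector.
    x = y = theta = 0.0
    positions = []
    for L, a in zip(lengths, angles):
        theta += a
        x += L * math.cos(theta)
        y += L * math.sin(theta)
        positions.append((x, y))
    return positions
```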

  18. Compression of Human Motion Animation Using the Reduction of Interjoint Correlation

    Directory of Open Access Journals (Sweden)

    Li Shiyu

    2008-01-01

    Full Text Available Abstract We propose two compression methods for human motion in 3D space, based on forward and inverse kinematics. In a motion chain, the movement of each joint is represented by a series of vector signals in 3D space. In general, specific types of joints such as end effectors often require higher precision than other joints in, for example, CG animation and robot manipulation. The first method, which combines the wavelet transform and forward kinematics, enables users to reconstruct the end effectors more precisely. Moreover, progressive decoding can be realized. Quantization distortion in a parent joint affects each child joint in turn and accumulates at the end effector. To address this problem and to control the movement of the whole body, we further propose a prediction method based on inverse kinematics. This method achieves efficient compression with a higher compression ratio and higher quality of the motion data. By comparing with conventional methods, we demonstrate the advantage of ours on typical motions.

  19. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    KAUST Repository

    Abdelfattah, Ahmad; Gendron, É ric; Gratadour, Damien; Keyes, David E.; Ltaief, Hatem; Sevin, Arnaud; Vidal, Fabrice

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate the tomographic problem in detail. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be, to our knowledge, the largest-scale tomographic AO matrix computation to date, and it opens new research directions for extreme-scale AO simulations. © 2014 Springer International Publishing Switzerland.
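    The reconstructor step described above, pseudo-inverting a large dense symmetric matrix through its eigenvalue decomposition, can be sketched in a few lines of NumPy. This is a CPU illustration of the numerics only; the paper's contribution lies in the multi-GPU SYMV kernel and the divide-and-conquer eigensolver.

```python
import numpy as np

def sym_pinv(A, tol=1e-10):
    # Pseudo-inverse of a symmetric matrix: diagonalize with eigh,
    # invert the eigenvalues above tol, zero out the rest (the
    # regularized/null modes), and recompose V diag(1/w) V^T.
    w, V = np.linalg.eigh(A)
    winv = np.zeros_like(w)
    keep = np.abs(w) > tol
    winv[keep] = 1.0 / w[keep]
    return (V * winv) @ V.T
```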

  20. Depth perception: cuttlefish (Sepia officinalis) respond to visual texture density gradients.

    Science.gov (United States)

    Josef, Noam; Mann, Ofri; Sykes, António V; Fiorito, Graziano; Reis, João; Maccusker, Steven; Shashar, Nadav

    2014-11-01

    Studies concerning the perceptual processes of animals are not only interesting, but are fundamental to the understanding of other developments in information processing among non-humans. Carefully used visual illusions have been proven to be an informative tool for understanding visual perception. In this behavioral study, we demonstrate that cuttlefish are responsive to visual cues involving texture gradients. Specifically, 12 out of 14 animals avoided swimming over a solid surface with a gradient picture that to humans resembles an illusionary crevasse, while only 5 out of 14 avoided a non-illusionary texture. Since texture gradients are well-known cues for depth perception in vertebrates, we suggest that these cephalopods were responding to the depth illusion created by the texture density gradient. Density gradients and relative densities are key features in distance perception in vertebrates. Our results suggest that they are fundamental features of vision in general, appearing also in cephalopods.

  1. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmission over a noisy channel, we first show that the LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with a split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with a weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over noisy and noiseless channels. Simulation results show that the proposed LSF encoder, incorporated in MELP, ensures better performance than the original 25 bits/frame MELP MSVQ, especially when the transmission channel is highly disturbed. Indeed, the LSF-SSCOVQ-RC yields significant improvements in LSF encoding performance by ensuring reliable transmission over a noisy channel.

  2. Global properties of systems quantized via bundles

    International Nuclear Information System (INIS)

    Doebner, H.D.; Werth, J.E.

    1978-03-01

    Take a smooth manifold M and a Lie algebra action (g-action) θ on M as the geometrical arena of a physical system moving on M with momenta given by θ. It is proposed to quantize the system with a Mackey-like method via the associated vector bundle ξ_ρ of a principal bundle ξ = (P, π, M, H) with model-dependent structure group H and with g-action φ on P lifted from θ on M. This (quantization) bundle ξ_ρ gives the Hilbert space L²(ξ_ρ, ω) of the system as the linear space of sections of ξ_ρ that are square integrable with respect to a volume form ω on M; the usual position operators are obtained; φ leads to a vector field representation D(φ_ρ, θ) of g in the Hilbert space and hence to momentum operators. The Hilbert space thus carries the quantum kinematics. In this quantization the physically important connection between geometrical properties of the system, e.g. quasi-completeness of θ and G-maximality of φ_ρ, and global properties of its quantized kinematics, e.g. skew-adjointness of the momenta and integrability of D(φ_ρ, θ), can easily be studied. The relation to Nelson's construction of a skew-adjoint non-integrable Lie algebra representation and to Palais' local G-action is discussed. Finally the results are applied to actions induced by coverings as examples of non-maximal φ_ρ on E_ρ lifted from maximal θ on M, which lead to direct consequences for the corresponding quantum kinematics.

  3. Adaptive Watermarking Algorithm in DCT Domain Based on Chaos

    Directory of Open Access Journals (Sweden)

    Wenhao Wang

    2013-05-01

    Full Text Available In order to improve the security, robustness and invisibility of digital watermarking, a new adaptive watermarking algorithm is proposed in this paper. First, the algorithm encrypts the watermark image with a chaotic sequence produced by the Logistic chaotic map. The original image is then divided into sub-blocks, each of which undergoes the discrete cosine transform (DCT), and the watermark information is embedded into the medium-frequency coefficients of the sub-blocks. With the features of the Human Visual System (HVS) and image texture taken into account during embedding, the embedding intensity of the watermark adaptively adjusts according to HVS and texture characteristics. Experimental results have shown that the proposed algorithm is robust against general image processing attacks, such as noise, cropping, filtering and JPEG compression, achieves a good tradeoff between invisibility and robustness, and offers better security.
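    The embedding step can be sketched as follows: transform an 8×8 block with an orthonormal DCT, perturb a mid-band coefficient, and scale the strength by the block variance as a crude texture proxy. This is an illustrative stand-in for the paper's HVS- and texture-based adaptation; the coefficient position (3, 2) and the constants are arbitrary choices for the sketch.

```python
import numpy as np

def dct_mat(n=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0, :] = np.sqrt(1.0 / n)
    return M

def embed_bit(block, bit, base=2.0):
    # Embed one (chaos-encrypted) watermark bit in a mid-band DCT
    # coefficient; the strength grows with block variance, so textured
    # blocks hide a stronger mark.
    D = dct_mat(block.shape[0])
    C = D @ block @ D.T
    alpha = base * (1.0 + block.var() / 255.0)
    C[3, 2] += alpha if bit else -alpha
    return D.T @ C @ D   # inverse transform of the marked block
```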

  4. Interactions between motion and form processing in the human visual system.

    Science.gov (United States)

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  5. Spatial compression impairs prism-adaptation in healthy individuals

    Directory of Open Access Journals (Sweden)

    Rachel J Scriven

    2013-05-01

    Full Text Available Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation is effective in ameliorating some neglect behaviours, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control processes in prism-adaptation may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced strategic control might result from a failure to detect prism-induced reaching errors properly, either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether strategic control and subsequent aftereffects were abnormal compared to standard prism adaptation. Each participant completed three prism-adaptation procedures within a MIRAGE mediated-reality environment, with direction errors recorded before, during and after adaptation. During prism-adaptation, visual feedback of the reach could be compressed, perturbed by noise or represented veridically. Compressed visual space significantly reduced strategic control and aftereffects compared to the control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms.

  6. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on established experimental results and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.
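    The CSF at the heart of such a model is typically a bandpass function of spatial frequency; one widely used analytic form is the Mannos-Sakrison curve below. The constants are quoted from memory and this is only a plausible stand-in for the paper's exact CSF; the idea is that subband quantizer steps are scaled by the inverse sensitivity.

```python
import math

def csf(f):
    # Mannos-Sakrison-style contrast sensitivity, f in cycles/degree:
    # low sensitivity at very low and very high frequencies, with a
    # peak near 8 cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))
```

    A wavelet subband covering frequencies near the peak would then get the finest quantizer step, while the highest-frequency subbands tolerate coarser quantization.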

  7. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on established experimental results and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  8. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
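    The decorrelation stage can be illustrated with the simplest subband pair, the Haar averages and differences, which reconstruct exactly; the quantization stage would act on the subband samples in between. A minimal sketch (the real system uses longer filters than Haar):

```python
def haar_analysis(x):
    # One level of Haar subband analysis: lowpass (pairwise averages)
    # and highpass (pairwise half-differences). The highpass band of a
    # smooth trace is sparse and quantizes well.
    lo = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return lo, hi

def haar_synthesis(lo, hi):
    # Perfect reconstruction from the two subbands.
    x = []
    for a, d in zip(lo, hi):
        x.extend([a + d, a - d])
    return x
```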

  9. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    Directory of Open Access Journals (Sweden)

    Ortega Antonio

    2005-01-01

    Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade the local minima and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
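    The Gibbs assignment that the low-complexity measures approximate is easy to state: at temperature T a sample is associated with every codeword in proportion to exp(-d/T), hardening to nearest-neighbor assignment as T approaches zero. A scalar sketch of the standard DA measure (not the paper's simplified ones):

```python
import math

def gibbs_assignments(x, codebook, T):
    # Soft (Gibbs) assignment of a scalar sample to codewords at
    # temperature T. Subtracting the minimum distance keeps the
    # exponentials numerically safe; as T -> 0 the distribution
    # hardens to the nearest codeword.
    d = [(x - c) ** 2 for c in codebook]
    m = min(d)
    w = [math.exp(-(di - m) / T) for di in d]
    s = sum(w)
    return [wi / s for wi in w]
```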

  10. Quantization of the minimal and non-minimal vector field in curved space

    OpenAIRE

    Toms, David J.

    2015-01-01

    The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic...

  11. A GPU-accelerated implicit meshless method for compressible flows

    Science.gov (United States)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method can be further improved by ten to fifteen times on the GPU.
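
    The coloring step can be illustrated with a simple greedy scheme; `color_points` below is a hypothetical helper (the record describes the method only at this level) showing the idea of painting neighboring points with different colors so that each color group carries no mutual data dependency and can be swept in parallel:

```python
def color_points(neighbors):
    """Greedy 'rainbow' coloring: give each point the smallest color not
    already used by any of its colored neighbors. `neighbors` maps each
    point id to the set of point ids it depends on (symmetric)."""
    colors = {}
    for p in sorted(neighbors):
        used = {colors[q] for q in neighbors[p] if q in colors}
        c = 0
        while c in used:
            c += 1
        colors[p] = c
    return colors
```

    Points sharing a color can then be updated by independent GPU threads, and the LU-SGS sweep proceeds color by color.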

  12. Turbulence Visualization at the Terascale on Desktop PCs

    KAUST Repository

    Treib, M.

    2012-12-01

    Despite the ongoing efforts in turbulence research, the universal properties of the turbulence small-scale structure and the relationships between small- and large-scale turbulent motions are not yet fully understood. The visually guided exploration of turbulence features, including the interactive selection and simultaneous visualization of multiple features, can further progress our understanding of turbulence. Accomplishing this task for flow fields in which the full turbulence spectrum is well resolved is challenging on desktop computers. This is due to the extreme resolution of such fields, requiring memory and bandwidth capacities going beyond what is currently available. To overcome these limitations, we present a GPU system for feature-based turbulence visualization that works on a compressed flow field representation. We use a wavelet-based compression scheme including run-length and entropy encoding, which can be decoded on the GPU and embedded into brick-based volume ray-casting. This enables a drastic reduction of the data to be streamed from disk to GPU memory. Our system derives turbulence properties directly from the velocity gradient tensor, and it either renders these properties in turn or generates and renders scalar feature volumes. The quality and efficiency of the system is demonstrated in the visualization of two unsteady turbulence simulations, each comprising a spatio-temporal resolution of 1024⁴. On a desktop computer, the system can visualize each time step in 5 seconds, and it achieves about three times this rate for the visualization of a scalar feature volume. © 1995-2012 IEEE.

  13. KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-11

    KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures while avoiding code rewriting and retaining compliance with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, especially for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to multi-GPU environments, which requires the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and provides smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes and achieves asymptotically up to 50% and 60% speedup against the best competitor on single-GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS) for larger dissemination, starting from version 6.0. © 2016 ACM.

  14. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  15. Using Geometrical Properties for Fast Indexation of Gaussian Vector Quantizers

    Directory of Open Access Journals (Sweden)

    Vassilieva EA

    2007-01-01

    Full Text Available Vector quantization is a classical method used in mobile communications. Each sequence of samples of the discretized vocal signal is associated with the closest -dimensional codevector of a given set called the codebook. Only the binary indices of these codevectors (the codewords are transmitted over the channel. Since channels are generally noisy, the codewords received are often slightly different from the codewords sent. In order to minimize the distortion of the original signal due to this noisy transmission, codevectors indexed by one-bit-different codewords should have a small mutual Euclidean distance. This paper is devoted to this problem of index assignment of binary codewords to the codevectors. When the vector quantizer has a Gaussian structure, we show that a fast index assignment algorithm based on simple geometrical and combinatorial considerations can improve the SNR at the receiver by 5 dB with respect to a purely random assignment. We also show that in the Gaussian case this algorithm outperforms the classical combinatorial approach in the field.
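
    The requirement that one-bit-different codewords index nearby codevectors is exactly what a Gray code provides along a one-dimensional ordering; the sketch below is a strong simplification of the paper's geometric assignment, ordering codevectors along their principal direction and handing out Gray-coded indices (both helper names are hypothetical):

```python
import numpy as np

def gray_code(i):
    """Binary-reflected Gray code: consecutive values differ in one bit."""
    return i ^ (i >> 1)

def assign_indices(codevectors):
    """Order codevectors along their first principal direction and give
    consecutive vectors Gray-coded indices, so a single bit flip in the
    transmitted codeword maps to a geometrically nearby codevector."""
    X = np.asarray(codevectors, float)
    direction = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
    order = np.argsort(X @ direction)
    return {int(order[rank]): gray_code(rank) for rank in range(len(order))}
```

    A real Gaussian-structured codebook is not one-dimensional, which is why the paper needs genuinely geometric and combinatorial arguments rather than a single projection.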

  16. Relational adaptivity - enacting human-centric systems design

    DEFF Research Database (Denmark)

    Petersen, Kjell Yngve

    2016-01-01

    Human-centered design approaches place the experiencing human at the center of concern, situated in relation to the dynamics of the environmental condition and the variables of the system of control and sensing. Taking the approach of enacted design methods to enforce the experience... of the inhabitant as core in human-centered design solutions, the intelligence of the connected sensors is suggested to be developed as an actual learning and self-adjusting adaptive environment, where the adaptive system is part of a negotiation with users on the qualities of the environment. We will present... a fully functional sketching environment for adaptive sensor-control systems, which enables integration of the complex situation of everyday activities and human well-being. The proposed sketching environment allows for the development of sensor systems related to lighting conditions and human occupancy...

  17. Integrating principal component analysis and vector quantization with support vector regression for sulfur content prediction in HDS process

    Directory of Open Access Journals (Sweden)

    Shokri Saeid

    2015-01-01

    Full Text Available An accurate prediction of sulfur content is very important for proper operation and product quality control in the hydrodesulfurization (HDS) process. For this purpose, a reliable data-driven soft sensor utilizing Support Vector Regression (SVR) was developed, and the effects of integrating Vector Quantization (VQ) with Principal Component Analysis (PCA) were studied on the assessment of this soft sensor. First, in the pre-processing step, the PCA and VQ techniques were used to reduce the dimensions of the original input datasets. Then, the compressed datasets were used as input variables for the SVR model. Experimental data from the HDS setup were employed to validate the proposed integrated model. The integration of VQ/PCA techniques with the SVR model was able to increase the prediction accuracy of SVR. The obtained results show that the integrated VQ-SVR technique was better than PCA-SVR in prediction accuracy. Also, VQ decreased the sum of the training and test times of the SVR model in comparison with PCA. For further evaluation, the performance of the VQ-SVR model was also compared to that of SVR alone. The obtained results indicated that the VQ-SVR model delivered the most satisfactory predicting performance (AARE = 0.0668 and R² = 0.995) in comparison with the investigated models.
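
    The compress-then-regress pipeline can be sketched with NumPy alone; in the toy version below a k-means-style quantizer stands in for the VQ step and ordinary least squares stands in for SVR (both substitutions and all function names are illustrative, not the paper's models):

```python
import numpy as np

def vq_compress(X, n_codes, iters=20, rng=0):
    """Simple k-means-style vector quantization used as pre-processing:
    each sample is replaced by its nearest codevector."""
    rng = np.random.default_rng(rng)
    codebook = X[rng.choice(len(X), n_codes, replace=False)].copy()
    for _ in range(iters):
        idx = ((X[:, None] - codebook[None]) ** 2).sum(-1).argmin(1)
        for k in range(n_codes):
            if (idx == k).any():
                codebook[k] = X[idx == k].mean(0)
    return codebook[idx]

def fit_predict(X_train, y_train, X_test):
    """Ordinary least squares stands in for SVR here (sketch only)."""
    A = np.c_[X_train, np.ones(len(X_train))]
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.c_[X_test, np.ones(len(X_test))] @ w
```

    Replacing raw samples with a small set of codevectors is what shrinks the regressor's effective training set and hence its training time, the effect the record reports for VQ-SVR.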

  18. Adaptive semantics visualization

    CERN Document Server

    Nazemi, Kawa

    2016-01-01

    This book introduces a novel approach for intelligent visualizations that adapts the different visual variables and data processing to human behavior and given tasks. A number of new algorithms and methods are introduced to satisfy the human need for information and knowledge and to enable a usable and attractive way of information acquisition. Each method and algorithm is illustrated in a replicable way to enable the reproduction of the entire “SemaVis” system or parts of it. The evaluation is scientifically well designed and was performed with a sufficient number of participants to validate the benefits of the methods. Besides the introduced new approaches and algorithms, readers may find a sophisticated literature review of Information Visualization and Visual Analytics, semantics and information extraction, and intelligent and adaptive systems. This book is based on an awarded and distinguished doctoral thesis in computer science.

  19. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using the power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
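
    The two stages, kNN search followed by adaptively-powered weighted interpolation, can be sketched as follows; note that the density-to-power rule here is a made-up placeholder, and the brute-force neighbor search replaces the paper's grid-accelerated GPU version:

```python
import numpy as np

def aidw_interpolate(points, values, query, k=4, powers=(1.0, 2.0, 3.0)):
    """Sketch of Adaptive IDW: find the k nearest data points, pick a
    power parameter from the local point density, then interpolate with
    inverse-distance weights raised to that power."""
    d = np.linalg.norm(points - query, axis=1)
    nn = np.argsort(d)[:k]              # brute-force kNN (paper: even grid)
    mean_dist = d[nn].mean()
    # placeholder adaptivity rule: sparser neighbourhoods -> higher power
    power = powers[min(int(mean_dist), len(powers) - 1)]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** power
    return float((w * values[nn]).sum() / w.sum())
```

    In the GPU version each interpolated point maps to one thread, which is why a fast, uniform-grid kNN search dominates the overall speedup.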

  20. Selective Extraction of Entangled Textures via Adaptive PDE Transform

    Directory of Open Access Journals (Sweden)

    Yang Wang

    2012-01-01

    Full Text Available Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces an adaptive PDE transform algorithm that appropriately thresholds the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human faces, clothes, backgrounds, natural landscapes, text, forests, camouflaged snipers and neuron skeletons have validated the proposed method.

  1. Visual texture perception via graph-based semi-supervised learning

    Science.gov (United States)

    Zhang, Qin; Dong, Junyu; Zhong, Guoqiang

    2018-04-01

    Perceptual features, for example direction, contrast and repetitiveness, are important visual factors for humans to perceive a texture. However, psychophysical experiments are needed to quantify the scales of these perceptual features, which requires a large amount of human labor and time. This paper focuses on the task of obtaining the perceptual-feature scales of textures from a small number of textures whose perceptual scales were obtained through a rating psychophysical experiment (what we call labeled textures) together with a mass of unlabeled textures. This is the scenario that semi-supervised learning is naturally suited to. It is meaningful for texture perception research, and really helpful for perceptual texture database expansion. A graph-based semi-supervised learning method called random multi-graphs, RMG for short, is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and a kind of unsupervised deep feature extracted by a PCA-based deep network. The experimental results show that our method can achieve satisfactory effects no matter what kind of texture features are used.

  2. Extreme Compression and Modeling of Bidirectional Texture Function

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Filip, Jiří

    2007-01-01

    Roč. 29, č. 10 (2007), s. 1859-1865 ISSN 0162-8828 R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Rough texture * 3D texture * BTF * texture synthesis * texture modeling * data compression Subject RIV: BD - Theory of Information Impact factor: 3.579, year: 2007 http://doi.ieeecomputersociety.org/10.1109/TPAMI.2007.1139

  3. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna

    2015-05-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks: the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context: the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  4. State-of-the-Art in GPU-Based Large-Scale Volume Visualization

    KAUST Repository

    Beyer, Johanna; Hadwiger, Markus; Pfister, Hanspeter

    2015-01-01

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks: the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context: the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey. © 2015 The Eurographics Association and John Wiley & Sons Ltd.

  5. NMF-mGPU: non-negative matrix factorization on multi-GPU systems.

    Science.gov (United States)

    Mejía-Roa, Edgardo; Tabas-Madrid, Daniel; Setoain, Javier; García, Carlos; Tirado, Francisco; Pascual-Montano, Alberto

    2015-02-13

    In the last few years, the Non-negative Matrix Factorization (NMF) technique has gained great interest among the Bioinformatics community, since it is able to extract interpretable parts from high-dimensional datasets. However, the computing time required to process large data matrices may become impractical, even for a parallel application running on a multiprocessor cluster. In this paper, we present NMF-mGPU, an efficient and easy-to-use implementation of the NMF algorithm that takes advantage of the high computing performance delivered by Graphics Processing Units (GPUs). Driven by the ever-growing demands of the video-games industry, graphics cards usually provided in PCs and laptops have evolved from simple graphics-drawing platforms into high-performance programmable systems that can be used as coprocessors for linear-algebra operations. However, these devices may have a limited amount of on-board memory, which is not considered by other NMF implementations on GPU. NMF-mGPU is based on CUDA (Compute Unified Device Architecture), NVIDIA's framework for GPU computing. On devices with little available memory, large input matrices are blockwise transferred from the system's main memory to the GPU's memory, and processed accordingly. In addition, NMF-mGPU has been explicitly optimized for the different CUDA architectures. Finally, platforms with multiple GPUs can be synchronized through MPI (Message Passing Interface). In a four-GPU system, this implementation is about 120 times faster than a single conventional processor, and more than four times faster than a single GPU device (i.e., a super-linear speedup). Applications of GPUs in Bioinformatics are getting more and more attention due to their outstanding performance when compared to traditional processors. In addition, their relatively low price represents a highly cost-effective alternative to conventional clusters. In life sciences, this results in an excellent opportunity to facilitate the
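
    The algorithm that NMF-mGPU parallelizes can be stated compactly; below is a plain CPU NumPy sketch of the classical Lee-Seung multiplicative updates, without the blockwise host-to-GPU transfers or multi-GPU synchronization the record describes:

```python
import numpy as np

def nmf(V, rank, iters=500, rng=0):
    """Multiplicative-update NMF: factor a non-negative matrix V into
    non-negative W (n x rank) and H (rank x m) minimizing ||V - WH||_F.
    Plain CPU sketch of the algorithm NMF-mGPU runs on the GPU."""
    rng = np.random.default_rng(rng)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

    Because the updates are dense matrix products, they map directly onto GPU BLAS kernels, which is where the reported speedups come from.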

  6. A robust H.264/AVC video watermarking scheme with drift compensation.

    Science.gov (United States)

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  7. Interactions between motion and form processing in the human visual system

    Directory of Open Access Journals (Sweden)

    George Mather

    2013-05-01

    Full Text Available The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by ‘motion-streaks’ influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  8. Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications

    Science.gov (United States)

    Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi

    2002-04-01

    The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable searching results according to the different policies, the concept of penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA) and the effectiveness of such an agency system has been demonstrated.

  9. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. [Los Alamos National Lab., NM (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
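
    The per-subband scalar quantization step can be illustrated with a uniform quantizer that has a widened dead zone around zero, the general shape used in the WSQ standard; the step size and dead-zone width below are illustrative values, not the standard's subband-specific parameters:

```python
import numpy as np

def quantize(coeffs, step, zero_width=1.2):
    """Uniform scalar quantizer with a widened dead zone: coefficients
    with |c| <= step * zero_width / 2 map to bin 0; the remaining range
    is split into uniform bins of width `step` on each side."""
    c = np.asarray(coeffs, float)
    out = np.zeros_like(c, dtype=int)
    half_dz = step * zero_width / 2
    pos = c > half_dz
    neg = c < -half_dz
    out[pos] = np.floor((c[pos] - half_dz) / step).astype(int) + 1
    out[neg] = -(np.floor((-c[neg] - half_dz) / step).astype(int) + 1)
    return out
```

    The wide zero bin is what discards the many near-zero wavelet coefficients cheaply, and the resulting bin indices are then entropy coded (Huffman coding in the standard).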

  10. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    Science.gov (United States)

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.

  11. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    Science.gov (United States)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  12. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models with textures of different dimensions. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  13. GPU Linear algebra extensions for GNU/Octave

    International Nuclear Information System (INIS)

    Bosi, L B; Mariotti, M; Santocchia, A

    2012-01-01

    Octave is one of the most widely used open source tools for numerical analysis and linear algebra. Our project aims to improve Octave by introducing support for GPU computing in order to speed up some linear algebra operations. The core of our work is a C library that executes some BLAS operations concerning vector-vector, vector-matrix and matrix-matrix functions on the GPU. OpenCL functions are used to program GPU kernels, which are bound within the GNU/Octave framework. We report the project's implementation design and some preliminary results about performance.

  14. A self-adaptive feedforward rf control system for linacs

    International Nuclear Information System (INIS)

    Zhang Renshan; Ben-Zvi, I.; Xie Jialin

    1993-01-01

The design and performance of a self-adaptive feedforward rf control system are reported. The system was built for the linac of the Accelerator Test Facility (ATF) at Brookhaven National Laboratory. Variables of time along the linac macropulse, such as field or phase, are discretized and represented as vectors. Upon turn-on or after a large change in the operating point, the control system acquires the response of the system to test signal vectors and generates a linearized system response matrix. During operation an error vector is generated by comparing the linac variable vectors with a target vector. The error vector is multiplied by the inverse of the system's matrix to generate a correction vector, which is added to an operating-point vector. This control system can be used to control a klystron to produce flat rf amplitude and phase pulses, to control an rf cavity to reduce the rf field fluctuation, and to compensate the energy spread among bunches in an rf linac. Beam loading effects can be corrected and a programmed ramp can be produced. The performance of the control system has been evaluated on the control of a klystron's output as well as an rf cavity. Both amplitude and phase have been regulated simultaneously. In initial tests, the rf output from a klystron has been regulated to an amplitude fluctuation of less than ±0.3% and a phase variation of less than ±0.6 deg. The rf field of the ATF's photo-cathode microwave gun cavity has been regulated to ±5% in amplitude and simultaneously to ±1 deg in phase. Regulating just the rf field amplitude in the rf gun cavity, we have achieved amplitude fluctuation of less than ±2%. (orig.)
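The adapt-then-correct loop described in this abstract (probe with test vectors, build a linearized response matrix, multiply the error vector by its inverse) can be sketched in a few lines. The plant model, its coefficients, and the two-dimensional case are illustrative assumptions, not details from the paper:

```python
def plant(drive):
    # Hypothetical linearized system: two output samples respond to two drives.
    a, b, c, d = 2.0, 0.5, -0.3, 1.5   # unknown to the controller
    return [a * drive[0] + b * drive[1], c * drive[0] + d * drive[1]]

def measure_response_matrix(plant, dim=2, eps=1e-3):
    """Probe the plant with small test-signal vectors to build a linearized
    response matrix M[i][j] = dOut_i / dDrive_j."""
    base = plant([0.0] * dim)
    cols = []
    for j in range(dim):
        test = [0.0] * dim
        test[j] = eps
        out = plant(test)
        cols.append([(out[i] - base[i]) / eps for i in range(dim)])
    return [[cols[j][i] for j in range(dim)] for i in range(dim)]

def correction(M, error):
    """Multiply the error vector by the inverse of the 2x2 response matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [( M[1][1] * error[0] - M[0][1] * error[1]) / det,
            (-M[1][0] * error[0] + M[0][0] * error[1]) / det]

target = [1.0, 0.2]
drive = [0.0, 0.0]
M = measure_response_matrix(plant)
for _ in range(3):      # iterate: measure error, add correction to the drive
    error = [t - o for t, o in zip(target, plant(drive))]
    drive = [d + c for d, c in zip(drive, correction(M, error))]
```

For a linear plant the loop converges in one iteration; the repeated correction step is what lets the real system track drifts and operating-point changes.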

  15. Deep Learning Policy Quantization

    NARCIS (Netherlands)

    van de Wolfshaar, Jos; Wiering, Marco; Schomaker, Lambertus

    2018-01-01

    We introduce a novel type of actor-critic approach for deep reinforcement learning which is based on learning vector quantization. We replace the softmax operator of the policy with a more general and more flexible operator that is similar to the robust soft learning vector quantization algorithm.
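The core idea — replacing the softmax over linear logits with an operator based on distances to per-action prototype vectors, in the spirit of learning vector quantization — can be sketched as follows. The prototype layout and the exact distance-to-probability mapping are illustrative assumptions; the paper's operator may differ in detail:

```python
import math

def lvq_policy(state, prototypes):
    """Action probabilities from negative squared distances to per-action
    prototype vectors, used in place of a softmax over linear logits."""
    neg_d2 = [-sum((s - w) ** 2 for s, w in zip(state, proto))
              for proto in prototypes]
    m = max(neg_d2)                     # numerically stabilized softmax
    exps = [math.exp(v - m) for v in neg_d2]
    z = sum(exps)
    return [e / z for e in exps]

# Three actions, each with one prototype; the state is nearest to prototype 0.
probs = lvq_policy([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

Actions whose prototypes lie closer to the current state receive higher probability, and the prototypes (like the critic's weights) can be trained by gradient descent.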

  16. Three dimensional range geometry and texture data compression with space-filling curves.

    Science.gov (United States)

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
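The key step — mapping a quantized phase value to two 8-bit channels by walking a Hilbert space-filling curve — can be sketched with the classic iterative distance-to-coordinates algorithm. The 16-bit quantization depth and function names are illustrative, not the paper's exact parameters:

```python
def hilbert_d2xy(order, d):
    """Map a distance d along the Hilbert curve over a 2^order x 2^order grid
    to (x, y) coordinates (standard iterative algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def encode_phase(phi):
    """Quantize a normalized phase in [0, 1) to 16 bits and spread it along a
    256x256 Hilbert curve, yielding two 8-bit color channels."""
    d = int(phi * 65536) & 0xFFFF
    return hilbert_d2xy(8, d)
```

The useful property for compression is locality: consecutive curve positions map to adjacent cells, so small phase changes perturb the two channels only slightly, which is what keeps the encoding tolerant of 2D image/video compression.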

  17. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. (Los Alamos National Lab., NM (United States)); Hopper, T. (Federal Bureau of Investigation, Washington, DC (United States))

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  18. An Adaptive Dense Matching Method for Airborne Images Using Texture Information

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-01-01

Full Text Available Semi-global matching (SGM) is essentially a discrete optimization for the disparity value of each pixel, under the assumption of disparity continuity. SGM controls the influence of disparity discontinuities through a set of penalty parameters. With smaller parameters, the continuity constraint is weakened, which causes significant noise in planar and textureless areas, reflected as fluctuations on the final surface reconstruction. On the other hand, larger parameters impose too strong a constraint on continuity, which may lead to the loss of sharp features. To address this problem, this paper proposes an adaptive dense stereo matching method for airborne images using texture information. First, the texture is quantified, and under the assumption that disparity variation is directly proportional to the texture information, the adaptive parameters are gauged accordingly. Second, SGM is adopted to optimize the discrete disparities using the adaptively tuned parameters. Experimental evaluations using the ISPRS benchmark dataset and images obtained by the SWDC-5 have revealed that the proposed method significantly improves the visual quality of the point clouds.
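The parameter-adaptation idea can be illustrated with a toy sketch: quantify local texture (here, the mean absolute gradient along a scanline) and scale the SGM smoothness penalty so that low-texture regions get a stronger continuity constraint. The scaling rule and all constants are assumptions for illustration, not the paper's calibrated model:

```python
def texture_measure(row):
    """Mean absolute horizontal gradient as a simple texture score."""
    diffs = [abs(b - a) for a, b in zip(row, row[1:])]
    return sum(diffs) / len(diffs)

def adaptive_p2(row, p2_base=32.0, t0=4.0):
    """Larger smoothness penalty where texture is weak, smaller where strong
    (hypothetical scaling: P2 = P2_base * t0 / (t0 + texture))."""
    return p2_base * t0 / (t0 + texture_measure(row))

flat = [100, 100, 101, 100, 100, 101]   # nearly textureless scanline
busy = [100, 140, 90, 160, 80, 150]     # strongly textured scanline
```

With such a rule the flat region receives a large penalty (suppressing disparity noise) while the textured region keeps a small one (preserving sharp disparity edges).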

  19. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    Directory of Open Access Journals (Sweden)

    Xinghao Jiang

    2014-01-01

Full Text Available A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to keep the visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme attains excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  20. Vector Quantization of Harmonic Magnitudes in Speech Coding Applications—A Survey and New Technique

    Directory of Open Access Journals (Sweden)

    Wai C. Chu

    2004-12-01

    Full Text Available A harmonic coder extracts the harmonic components of a signal and represents them efficiently using a few parameters. The principles of harmonic coding have become quite successful and several standardized speech and audio coders are based on it. One of the key issues in harmonic coder design is in the quantization of harmonic magnitudes, where many propositions have appeared in the literature. The objective of this paper is to provide a survey of the various techniques that have appeared in the literature for vector quantization of harmonic magnitudes, with emphasis on those adopted by the major speech coding standards; these include constant magnitude approximation, partial quantization, dimension conversion, and variable-dimension vector quantization (VDVQ. In addition, a refined VDVQ technique is proposed where experimental data are provided to demonstrate its effectiveness.
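One of the techniques the survey names, dimension conversion, can be sketched simply: a variable-length vector of harmonic magnitudes (its length depends on the pitch) is resampled to a fixed dimension by linear interpolation so that an ordinary fixed-dimension vector quantizer can be applied. The function and its parameters are illustrative:

```python
def resample(mags, target_dim):
    """Linearly interpolate a variable-length magnitude vector to a fixed
    dimension (a simple form of dimension conversion before VQ)."""
    n = len(mags)
    if n == 1:
        return [mags[0]] * target_dim
    out = []
    for i in range(target_dim):
        pos = i * (n - 1) / (target_dim - 1)   # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(mags[lo] * (1 - frac) + mags[hi] * frac)
    return out

fixed = resample([1.0, 3.0, 2.0], 5)
```

The inverse mapping (resampling the quantized fixed-dimension vector back to the original harmonic count) is applied at the decoder.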

  1. Brain MR Image Restoration Using an Automatic Trilateral Filter With GPU-Based Acceleration.

    Science.gov (United States)

    Chang, Herng-Hua; Li, Cheng-Yuan; Gallogly, Audrey Haihong

    2018-02-01

Noise reduction in brain magnetic resonance (MR) images has been a challenging and demanding task. This study develops a new trilateral filter that aims to achieve robust and efficient image restoration. Extended from the bilateral filter, the proposed algorithm contains one additional intensity similarity function, which compensates for the unique characteristics of noise in brain MR images. An entropy function adaptive to intensity variations is introduced to regulate the contributions of the weighting components. To hasten the computation, parallel computing based on the graphics processing unit (GPU) strategy is explored with emphasis on memory allocations and thread distributions. To automate the filtration, image texture feature analysis associated with machine learning is investigated. Among the 98 candidate features, the sequential forward floating selection scheme is employed to acquire the optimal texture features for regularization. Subsequently, a two-stage classifier that consists of support vector machines and artificial neural networks is established to predict the filter parameters for automation. A speedup gain of 757 was reached to process an entire MR image volume of 256 × 256 × 256 pixels, which completed within 0.5 s. Automatic restoration results revealed high accuracy with an ensemble average relative error of 0.53 ± 0.85% in terms of the peak signal-to-noise ratio. This self-regulating trilateral filter outperformed many state-of-the-art noise reduction methods both qualitatively and quantitatively. We believe that this new image restoration algorithm has potential in many brain MR image processing applications that require expedition and automation.
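The filter structure described — a bilateral-style weight (spatial closeness times photometric similarity) extended with a third similarity term — can be illustrated in 1-D. Using a local median as the third reference, and the particular sigmas below, are assumptions for illustration; the paper's MR-specific term and entropy regulation are more elaborate:

```python
import math

def trilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=40.0, sigma_m=20.0):
    """1-D sketch: each weight is a product of a spatial Gaussian, a range
    Gaussian toward the center value, and a third Gaussian toward the local
    median (the hypothetical extra similarity term)."""
    out = []
    for i, center in enumerate(signal):
        local = signal[max(0, i - radius): i + radius + 1]
        local_med = sorted(local)[len(local) // 2]
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2))
                 * math.exp(-((signal[j] - local_med) ** 2) / (2 * sigma_m ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [10.0, 10.0, 60.0, 10.0, 10.0, 100.0, 100.0, 100.0]
smooth = trilateral_1d(noisy)
```

The third term down-weights samples that disagree with the neighborhood consensus, so the isolated spike at index 2 is suppressed while the step edge at index 5 is preserved.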

  2. 9th Workshop on Self-Organizing Maps

    CERN Document Server

    Príncipe, José; Zegers, Pablo

    2013-01-01

    Self-organizing maps (SOMs) were developed by Teuvo Kohonen in the early eighties. Since then more than 10,000 works have been based on SOMs. SOMs are unsupervised neural networks useful for clustering and visualization purposes. Many SOM applications have been developed in engineering and science, and other fields. This book contains refereed papers presented at the 9th Workshop on Self-Organizing Maps (WSOM 2012) held at the Universidad de Chile, Santiago, Chile, on December 12-14, 2012. The workshop brought together researchers and practitioners in the field of self-organizing systems. Among the book chapters there are excellent examples of the use of SOMs in agriculture, computer science, data visualization, health systems, economics, engineering, social sciences, text and image analysis, and time series analysis. Other chapters present the latest theoretical work on SOMs as well as Learning Vector Quantization (LVQ) methods.

  3. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.

  4. 10th Workshop on Self-Organizing Maps

    CERN Document Server

    Schleif, Frank-Michael; Kaden, Marika; Lange, Mandy

    2014-01-01

    The book collects the scientific contributions presented at the 10th Workshop on Self-Organizing Maps (WSOM 2014) held at the University of Applied Sciences Mittweida, Mittweida (Germany, Saxony), on July 2–4, 2014. Starting with the first WSOM-workshop 1997 in Helsinki this workshop focuses on newest results in the field of supervised and unsupervised vector quantization like self-organizing maps for data mining and data classification.   This 10th WSOM brought together more than 50 researchers, experts and practitioners in the beautiful small town Mittweida in Saxony (Germany) nearby the mountains Erzgebirge to discuss new developments in the field of unsupervised self-organizing vector quantization systems and learning vector quantization approaches for classification. The book contains the accepted papers of the workshop after a careful review process as well as summaries of the invited talks.   Among these book chapters there are excellent examples of the use of self-organizing maps in agriculture, ...

  5. Combining fine texture and coarse color features for color texture classification

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-11-01

    Color texture classification plays an important role in computer vision applications because texture and color are two fundamental visual features. To classify the color texture via extracting discriminative color texture features in real time, we present an approach of combining the fine texture and coarse color features for color texture classification. First, the input image is transformed from RGB to HSV color space to separate texture and color information. Second, the scale-selective completed local binary count (CLBC) algorithm is introduced to extract the fine texture feature from the V component in HSV color space. Third, both H and S components are quantized at an optimal coarse level. Furthermore, the joint histogram of H and S components is calculated, which is considered as the coarse color feature. Finally, the fine texture and coarse color features are combined as the final descriptor and the nearest subspace classifier is used for classification. Experimental results on CUReT, KTH-TIPS, and New-BarkTex databases demonstrate that the proposed method achieves state-of-the-art classification performance. Moreover, the proposed method is fast enough for real-time applications.
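The coarse color feature described above — H and S quantized to a few levels each, then accumulated into a joint histogram — can be sketched directly. The quantization levels below are illustrative, not the paper's tuned optimum:

```python
def joint_hs_histogram(h_channel, s_channel, h_bins=8, s_bins=4):
    """Normalized joint histogram of coarsely quantized hue (0-360 degrees)
    and saturation (0-1) values."""
    hist = [0] * (h_bins * s_bins)
    for h, s in zip(h_channel, s_channel):
        hi = min(int(h / 360.0 * h_bins), h_bins - 1)
        si = min(int(s * s_bins), s_bins - 1)
        hist[hi * s_bins + si] += 1
    total = float(len(h_channel))
    return [c / total for c in hist]

# Four pixels: one reddish, two similar teal pixels, one magenta-ish.
feat = joint_hs_histogram([10.0, 200.0, 200.0, 350.0], [0.1, 0.9, 0.8, 0.5])
```

The resulting vector is concatenated with the fine texture descriptor from the V channel before classification.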

  6. A Search Complexity Improvement of Vector Quantization to Immittance Spectral Frequency Coefficients in AMR-WB Speech Codec

    Directory of Open Access Journals (Sweden)

    Bing-Jhih Yao

    2016-09-01

Full Text Available An adaptive multi-rate wideband (AMR-WB) codec is a speech codec developed on the basis of an algebraic code-excited linear-prediction (ACELP) coding technique, and has the double advantage of low bit rates and high speech quality. This coding technique is widely used in modern mobile communication systems for high speech quality in handheld devices. However, a major disadvantage is that the vector quantization (VQ) of immittance spectral frequency (ISF) coefficients occupies a significant computational load in the AMR-WB encoder. Hence, this paper presents a triangular inequality elimination (TIE) algorithm combined with a dynamic mechanism and an intersection mechanism, abbreviated as the DI-TIE algorithm, to remarkably improve the complexity of ISF coefficient quantization in the AMR-WB speech codec. Both mechanisms are designed in a way that recursively enhances the performance of the TIE algorithm. At the end of this work, this proposal is experimentally validated as a superior search algorithm relative to a conventional TIE, a multiple TIE (MTIE), and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approach. With a full search algorithm as a benchmark for search load comparison, this work provides a search load reduction above 77%, a figure far beyond 36% for the TIE, 49% for the MTIE, and 68% for the EEENNS approach.
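The basic TIE principle underlying this work can be sketched in a few lines: with precomputed distances from every codeword to a reference point, a candidate c can be rejected without a full distance computation whenever |d(x, ref) - d(c, ref)| already exceeds the current best, since by the triangle inequality d(x, c) >= |d(x, ref) - d(c, ref)|. The paper's DI-TIE adds dynamic and intersection mechanisms on top of this; the code below is only the base idea with illustrative data:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tie_search(x, codebook, ref):
    """Nearest-codeword search with triangular inequality elimination.
    Returns (best index, number of full distance computations)."""
    d_ref = [dist(c, ref) for c in codebook]   # precomputable offline
    dx = dist(x, ref)
    best_i, best = 0, dist(x, codebook[0])
    checked = 1
    for i in range(1, len(codebook)):
        if abs(dx - d_ref[i]) >= best:
            continue                    # eliminated without a full distance
        checked += 1
        d = dist(x, codebook[i])
        if d < best:
            best_i, best = i, d
    return best_i, checked

codebook = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [1.0, 1.0]]
idx, checked = tie_search([0.9, 1.1], codebook, ref=[0.0, 0.0])
```

Here two of the four codewords are rejected by the inequality alone, which is the source of the search-load reduction the abstract quantifies.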

  7. A Hybrid Vector Quantization Combining a Tree Structure and a Voronoi Diagram

    Directory of Open Access Journals (Sweden)

    Yeou-Jiunn Chen

    2014-01-01

Full Text Available Multimedia data is a popular communication medium, but requires substantial storage space and network bandwidth. Vector quantization (VQ) is suitable for multimedia data applications because of its simple architecture, fast decoding ability, and high compression rate. Full-search VQ can typically be used to determine optimal codewords, but requires considerable computational time and resources. In this study, a hybrid VQ combining a tree structure and a Voronoi diagram is proposed to improve VQ efficiency. To efficiently reduce the search space, a tree structure integrated with principal component analysis is proposed, to rapidly determine an initial codeword in low-dimensional space. To increase accuracy, a Voronoi diagram is applied to precisely enlarge the search space by modeling relations between each codeword. This enables an optimal codeword to be efficiently identified by rippling an optimal neighbor from parts of neighboring Voronoi regions. The experimental results demonstrated that the proposed approach improved VQ performance, outperforming other approaches. The proposed approach also satisfies the requirements of handheld device application, namely, the use of limited memory and network bandwidth, when a suitable number of dimensions in principal component analysis is selected.

  8. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia’s GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  9. Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

    Science.gov (United States)

    Kestener, Pierre

    2017-10-01

RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) that drops the adaptive mesh refinement (AMR) features to optimize 3D uniform-grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second-order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and Parallel-NetCDF.

  10. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit-rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
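The AMBTC base coder referred to above is compact enough to sketch for a single block: threshold the pixels at the block mean into a bit plane, then transmit the bit plane together with the means of the two resulting groups. Data values here are illustrative:

```python
def ambtc_block(block):
    """Encode one block as (low mean, high mean, bit plane), the AMBTC
    representation, and return the decoded reconstruction alongside it."""
    n = len(block)
    mean = sum(block) / n
    bitplane = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bitplane) if b]
    lo = [p for p, b in zip(block, bitplane) if not b]
    hi_mean = sum(hi) / len(hi) if hi else mean
    lo_mean = sum(lo) / len(lo) if lo else mean
    decoded = [hi_mean if b else lo_mean for b in bitplane]
    return (lo_mean, hi_mean, bitplane), decoded

# A tiny 2x2 block flattened to a list: two dark and two bright pixels.
(lo, hi, bits), rec = ambtc_block([10, 12, 200, 198])
```

The reconstruction preserves the block mean and absolute first moment, which is what makes AMBTC fast yet coarse in textured regions — the weakness the adaptive bit-plane scheme above targets.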

  11. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    International Nuclear Information System (INIS)

    Zhong, Z; Zhuang, L; Gu, X; Wang, J; Chen, H; Zhen, X

    2016-01-01

Purpose: Deformable image registration (DIR) has been employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time and high-quality image segmentation, especially in large datasets of 4D-CT images. The objective of this work is to propose a new DIR algorithm, with fast computational speed and high accuracy, by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. Subsequently, the deformation vector fields (DVF) and other phases of 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on GPU, resulting in significantly increasing the computational efficiency due to its parallel computing ability. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of lung are well preserved, and the tumor position is well captured, i.e., 3D distance error is 1.14 mm. Compared to the previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based mesh and GPU-based parallelism, which demonstrates the capability to compute both high-quality image and motion results, with significant improvement on the computational speed.

  12. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Z; Zhuang, L [Wayne State University, Detroit, MI (United States); Gu, X; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Chen, H; Zhen, X [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

Purpose: Deformable image registration (DIR) has been employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time and high-quality image segmentation, especially in large datasets of 4D-CT images. The objective of this work is to propose a new DIR algorithm, with fast computational speed and high accuracy, by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. Subsequently, the deformation vector fields (DVF) and other phases of 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on GPU, resulting in significantly increasing the computational efficiency due to its parallel computing ability. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of lung are well preserved, and the tumor position is well captured, i.e., 3D distance error is 1.14 mm. Compared to the previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based mesh and GPU-based parallelism, which demonstrates the capability to compute both high-quality image and motion results, with significant improvement on the computational speed.

  13. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts, while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new `pure......' postprocessing compares favorable to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also on P and B-frames....

  14. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  15. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    Directory of Open Access Journals (Sweden)

    Hailong Xu

    2016-03-01

Full Text Available Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real-time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched method in programming is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.

  16. Self-Adaptive Systems for Machine Intelligence

    CERN Document Server

    He, Haibo

    2011-01-01

    This book will advance the understanding and application of self-adaptive intelligent systems; therefore it will potentially benefit the long-term goal of replicating certain levels of brain-like intelligence in complex and networked engineering systems. It will provide new approaches for adaptive systems within uncertain environments. This will provide an opportunity to evaluate the strengths and weaknesses of the current state-of-the-art of knowledge, give rise to new research directions, and educate future professionals in this domain. Self-adaptive intelligent systems have wide application

  17. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, computing resources of both GPU-based and CPU-based can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  18. Implementation of self-organizing neural networks for visuo-motor control of an industrial robot.

    Science.gov (United States)

    Walter, J A; Schulten, K I

    1993-01-01

    The implementation of two neural network algorithms for visuo-motor control of an industrial robot (Puma 562) is reported. The first algorithm uses a vector quantization technique, the 'neural-gas' network, together with an error correction scheme based on a Widrow-Hoff-type learning rule. The second algorithm employs an extended self-organizing feature map algorithm. Based on visual information provided by two cameras, the robot learns to position its end effector without an external teacher. Within only 3000 training steps, the robot-camera system is capable of reducing the positioning error of the robot's end effector to approximately 0.1% of the linear dimension of the work space. By employing adaptive feedback the robot succeeds in compensating not only slow calibration drifts, but also sudden changes in its geometry. Hardware aspects of the robot-camera system are discussed.
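The 'neural-gas' stage can be illustrated with a minimal sketch: every codebook vector moves toward each input by an amount that decays with its distance rank. The step-size and neighborhood schedules below are illustrative assumptions, not the parameters used for the Puma 562 setup.

```python
import numpy as np

def neural_gas_step(codebook, x, eps, lam):
    """One 'neural-gas' adaptation step: every codebook vector moves toward
    the input x, weighted by its distance rank (0 = closest)."""
    dists = np.linalg.norm(codebook - x, axis=1)
    ranks = np.argsort(np.argsort(dists))        # distance rank of each vector
    weights = eps * np.exp(-ranks / lam)         # rank-decaying step sizes
    return codebook + weights[:, None] * (x - codebook)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 2))               # random initial codebook
data = rng.normal(loc=3.0, size=(500, 2))        # input cluster centered at (3, 3)
for i, x in enumerate(data):
    t = i / len(data)                            # anneal eps and lambda over training
    codebook = neural_gas_step(codebook, x,
                               eps=0.5 * (0.01 / 0.5) ** t,
                               lam=2.0 * (0.1 / 2.0) ** t)
# after training the codebook has migrated into the data cluster
print(np.round(codebook.mean(axis=0), 1))
```

The double `argsort` converts raw distances into ranks, which is what distinguishes neural-gas from plain winner-take-all vector quantization.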

  19. A hybrid video compression based on zerotree wavelet structure

    International Nuclear Information System (INIS)

    Kilic, Ilker; Yilmaz, Reyat

    2009-01-01

    A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. The overlapping block motion compensation (OBMC) is combined with the discrete wavelet transform, which is followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)

  20. System Identification with Quantized Observations

    CERN Document Server

    Wang, Le Yi; Zhang, Jifeng; Zhao, Yanlong

    2010-01-01

    This book presents recently developed methodologies that utilize quantized information in system identification and explores their potential in extending control capabilities for systems with limited sensor information or networked systems. The results of these methodologies can be applied to signal processing and control design of communication and computer networks, sensor networks, mobile agents, coordinated data fusion, remote sensing, telemedicine, and other fields in which noise-corrupted quantized data need to be processed. Providing a comprehensive coverage of quantized identification,

  1. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    Science.gov (United States)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. 
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that
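The alternating shave/set idea can be sketched for IEEE single precision as follows. The bit masking is a minimal illustration (no NaN/Inf handling and no decimal-digit-to-bit conversion), not the NCO implementation; `keep_bits` is an assumed parameter name.

```python
import numpy as np

def bit_groom(a, keep_bits=12):
    """Alternately shave (zero) and set (one) the trailing mantissa bits of
    consecutive float32 values: a two-sided variant of bit shaving."""
    bits = a.astype(np.float32).view(np.uint32)
    # mask covering the (23 - keep_bits) least significant mantissa bits
    mask_lo = np.uint32((1 << (23 - keep_bits)) - 1)
    out = bits.copy()
    out[0::2] &= ~mask_lo        # shave: quantize toward zero
    out[1::2] |= mask_lo         # set: quantize away from zero
    return out.view(np.float32)

x = np.linspace(0.0, 1.0, 8, dtype=np.float32)
g = bit_groom(x, keep_bits=12)
print(np.max(np.abs(g - x)))     # bounded by the retained precision
```

Because shaving and setting alternate between consecutive elements, the quantization errors cancel in the mean, which is exactly the low-bias property the abstract describes; the resulting runs of identical trailing bits are what DEFLATE then compresses.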

  2. High-throughput protein crystallization on the World Community Grid and the GPU

    International Nuclear Information System (INIS)

    Kotseruba, Yulia; Cumbaa, Christian A; Jurisica, Igor

    2012-01-01

    We have developed CPU and GPU versions of an automated image analysis and classification system for protein crystallization trial images from the Hauptman-Woodward Institute's High-Throughput Screening lab. The analysis step computes 12,375 numerical features per image. Using these features, we have trained a classifier that distinguishes 11 different crystallization outcomes, recognizing 80% of all crystals, 94% of clear drops, and 94% of precipitates. The computing requirements for this analysis system are large. The complete HWI archive of 120 million images is being processed by donated CPU cycles on World Community Grid, with a GPU phase launching in early 2012. The main computational burden of the analysis is the measure of textural (GLCM) features within the image at multiple neighbourhoods, distances, and at multiple greyscale intensity resolutions. CPU runtime averages 4,092 seconds (single-threaded) on an Intel Xeon, but only 65 seconds on an NVIDIA Tesla C2050. We report on the process of adapting the C++ code to OpenCL, optimized for multiple platforms.
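A minimal sketch of the GLCM computation that dominates the runtime: co-occurrence counts for one pixel offset, from which Haralick-style statistics such as contrast and homogeneity follow. The offset, level count, and random image are illustrative assumptions, not the 12,375-feature pipeline.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy),
    returned as a joint probability matrix over quantized gray levels."""
    q = (img.astype(np.float64) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count level co-occurrence
    return m / m.sum()

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
p = glcm(img)
# two classic GLCM texture statistics
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
homogeneity = sum(p[i, j] / (1 + abs(i - j)) for i in range(8) for j in range(8))
print(round(contrast, 2), round(homogeneity, 2))
```

Each (neighbourhood, distance, gray-resolution) combination requires its own matrix like this one, which is why the workload parallelizes so well on a GPU.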

  3. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    Science.gov (United States)

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.

  4. 3D Mesh Compression and Transmission for Mobile Robotic Applications

    Directory of Open Access Journals (Sweden)

    Bailin Yang

    2016-01-01

    Full Text Available Mobile robots are useful for environment exploration and rescue operations. In such applications, it is crucial to accurately analyse and represent an environment, providing appropriate inputs for motion planning in order to support robot navigation and operations. 2D mapping methods are simple but cannot handle multilevel or multistory environments. To address this problem, 3D mapping methods generate structural 3D representations of the robot operating environment and its objects by 3D mesh reconstruction. However, they face the challenge of efficiently transmitting those 3D representations to system modules for 3D mapping, motion planning, and robot operation visualization. This paper proposes a quality-driven mesh compression and transmission method to address this. Our method is efficient, as it compresses a mesh by quantizing its transformed vertices without the need to spend time constructing an a-priori structure over the mesh. A visual distortion function is developed to govern the level of quantization, allowing mesh transmission to be controlled under different network conditions or time constraints. Our experiments demonstrate how the visual quality of a mesh can be manipulated by the visual distortion function.
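The core idea of compressing a mesh by quantizing its vertices without building an a-priori structure can be sketched as uniform quantization inside the bounding box. The 10-bit depth is an illustrative choice; the paper's visual distortion function, which would select this level adaptively, is not modeled here.

```python
import numpy as np

def quantize_vertices(verts, bits=10):
    """Uniformly quantize vertex coordinates to `bits` bits per axis
    inside the mesh bounding box; returns integer codes plus the box."""
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    scale = (2 ** bits - 1) / np.where(hi > lo, hi - lo, 1.0)
    codes = np.round((verts - lo) * scale).astype(np.uint16)
    return codes, lo, scale

def dequantize_vertices(codes, lo, scale):
    """Recover approximate vertex positions from the integer codes."""
    return codes / scale + lo

verts = np.random.default_rng(2).uniform(-1.0, 1.0, size=(1000, 3))
codes, lo, scale = quantize_vertices(verts, bits=10)
recon = dequantize_vertices(codes, lo, scale)
print(np.max(np.abs(recon - verts)))   # at most half a quantization step per axis
```

Transmitting 16-bit codes instead of 32- or 64-bit floats is where the bandwidth saving comes from; raising or lowering `bits` is the knob a distortion function would drive under changing network conditions.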

  5. Quantization rules for strongly chaotic systems

    International Nuclear Information System (INIS)

    Aurich, R.; Bolte, J.

    1992-09-01

    We discuss the quantization of strongly chaotic systems and apply several quantization rules to a model system given by the unconstrained motion of a particle on a compact surface of constant negative Gaussian curvature. We study the periodic-orbit theory for distinct symmetry classes corresponding to a parity operation which is always present when such a surface has genus two. Recently, several quantization rules based on periodic orbit theory have been introduced. We compare quantizations using the dynamical zeta function Z(s) with the quantization condition cos(π N(E)) = 0, where a periodic-orbit expression for the spectral staircase N(E) is used. A general discussion of the efficiency of periodic-orbit quantization then allows us to compare the different methods. The system dependence of the efficiency, which is determined by the topological entropy τ and the mean level density d̄(E), is emphasized. (orig.)

  6. Parametric recursive system identification and self-adaptive modeling of the human energy metabolism for adaptive control of fat weight.

    Science.gov (United States)

    Őri, Zsolt P

    2017-05-01

    A mathematical model has been developed to facilitate indirect measurements of difficult to measure variables of the human energy metabolism on a daily basis. The model performs recursive system identification of the parameters of the metabolic model of the human energy metabolism using the law of conservation of energy and principle of indirect calorimetry. Self-adaptive models of the utilized energy intake prediction, macronutrient oxidation rates, and daily body composition changes were created utilizing Kalman filter and the nominal trajectory methods. The accuracy of the models was tested in a simulation study utilizing data from the Minnesota starvation and overfeeding study. With biweekly macronutrient intake measurements, the average prediction error of the utilized carbohydrate intake was -23.2 ± 53.8 kcal/day, fat intake was 11.0 ± 72.3 kcal/day, and protein was 3.7 ± 16.3 kcal/day. The fat and fat-free mass changes were estimated with an error of 0.44 ± 1.16 g/day for fat and -2.6 ± 64.98 g/day for fat-free mass. The daily metabolized macronutrient energy intake and/or daily macronutrient oxidation rate and the daily body composition change from directly measured serial data are optimally predicted with a self-adaptive model with Kalman filter that uses recursive system identification.
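A scalar random-walk Kalman filter conveys the recursive-estimation idea in miniature: predict the drifting state, then correct it with each noisy daily measurement. The state, noise variances, and synthetic data below are illustrative assumptions, not the paper's metabolic model.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: recursively estimates a slowly
    drifting state from noisy measurements (q = process variance,
    r = measurement variance)."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                   # predict: the state drifts with variance q
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the measurement innovation
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
true_state = np.cumsum(rng.normal(0, 0.02, 200)) + 5.0   # slow drift around 5
z = true_state + rng.normal(0, 0.7, 200)                 # noisy daily observations
est = kalman_1d(z, x0=float(z[0]))
print(round(float(np.abs(est[-50:] - true_state[-50:]).mean()), 2))
```

The same predict/correct recursion, with a vector state for body composition and macronutrient terms, is what makes the daily self-adaptive updates cheap enough to run on serial measurement data.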

  7. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that achieves relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. It is demonstrated by experimental results that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  8. Completely quantized collapse and consequences

    International Nuclear Information System (INIS)

    Pearle, Philip

    2005-01-01

    Promotion of quantum theory from a theory of measurement to a theory of reality requires an unambiguous specification of the ensemble of realizable states (and each state's probability of realization). Although not yet achieved within the framework of standard quantum theory, it has been achieved within the framework of the continuous spontaneous localization (CSL) wave-function collapse model. In CSL, a classical random field w(x,t) interacts with quantum particles. The state vector corresponding to each w(x,t) is a realizable state. In this paper, I consider a previously presented model, which is predictively equivalent to CSL. In this completely quantized collapse (CQC) model, the classical random field is quantized. It is represented by the operator W(x,t) which satisfies [W(x,t), W(x′,t′)] = 0. The ensemble of realizable states is described by a single state vector, the 'ensemble vector'. Each superposed state which comprises the ensemble vector at time t is the direct product of an eigenstate of W(x,t′), for all x and for 0 ≤ t′ ≤ t, and the CSL state corresponding to that eigenvalue. These states never interfere (they satisfy a superselection rule at any time), they only branch, so the ensemble vector may be considered to be, as Schroedinger put it, a 'catalog' of the realizable states. In this context, many different interpretations (e.g., many worlds, environmental decoherence, consistent histories, modal interpretation) may be satisfactorily applied. Using this description, a long-standing problem is resolved: where the energy that particles gain due to the narrowing of their wave packets by the collapse mechanism comes from. It is shown how to define the energy of the random field and its energy of interaction with particles so that total energy is conserved for the ensemble of realizable states. As a by-product, since the random-field energy spectrum is unbounded, its canonical conjugate, a self-adjoint time operator, can be discussed. Finally, CSL

  9. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  10. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    Science.gov (United States)

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.

  11. Quantized Self-Assembly of Discotic Rings in a Liquid Crystal Confined in Nanopores

    Science.gov (United States)

    Sentker, Kathrin; Zantop, Arne W.; Lippmann, Milena; Hofmann, Tommy; Seeck, Oliver H.; Kityk, Andriy V.; Yildirim, Arda; Schönhals, Andreas; Mazza, Marco G.; Huber, Patrick

    2018-02-01

    Disklike molecules with aromatic cores spontaneously stack up in linear columns with high, one-dimensional charge carrier mobilities along the columnar axes, making them prominent model systems for functional, self-organized matter. We show by high-resolution optical birefringence and synchrotron-based x-ray diffraction that confining a thermotropic discotic liquid crystal in cylindrical nanopores induces a quantized formation of annular layers consisting of concentric circular bent columns, unknown in the bulk state. Starting from the walls this ring self-assembly propagates layer by layer towards the pore center in the supercooled domain of the bulk isotropic-columnar transition, and thus allows single, nanosized rings to be reversibly switched on and off through small temperature variations. By establishing a Gibbs free energy phase diagram we trace the phase transition quantization to the discreteness of the layers' excess bend deformation energies in comparison to the thermal energy, even for this near room-temperature system. Monte Carlo simulations yielding spatially resolved nematic order parameters, density maps, and bond-orientational order parameters corroborate the universality and robustness of the confinement-induced columnar ring formation as well as its quantized nature.

  12. Effects of Visual Food Texture on Taste Perception

    Directory of Open Access Journals (Sweden)

    Katsunori Okajima

    2011-10-01

    Full Text Available Food color affects taste perception. However, the possible effects of the visual texture of a foodstuff on taste and flavor, without associated changes to color, are currently unknown. We conducted a series of experiments designed to investigate how the visual texture and appearance of food influence its perceived taste and flavor by developing an Augmented Reality system. Participants observed a video of tomato ketchup as a food stimulus on a white dish placed behind a flat LC-display on which was mounted a video camera. The luminance distribution of the ketchup in the dynamic video was continuously and quantitatively modified by tracking specified colors in real-time. We changed the skewness of the luminance histogram of each frame in the video while keeping the xy-chromaticity values intact. Participants watched themselves dip a spoon into the ketchup on the video feed (which could be altered), but then ate it with their eyes closed. Before and after tasting the ketchup, they reported the perceived consistency (a liquid-to-solid continuum) of how the food looked and felt, and how tasty it looked or felt. The experimental results suggest that visual texture, independent of color, affects the taste and flavor as well as the appearance of foods.

  13. Three-State Locally Adaptive Texture Preserving Filter for Radar and Optical Image Processing

    Directory of Open Access Journals (Sweden)

    Jaakko T. Astola

    2005-05-01

    Full Text Available Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture-preserving properties of noise attenuation methods. This stimulates solving the following tasks: (1) to analyze the texture preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper deals with examining the texture feature preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. The locally adaptive three-state schemes are proposed, for which texture is considered as a particular class. For “detection” of texture regions, several classifiers are proposed and analyzed. As shown, an appropriate trade-off of the designed filter properties is provided. This is demonstrated quantitatively for artificial test images and is confirmed visually for real-life images.

  14. Human visual system-based color image steganography using the contourlet transform

    Science.gov (United States)

    Abdul, W.; Carré, P.; Gaborit, P.

    2010-01-01

    We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the force of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, which matters because steganographic schemes must be undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the steganographic scheme with respect to the color perception of the HVS is done using standard methods such as the structural similarity (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.

  15. Orientation dependent slip and twinning during compression and tension of strongly textured magnesium AZ31 alloy

    Energy Technology Data Exchange (ETDEWEB)

    Al-Samman, T., E-mail: al-samman@imm.rwth-aachen.de [Institut fuer Metallkunde und Metallphysik, RWTH Aachen, Kopernikusstr. 14, D-52064 Aachen (Germany); Li, X. [Institut fuer Metallkunde und Metallphysik, RWTH Aachen, Kopernikusstr. 14, D-52064 Aachen (Germany); Chowdhury, S. Ghosh [CSIR National Metallurgical Laboratory, MST Division, Jamshedpur 831007 (India)

    2010-06-15

    Over recent years there have been a remarkable number of studies dealing with compression of magnesium. A literature search, however, shows noticeably fewer papers concerned with tension and very few papers comparing both modes systematically in one study. The current investigation reports the anisotropic deformation behavior and concomitant texture and microstructure evolution investigated in uniaxial tension and compression tests in two sample directions performed on an extruded commercial magnesium alloy AZ31 at different Z conditions. For specimens with the loading direction parallel to the extrusion axis, the tension-compression strength anisotropy was pronounced at high Z conditions. Loading at 45° from the extrusion axis yielded a tension-compression strength behavior that was close to isotropic. During tensile loading along the extrusion direction the extrusion texture resists twinning and favors prismatic slip (contrary to compression). This renders the shape change maximum in the basal plane and equal to zero along the c-axis, so the orientation of individual grains remained virtually intact during all tension tests at different Z conditions. For the other investigated sample direction, straining was accommodated along the c-axis, which was associated with a lattice rotation, and thus a change of crystal orientation. Uniaxial compression at a low Z condition (400 °C/10⁻⁴ s⁻¹) yielded a desired texture degeneration, which was explained on the basis of a more homogeneous partitioning of slip systems that reduces anisotropy and enhanced dynamic recrystallization (DRX), which counteracts the strong deformation texture. The critical strains for the nucleation of DRX in tension-tested specimens at the highest investigated Z condition (200 °C/10⁻² s⁻¹) were found to range between 4% and 5.6%.

  16. Dynamic Self-Adaptive Reliability Control for Electric-Hydraulic Systems

    Directory of Open Access Journals (Sweden)

    Yi Wan

    2015-02-01

    Full Text Available High-speed electric-hydraulic proportional control is a new development of the hydraulic control technique, offering high reliability, low cost, efficient energy use, and easy maintenance; it is widely used in industrial manufacturing and production. However, some challenges remain unresolved, most notably the high stability and real-time requirements that classical control algorithms struggle to meet because of the system's highly nonlinear characteristics. In this paper we propose a dynamic self-adaptive mixed control method based on the least squares support vector machine (LSSVM) and the genetic algorithm for high-speed electric-hydraulic proportional control systems: LSSVM is used to identify and adjust the nonlinear electric-hydraulic proportional system online, the genetic algorithm is used to optimize the control law of the controlled system, and dynamic self-adaptive internal model control and predictive control are implemented using the mixed intelligent method. The internal model and the inverse control model are adjusted online together. At the same time, a time-dependent Hankel matrix is constructed based on sample data, so that a finite-dimensional solution can be optimized over a finite-dimensional space. The results of simulation experiments show that the dynamic characteristics are greatly improved by the mixed intelligent control strategy, and good tracking and high stability are achieved under high-frequency response conditions.

  17. Self-contained anti-static adapter for compressed gas dust blowing devices

    International Nuclear Information System (INIS)

    Schwartz, L.H.; Miller, S.W.; Severud, C.N. Jr.

    1984-01-01

    An anti-static adapter enhances the operation of compressed gas dust blowing devices by allowing the safe use of a radioactive source to ionize a gas stream. The adapter may be used and handled safely without special precautions on the part of the operator.

  18. An Energy Efficient Cognitive Radio System with Quantized Soft Sensing and Duration Analysis

    KAUST Repository

    Alabbasi, Abdulrahman

    2015-03-09

    In this paper, an energy efficient cognitive radio system is proposed. The proposed design optimizes the secondary user's transmission power and the sensing duration, combined with soft-sensing information, to minimize the energy per goodbit. Due to the non-convex nature of the problem, we prove its pseudo-convexity to guarantee the optimal solution. Furthermore, a quantization scheme that discretizes the soft-sensing information is proposed and analyzed to reduce the overhead of the continuously adapted power. Numerical results show that our proposed system outperforms the benchmark systems. The impact of the quantization levels and other system parameters is evaluated in the numerical results.

  19. Face recognition algorithm using extended vector quantization histogram features.

    Science.gov (United States)

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
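The VQ-histogram feature itself (before the MSF extension) can be sketched as: train a codebook on image patches, then histogram the nearest-codevector indices. Patch size, codebook size, and the plain k-means trainer are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kmeans(data, k=8, iters=20, seed=0):
    """Plain k-means, used here as a VQ codebook trainer."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def vq_histogram(img, centers, patch=4):
    """Facial feature: normalized histogram of nearest-codevector indices
    over non-overlapping patches of the image."""
    h, w = img.shape
    patches = np.array([img[y:y + patch, x:x + patch].ravel()
                        for y in range(0, h - patch + 1, patch)
                        for x in range(0, w - patch + 1, patch)], dtype=float)
    labels = np.argmin(((patches[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
train = np.array([img[y:y + 4, x:x + 4].ravel()
                  for y in range(0, 29, 4) for x in range(0, 29, 4)])
codebook = kmeans(train, k=8)
feat = vq_histogram(img, codebook)
print(feat.shape)
```

The histogram discards patch positions entirely, which is exactly the loss of spatial structure that motivates the Markov stationary feature extension in the abstract.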

  20. A dynamic counterpart of Lamb vector in viscous compressible aerodynamics

    International Nuclear Information System (INIS)

    Liu, L Q; Wu, J Z; Shi, Y P; Zhu, J Y

    2014-01-01

    The Lamb vector is known to play a key role in incompressible fluid dynamics and vortex dynamics. In particular, in low-speed steady aerodynamics it is solely responsible for the total force acting on a moving body, known as the vortex force, with the classic two-dimensional (exact) Kutta–Joukowski theorem and three-dimensional (linearized) lifting-line theory as the most famous special applications. In this paper we identify an innovative dynamic counterpart of the Lamb vector in viscous compressible aerodynamics, which we call the compressible Lamb vector. Mathematically, we present a theorem on the dynamic far-field decay law of the vorticity and dilatation fields, and thereby prove that the generalized Lamb vector enjoys exactly the same integral properties as the Lamb vector does in incompressible flow, and hence the vortex-force theory can be generalized to compressible flow with exactly the same general formulation. Moreover, for steady flow of polytropic gas, we show that physically the force exerted on a moving body by the gas consists of a transverse force produced by the original Lamb vector and a new longitudinal force that reflects the effects of compression and irreversible thermodynamics. (paper)

  1. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. Three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, it is the first report on embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility and its superior performance in terms of noise and occlusion robustness.

  2. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is now weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image
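The basic local binary pattern coding that ELBP builds on can be sketched as follows; this is the standard 8-neighbor operator on a whole image, not the enhanced descriptor, the per-region PCA, or the variance-based weighting described above.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbor local binary pattern: each interior pixel is coded
    by which of its neighbors are >= the center, then codes are histogrammed."""
    c = img[1:-1, 1:-1]                                  # interior (center) pixels
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]         # clockwise neighbors
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]           # shifted neighbor view
        code |= (nb >= c).astype(np.uint8) << bit        # one bit per neighbor
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(5).integers(0, 256, size=(16, 16))
h = lbp_histogram(img)
print(h.shape)
```

Because each code depends only on sign comparisons against the center pixel, the descriptor is invariant to monotonic illumination changes, which is the property the abstract exploits for varying lighting conditions.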

  3. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
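    The transform-then-quantize structure of a WVQ-style codec can be illustrated with a one-level 2D Haar transform and a per-subband uniform scalar quantizer. This is only a schematic sketch: the actual algorithm uses a general DWT and assigns vector-quantizer (not scalar) parameters to each subband.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform of an array with even dimensions.
    Returns the four subbands (LL, LH, HL, HH)."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row details
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0] * 2, ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0], a.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = a + d, a - d
    return x

def quantize_subband(band, step):
    """Uniform scalar quantization of one subband; a scalar stand-in
    for the per-subband vector quantizers described above."""
    return np.round(band / step) * step
```

    Assigning a larger `step` to high-frequency subbands is the scalar analogue of the bit-allocation problem the WVQ optimization procedure solves.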

  5. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must be designed that minimizes the difference between the system outputs before and after the quantizer is inserted. With a view to the broadband operation and robustness of networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer that takes the switching speed into account. Using a matrix uncertainty approach from sampled-data control, we clarify that both the temporal and the spatial resolution constraints can be considered simultaneously in analysis and synthesis. Finally, for slow switching, we compare the proposed and the existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.
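    As a toy illustration of why a dynamic (state-carrying) quantizer can outperform a static one, the sketch below implements first-order error feedback in discrete time. The paper's continuous-time design with switching-speed constraints is, of course, far more involved; this only shows the basic mechanism of feeding the quantizer state back to keep the pre- and post-quantizer outputs close.

```python
import math

def dynamic_quantize(u, step):
    """First-order error-feedback quantizer (a discrete-time toy model
    of dynamic quantization; names and structure are illustrative).

    The quantizer carries one state, the accumulated quantization
    error, and feeds it back into the input, so the running sum of the
    output tracks the running sum of the input to within step/2.
    """
    e = 0.0
    v = []
    for uk in u:
        target = uk + e                             # compensate past error
        q = step * math.floor(target / step + 0.5)  # static uniform quantizer
        e = target - q                              # state update
        v.append(q)
    return v
```

    A memoryless quantizer applied to the same constant input 0.3 with step 1 would output all zeros; the error-feedback version emits occasional ones so the accumulated output error stays bounded.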

  6. New techniques for the scientific visualization of three-dimensional multi-variate and vector fields

    Energy Technology Data Exchange (ETDEWEB)

    Crawfis, Roger A. [Univ. of California, Davis, CA (United States)

    1995-10-01

    Volume rendering allows us to represent a density cloud with ideal properties (single scattering, no self-shadowing, etc.). Scientific visualization utilizes this technique by mapping an abstract variable or property in a computer simulation to a synthetic density cloud. This thesis extends volume rendering from its limitation of isotropic density clouds to anisotropic and/or noisy density clouds. Design aspects of these techniques are discussed that aid in the comprehension of scientific information. Anisotropic volume rendering is used to represent vector based quantities in scientific visualization. Velocity and vorticity in a fluid flow, electric and magnetic waves in an electromagnetic simulation, and blood flow within the body are examples of vector based information within a computer simulation or gathered from instrumentation. Understanding these fields can be crucial to understanding the overall physics or physiology. Three techniques for representing three-dimensional vector fields are presented: Line Bundles, Textured Splats and Hair Splats. These techniques are aimed at providing a high-level (qualitative) overview of the flows, offering the user a substantial amount of information with a single image or animation. Non-homogeneous volume rendering is used to represent multiple variables. Computer simulations can typically have over thirty variables, which describe properties whose understanding is useful to the scientist. Trying to understand each of these separately can be time consuming. Trying to understand any cause and effect relationships between different variables can be impossible. NoiseSplats is introduced to represent two or more properties in a single volume rendering of the data. This technique is also aimed at providing a qualitative overview of the flows.

  7. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
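    The scalar quantization step in a wavelet/scalar-quantization codec can be illustrated with a generic dead-zone uniform quantizer. Note that the bin widths and dead-zone sizes below are illustrative only, not the ones mandated by the FBI WSQ specification.

```python
def deadzone_quantize(x, step, z):
    """Uniform scalar quantizer with a central dead zone of width z,
    as used (schematically) in wavelet/scalar-quantization codecs:
    small coefficients inside the dead zone map to index 0, which is
    what makes subband coefficients cheap to entropy-code."""
    if abs(x) <= z / 2.0:
        return 0
    sign = 1 if x > 0 else -1
    return sign * (int((abs(x) - z / 2.0) // step) + 1)

def deadzone_dequantize(idx, step, z):
    """Midpoint reconstruction for the dead-zone quantizer."""
    if idx == 0:
        return 0.0
    sign = 1 if idx > 0 else -1
    return sign * (z / 2.0 + (abs(idx) - 0.5) * step)
```

    Widening the dead zone zeroes more near-zero wavelet coefficients, trading a small amount of reconstruction error for a large gain in compressibility.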

  8. Spectral analysis for systems of atoms and molecules coupled to the quantized radiation field

    International Nuclear Information System (INIS)

    Bach, V.; Sigal, I.M.

    1999-01-01

    We consider systems of static nuclei and electrons - atoms and molecules - coupled to the quantized radiation field. The interactions between electrons and the soft modes of the quantized electromagnetic field are described by minimal coupling, p→p-eA(x), where A(x) is the electromagnetic vector potential with an ultraviolet cutoff. If the interactions between the electrons and the quantized radiation field are turned off, the atom or molecule is assumed to have at least one bound state. We prove that, for sufficiently small values of the fine structure constant α, the interacting system has a ground state corresponding to the bottom of its energy spectrum. For an atom, we prove that its excited states above the ground state turn into metastable states whose lifetimes we estimate. Furthermore the energy spectrum is absolutely continuous, except, perhaps, in a small interval above the ground state energy and around the threshold energies of the atom or molecule. (orig.)

  9. DySOA : Making service systems self-adaptive

    NARCIS (Netherlands)

    Siljee, J; Bosloper, [No Value; Nijhuis, J; Hammer, D; Benatallah, B; Casati, F; Traverso, P

    2005-01-01

    Service-centric systems exist in a very dynamic environment. This requires these systems to adapt at runtime in order to keep fulfilling their QoS. In order to create self-adaptive service systems, developers should not only design the service architecture, but also need to design the

  10. Cartographic continuum rendering based on color and texture interpolation to enhance photo-realism perception

    Science.gov (United States)

    Hoarau, Charlotte; Christophe, Sidonie

    2017-05-01

    Graphic interfaces of geoportals allow visualizing and overlaying various (visually) heterogeneous geographical data, often by image blending: vector data, maps, aerial imagery, Digital Terrain Models, etc. Map design and geo-visualization may benefit from methods and tools to hybridize, i.e., visually integrate, heterogeneous geographical data and cartographic representations. In this paper, we aim at designing continuous hybrid visualizations between ortho-imagery and symbolized vector data, in order to control a particular visual property, i.e., the photo-realism perception. The natural appearance (colors, textures) and various texture effects are used to control the photo-realism level of the visualization: color and texture interpolation blocks have been developed. We present a global design method that allows the designer to manipulate the behavior of those interpolation blocks on each type of geographical layer, in various ways, in order to provide various cartographic continua.
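    At its simplest, a color interpolation block is a per-pixel linear blend between two co-registered layers. The sketch below uses a single global alpha, unlike the per-layer, per-block behaviors described above, but shows the basic mechanism driving the photo-realism continuum.

```python
import numpy as np

def blend(ortho, carto, alpha):
    """Linear color interpolation between an ortho-image and a
    symbolized cartographic layer (both HxWx3 float arrays).

    alpha in [0, 1] drives the photo-realism level: 0 yields the pure
    cartographic rendering, 1 the pure ortho-imagery. A minimal
    stand-in for the paper's interpolation blocks."""
    return (1.0 - alpha) * carto + alpha * ortho
```

    Animating alpha from 0 to 1 produces the kind of continuous cartographic transition the paper designs, before any texture effects are layered on.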

  11. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Directory of Open Access Journals (Sweden)

    Ran Li

    2016-01-01

    Full Text Available In the framework of block Compressed Sensing (CS), the reconstruction algorithm based on the Smoothed Projected Landweber (SPL) iteration can achieve better rate-distortion performance with a low computational complexity, especially when using Principal Component Analysis (PCA) to perform the adaptive hard-thresholding shrinkage. However, neglecting the stationary local structural characteristics of the image while learning the PCA matrix degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of patches. Then, we perform the PCA to learn the sparse representation basis corresponding to each granule. Finally, the hard-thresholding shrinkage is employed to remove the noise in patches. The patches in a granule share a stationary local structural characteristic, so that our method can effectively improve the performance of the hard-thresholding shrinkage. Experimental results indicate that the image reconstructed by the proposed algorithm has better objective quality than several traditional ones. The edge and texture details in the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low computational complexity of reconstruction.
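    The hard-thresholding shrinkage in a learned PCA basis can be sketched as follows. This minimal version learns one basis from all patches at once, whereas the method above learns one basis per granule; the shrinkage step itself is the same.

```python
import numpy as np

def pca_hard_threshold(patches, tau):
    """Hard-thresholding shrinkage of image patches in a PCA basis.

    patches: (n, d) matrix, one vectorized patch per row. The PCA
    basis is learned from the patches themselves; transform
    coefficients with magnitude below tau are zeroed, which is the
    denoising step used inside SPL-style reconstructions (a sketch,
    without the per-granule grouping described above).
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # principal axes from the covariance eigendecomposition
    cov = centered.T @ centered / max(len(patches) - 1, 1)
    _, basis = np.linalg.eigh(cov)
    coeffs = centered @ basis
    coeffs[np.abs(coeffs) < tau] = 0.0   # hard thresholding
    return coeffs @ basis.T + mean
```

    With tau = 0 the operation is the identity; as tau grows, patches are pulled toward the subspace spanned by their dominant principal components, which is what removes noise while preserving shared structure.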

  12. Perceptual asymmetry in texture perception.

    Science.gov (United States)

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the roles of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.

  13. Structure Sensitive Hashing With Adaptive Product Quantization.

    Science.gov (United States)

    Liu, Xianglong; Du, Bowen; Deng, Cheng; Liu, Ming; Lang, Bo

    2016-10-01

    Hashing has been proved an attractive solution to approximate nearest neighbor search, owing to its theoretical guarantees and computational efficiency. Though most prior hashing algorithms achieve low memory and computation consumption by pursuing compact hash codes, they still fall short of learning discriminative hash functions from data with complex inherent structure. To address this issue, in this paper, we propose a structure sensitive hashing based on cluster prototypes, which explicitly exploits both global and local structures. An alternating optimization algorithm, minimizing the quantization loss and the spectral embedding loss in turn, is presented to simultaneously discover the cluster prototypes for each hash function and optimally assign unique binary codes to them satisfying the affinity alignment between them. For hash codes of a desired length, an adaptive bit assignment is further appended to the product quantization of the subspaces, approximating the Hamming distances and meanwhile balancing the variance among hash functions. Experimental results on four large-scale benchmarks CIFAR-10, NUS-WIDE, SIFT1M, and GIST1M demonstrate that our approach significantly outperforms state-of-the-art hashing methods in terms of semantic and metric neighbor search.
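    Basic product quantization, before any adaptive bit assignment, splits a vector into subspaces and codes each chunk against that subspace's own codebook. A sketch with small fixed codebooks standing in for the learned ones:

```python
import numpy as np

def pq_encode(x, codebooks):
    """Product quantization encoding of a vector x.

    codebooks: list of (k, d_sub) arrays, one per subspace. x is split
    into consecutive chunks matching the codebook widths, and each
    chunk is mapped to the index of its nearest codeword."""
    codes = []
    start = 0
    for cb in codebooks:
        chunk = x[start:start + cb.shape[1]]
        dists = np.linalg.norm(cb - chunk, axis=1)
        codes.append(int(np.argmin(dists)))
        start += cb.shape[1]
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct the vector from its subspace codeword indices."""
    return np.concatenate([cb[c] for c, cb in zip(codes, codebooks)])
```

    The adaptive bit assignment described above would vary the codebook size k per subspace so that each hash function carries a balanced share of the variance.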

  14. A hydraulic hybrid propulsion method for automobiles with self-adaptive system

    International Nuclear Information System (INIS)

    Wu, Wei; Hu, Jibin; Yuan, Shihua; Di, Chongfeng

    2016-01-01

    A hydraulic hybrid vehicle with a self-adaptive system is proposed. The mode-switching between the driving mode and the hydraulic regenerative braking mode is realised by pressure cross-feedback control. Extensive simulation and test results are presented. The control parameters are reduced and the energy efficiency can be increased by the self-adaptive system. The mode-switching response is fast. The response time can be adjusted by changing the controlling spool diameter of the hydraulic operated check valve in the self-adaptive system. The closing of the valve becomes faster with a smaller controlling spool diameter. The hydraulic regenerative braking mode can be achieved by changing the hydraulic transformer controlled angle. Compared with the conventional electric-hydraulic system, the self-adaptive system for the hydraulic hybrid vehicle mode-switching has a higher reliability and a lower cost. The efficiency of the hydraulic regenerative braking is also increased. - Highlights: • A new hybrid system with a self-adaptive system for automobiles is presented. • The mode-switching is realised by the pressure cross-feedback control. • The energy efficiency can be increased with the self-adaptive system. • The control parameters are reduced with the self-adaptive system.

  15. Differentiation of Enhancing Glioma and Primary Central Nervous System Lymphoma by Texture-Based Machine Learning.

    Science.gov (United States)

    Alcaide-Leon, P; Dufort, P; Geraldo, A F; Alshafai, L; Maralani, P J; Spears, J; Bharatha, A

    2017-06-01

    Accurate preoperative differentiation of primary central nervous system lymphoma and enhancing glioma is essential to avoid unnecessary neurosurgical resection in patients with primary central nervous system lymphoma. The purpose of the study was to evaluate the diagnostic performance of a machine-learning algorithm by using texture analysis of contrast-enhanced T1-weighted images for differentiation of primary central nervous system lymphoma and enhancing glioma. Seventy-one adult patients with enhancing gliomas and 35 adult patients with primary central nervous system lymphomas were included. The tumors were manually contoured on contrast-enhanced T1WI, and the resulting volumes of interest were mined for textural features and subjected to a support vector machine-based machine-learning protocol. Three readers classified the tumors independently on contrast-enhanced T1WI. Areas under the receiver operating characteristic curves were estimated for each reader and for the support vector machine classifier. A noninferiority test for diagnostic accuracy based on paired areas under the receiver operating characteristic curve was performed with a noninferiority margin of 0.15. The mean areas under the receiver operating characteristic curve were 0.877 (95% CI, 0.798-0.955) for the support vector machine classifier; 0.878 (95% CI, 0.807-0.949) for reader 1; 0.899 (95% CI, 0.833-0.966) for reader 2; and 0.845 (95% CI, 0.757-0.933) for reader 3. The mean area under the receiver operating characteristic curve of the support vector machine classifier was significantly noninferior to the mean area under the curve of reader 1 (P = .021), reader 2 (P = .035), and reader 3 (P = .007). Support vector machine classification based on textural features of contrast-enhanced T1WI is noninferior to expert human evaluation in the differentiation of primary central nervous system lymphoma and enhancing glioma. © 2017 by American Journal of Neuroradiology.
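    The area under the ROC curve compared between the classifier and the readers can be computed directly from scores and labels via the rank-sum (Mann-Whitney) identity; a minimal implementation:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2. This is the statistic reported for the SVM
    classifier and the human readers in the study above."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

    The O(n²) pairwise loop is fine at this study's sample sizes; for large datasets one would sort once and use ranks instead.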

  16. Spatial adaptation of the cortical visual evoked potential of the cat.

    Science.gov (United States)

    Bonds, A B

    1984-06-01

    Adaptation that is spatially specific for the adapting pattern has been seen psychophysically in humans. This is indirect evidence for independent analyzers (putatively single units) that are specific for orientation and spatial frequency in the human visual system, but it is unclear how global adaptation characteristics may be related to single unit performance. Spatially specific adaptation was sought in the cat visual evoked potential (VEP), with a view towards relating this phenomenon with what we know of cat single units. Adaptation to sine-wave gratings results in a temporary loss of cat VEP amplitude, with induction and recovery similar to that seen in human psychophysical experiments. The amplitude loss was specific for both the spatial frequency and orientation of the adapting pattern. The bandwidth of adaptation was not unlike the average selectivity of a population of cat single units.

  17. A Kalman Filter for SINS Self-Alignment Based on Vector Observation.

    Science.gov (United States)

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-29

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noises of the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transfer method between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for apparent gravitation is devised, which can reduce the influence of the random noises of the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate.
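    Davenport's q-method itself is compact: build the 4x4 K matrix from the weighted vector observations and take the eigenvector of its largest eigenvalue. A sketch, with the quaternion in [x, y, z, w] order (conventions vary between references):

```python
import numpy as np

def q_method(body, ref, weights):
    """Davenport's q-method: optimal attitude quaternion from weighted
    vector observations, the building block of the alignment scheme
    discussed above.

    body, ref: (n, 3) arrays of unit vectors in the body and reference
    frames; returns the quaternion [qx, qy, qz, qw] maximizing Wahba's
    gain function."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body, ref))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```

    The Kalman-filter formulation above replaces this batch eigendecomposition with a recursive estimate, which is what allows the adaptive handling of observation noise.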

  18. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    Science.gov (United States)

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  19. The CUBLAS and CULA based GPU acceleration of adaptive finite element framework for bioluminescence tomography.

    Science.gov (United States)

    Zhang, Bo; Yang, Xiang; Yang, Fei; Yang, Xin; Qin, Chenghu; Han, Dong; Ma, Xibo; Liu, Kai; Tian, Jie

    2010-09-13

    In molecular imaging (MI), especially optical molecular imaging, bioluminescence tomography (BLT) emerges as an effective imaging modality for small animal imaging. The finite element methods (FEMs), especially the adaptive finite element (AFE) framework, play an important role in BLT. The processing speed of the FEMs and the AFE framework still needs to be improved, although multi-thread CPU technology and multi-CPU technology have already been applied. In this paper, we introduce for the first time a new kind of acceleration technology for the AFE framework for BLT, using the graphics processing unit (GPU). Besides the processing speed, GPU technology offers a balance between cost and performance. CUBLAS and CULA are two important and powerful libraries for programming on NVIDIA GPUs. With the help of CUBLAS and CULA, it is easy to code on NVIDIA GPUs without worrying about the details of the hardware environment of a specific GPU. The numerical experiments are designed to show the necessity, effect, and application of the proposed CUBLAS- and CULA-based GPU acceleration. From the results of the experiments, we conclude that the proposed CUBLAS- and CULA-based GPU acceleration method greatly improves the processing speed of the AFE framework while maintaining a balance between cost and performance.

  20. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Science.gov (United States)

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116
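    The essence of null steering can be shown with a deterministic projection for a uniform linear array: force the array response toward the jammer to zero while keeping gain toward the signal. Real adaptive receivers estimate their weights from the measured interference covariance rather than assuming a known jammer direction, so this is only a sketch of the underlying geometry.

```python
import numpy as np

def steering(n, theta, d=0.5):
    """Steering vector of an n-element uniform linear array with
    element spacing d (in wavelengths) for angle theta (radians from
    broadside)."""
    k = 2.0 * np.pi * d * np.sin(theta)
    return np.exp(1j * k * np.arange(n))

def null_steer(n, theta_sig, theta_jam):
    """Project the desired-signal steering vector orthogonal to the
    jammer's steering vector, placing an exact null on the jammer."""
    s = steering(n, theta_sig)
    j = steering(n, theta_jam)
    w = s - (np.vdot(j, s) / np.vdot(j, j)) * j
    return w
```

    With a four-element array, one such projection per interferer is available before the array runs out of degrees of freedom, which is why the SDR above pairs a four-element array with adaptive weight computation.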

  2. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the high-dimensional feature vector including the extracted texture features, since a high-dimensional feature vector can degrade classification performance; this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
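    The one-against-all scheme trains one binary scorer per class and labels a sample by the highest margin. In the sketch below, simple perceptrons stand in for the paper's kernel SVMs so the example stays self-contained; the wrapping logic is the same.

```python
import numpy as np

def train_oaa(X, y, classes, epochs=50, lr=0.1):
    """One-against-all training: one binary classifier per class,
    each trained to separate that class from all the others.
    Perceptrons are an illustrative stand-in for the SVMs above."""
    models = {}
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias term
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)         # this class vs the rest
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for xi, ti in zip(Xb, t):
                if ti * (xi @ w) <= 0:          # misclassified: update
                    w += lr * ti * xi
        models[c] = w
    return models

def predict_oaa(X, models):
    """Label = class whose binary scorer gives the highest margin."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    classes = list(models)
    scores = np.stack([Xb @ models[c] for c in classes], axis=1)
    return [classes[i] for i in np.argmax(scores, axis=1)]
```

    Swapping each perceptron for an RBF-kernel SVM recovers the OAA MCSVM configuration described above without changing the surrounding logic.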

  3. System matrix computation vs storage on GPU: A comparative study in cone beam CT.

    Science.gov (United States)

    Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2018-02-01

    Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, compares different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage has shown a performance similar to the on-the-fly approach, while still relying on symmetries. Partial system matrix storage was shown to yield the lowest relative
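    A thin-ray system matrix row is just the list of intersection lengths between one ray and the voxel grid. A compact 2D sketch is below; production codes use incremental Siddon/Jacobs-style traversal on the GPU and exploit geometric symmetries, as discussed above.

```python
import math

def ray_voxel_lengths(p0, p1, nx, ny, voxel=1.0):
    """Intersection length of the segment p0 -> p1 with each cell of
    an nx-by-ny grid of square voxels whose corner sits at the origin.

    Returns a dict mapping (ix, iy) -> length: one (sparse) system
    matrix row in the thin-ray model, reduced to 2D for brevity."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # gather all parametric crossings of grid lines along the segment
    ts = {0.0, 1.0}
    for i in range(nx + 1):
        if dx != 0:
            t = (i * voxel - p0[0]) / dx
            if 0.0 < t < 1.0:
                ts.add(t)
    for j in range(ny + 1):
        if dy != 0:
            t = (j * voxel - p0[1]) / dy
            if 0.0 < t < 1.0:
                ts.add(t)
    ts = sorted(ts)
    seglen = math.hypot(dx, dy)
    lengths = {}
    for t0, t1 in zip(ts, ts[1:]):
        tm = 0.5 * (t0 + t1)               # segment midpoint locates the voxel
        x, y = p0[0] + tm * dx, p0[1] + tm * dy
        ix, iy = int(x // voxel), int(y // voxel)
        if 0 <= ix < nx and 0 <= iy < ny:
            lengths[(ix, iy)] = lengths.get((ix, iy), 0.0) + (t1 - t0) * seglen
    return lengths
```

    Storing every such row for every view is what blows past RAM limits; recomputing rows on demand is the on-the-fly strategy compared in the study.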

  4. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  5. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

  6. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with the numerical human model using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs slowly. We focus, therefore, on general-purpose computing on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time in comparison with a conventional CPU, even with a naive GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
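    The FDTD method maps well to GPUs because each grid cell's update depends only on its immediate neighbours, so one thread can own one cell. A minimal 1-D Yee-scheme sketch of those update equations in NumPy (grid size, source position, and Courant factor are illustrative choices, not taken from the record above):

```python
import numpy as np

def fdtd_1d(steps=300, n=200, src=100):
    """Minimal 1-D Yee-scheme FDTD loop; ez and hy are staggered fields.

    The two vectorized updates below are exactly what a CUDA kernel
    would apply with one thread per grid cell.
    """
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        # H-field update: hy[i] couples ez[i] and ez[i+1] (Courant factor 0.5)
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # E-field update: ez[i] couples hy[i-1] and hy[i]
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # soft source: Gaussian pulse injected at one cell
        ez[src] += np.exp(-((t - 30) / 10.0) ** 2)
    return ez

fields = fdtd_1d()
```

    On a GPU the same two updates become kernels launched once per timestep; the thread-block-size dependence noted in the abstract comes from how those kernels tile the grid.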

  7. Real-time Image Generation for Compressive Light Field Displays

    International Nuclear Information System (INIS)

    Wetzstein, G; Lanman, D; Hirsch, M; Raskar, R

    2013-01-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  8. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    Science.gov (United States)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed by this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as four separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed-precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dam breaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dam-break simulation, GPU speed-ups of 10-18 times over single-threaded and 1.1-4.5 times over 16-threaded CPU run times are demonstrated.
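    The role of the Jacobi preconditioner mentioned above is easy to see in a small sketch: it is simply the inverse diagonal of the PPE matrix, applied once per iteration of a conjugate gradient solve. This is a generic preconditioned CG, not the ViennaCL implementation:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.

    M = diag(A) is cheap to build, which matters when the matrix is
    reconstructed every timestep for moving particles.
    """
    m_inv = 1.0 / np.diag(A)      # Jacobi preconditioner, applied elementwise
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small symmetric positive-definite system standing in for a PPE
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_pcg(A, b)
```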

  9. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-01-01

    Full Text Available We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method performs clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  10. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  11. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    Science.gov (United States)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual

  12. Fuzzy Relational Compression Applied on Feature Vectors for Infant Cry Recognition

    Science.gov (United States)

    Reyes-Galaviz, Orion Fausto; Reyes-García, Carlos Alberto

    Data compression is always advisable when it comes to handling and processing information quickly and efficiently. There are two main problems that need to be solved when it comes to handling data: storing information in smaller spaces and processing it in the shortest possible time. When it comes to infant cry analysis (ICA), there is always the need to construct large sound repositories of crying babies, with samples that have to be analyzed and used to train and test pattern recognition algorithms; this becomes a time-consuming task when working with uncompressed feature vectors. In this work, we show a simple but efficient method that uses the Fuzzy Relational Product (FRP) to compress the information inside a feature vector, building from this a compressed matrix that helps us recognize two kinds of pathologies in infants: asphyxia and deafness. We describe the sound analysis, which consists of the extraction of Mel Frequency Cepstral Coefficients that generate vectors which are later compressed using FRP. There is also a description of the infant cry database used in this work, along with the training and testing of a Time Delay Neural Network on the compressed features, which shows a performance of 96.44% with our proposed feature vector compression.
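    As an illustration of relational compression, the sketch below shrinks a feature matrix through an assumed fuzzy relation using the standard max-min composition; the paper's exact FRP operator and its relation design are not reproduced here, and the matrix sizes are arbitrary:

```python
import numpy as np

def maxmin_compose(r, s):
    """Max-min fuzzy relation composition: (r o s)[i, k] = max_j min(r[i, j], s[j, k])."""
    return np.max(np.minimum(r[:, :, None], s[None, :, :]), axis=1)

# Compress an (n x m) feature matrix to (n x k) through an (m x k) fuzzy
# relation; values are memberships in [0, 1], e.g. normalized MFCC features.
rng = np.random.default_rng(0)
features = rng.random((4, 16))
relation = rng.random((16, 3))    # hypothetical relation, not the paper's
compressed = maxmin_compose(features, relation)
```

    The compressed matrix keeps one membership value per (sample, relation column) pair, which is the kind of reduced representation a classifier can then be trained on.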

  13. GPU-based Branchless Distance-Driven Projection and Backprojection.

    Science.gov (United States)

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behavior can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone-beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm.
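    The three branchless steps can be illustrated in 1-D: integrate the row once (cumulative sum), sample that integral at detector-cell boundaries with linear interpolation (the operation GPU texture hardware accelerates), and differentiate. A NumPy sketch with toy geometry; the real algorithm applies this to boundaries projected in 3-D cone-beam geometry:

```python
import numpy as np

def branchless_dd_project(row, pix_edges, det_edges):
    """Branchless distance-driven projection of one image row (1-D sketch)."""
    # 1) integration: running integral of the row at each pixel boundary
    cum = np.concatenate(([0.0], np.cumsum(row * np.diff(pix_edges))))
    # 2) linear interpolation of that integral at detector-cell boundaries;
    #    no data-dependent branching on boundary ordering is needed
    cum_at_det = np.interp(det_edges, pix_edges, cum)
    # 3) differentiation: integral over each detector cell, width-normalized
    return np.diff(cum_at_det) / np.diff(det_edges)

row = np.array([1.0, 2.0, 3.0, 4.0])
pix_edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
det_edges = np.array([0.0, 2.0, 4.0])
vals = branchless_dd_project(row, pix_edges, det_edges)   # mean of [1,2] and [3,4]
```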

  14. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as a classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can provide discrimination between different emotions in visual expressions beyond the conveyed pitch and formant tracks. In addition, de-noising in 2-D images can be more easily completed than de-noising in 1-D speech.
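    Laws' masks are outer products of a handful of 1-D kernels (level, edge, spot, ripple), and texture energy is the averaged magnitude of the filtered image. A pure-NumPy sketch of such a TII-style feature vector, using a random array as a stand-in for a spectrogram image; the paper's contrast-enhancement step and its particular mask subset are omitted:

```python
import numpy as np

# Laws' 1-D kernels: Level, Edge, Spot, Ripple
L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])
S5 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])
R5 = np.array([1.0, -4.0, 6.0, -4.0, 1.0])

def conv_sep(img, row_k, col_k):
    """Separable 2-D convolution: filter rows with row_k, then columns with col_k."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, row_k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, col_k, mode="same"), 0, tmp)

def laws_energy(img, row_k, col_k):
    """Texture energy: mean absolute response under the Laws mask outer(col_k, row_k)."""
    return float(np.mean(np.abs(conv_sep(img, row_k, col_k))))

rng = np.random.default_rng(0)
spectrogram = rng.random((32, 32))          # stand-in for a spectrogram image
kernels = (L5, E5, S5, R5)
features = [laws_energy(spectrogram, a, b) for a in kernels for b in kernels]
```

    The resulting 16-value energy vector is the kind of texture descriptor that would then be fed to the SVM classifier.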

  15. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  16. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here, a compression system that incorporates the sensitivities of the HVS, coded with SPIHT quantization, is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect the characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments are done on both monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed with our technique showed improvement in visual quality and the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance, perceptually and quantitatively, than the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. Results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio, and coding/decoding time when compared with JPEG 2000 (Kakadu).
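    The Mannos-Sakrison luminance CSF has a simple closed form, so per-subband weighting factors of the kind described above can be sketched directly. The mapping from decomposition level to a representative radial frequency depends on viewing distance and display resolution; the frequencies below are assumed values for illustration:

```python
import numpy as np

def mannos_csf(f):
    """Mannos-Sakrison contrast sensitivity function, f in cycles/degree.

    A(f) = 2.6 * (0.0192 + 0.114 f) * exp(-(0.114 f)^1.1);
    sensitivity peaks near 8 cycles/degree.
    """
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

# one CSF weight per wavelet subband, at an assumed representative
# radial frequency for each decomposition level (display-dependent)
subband_freqs = np.array([2.0, 4.0, 8.0, 16.0])
weights = mannos_csf(subband_freqs)
```

    Multiplying each subband's wavelet coefficients by its weight before SPIHT coding biases the bit budget toward the frequencies the HVS is most sensitive to.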

  17. Cross-Language Plagiarism Detection System Using Latent Semantic Analysis and Learning Vector Quantization

    Directory of Open Access Journals (Sweden)

    Anak Agung Putri Ratna

    2017-06-01

    Full Text Available Computerized cross-language plagiarism detection has recently become essential. With the scarcity of scientific publications in Bahasa Indonesia, many Indonesian authors frequently consult publications in English in order to boost the quantity of scientific publications in Bahasa Indonesia (which is currently rising). Due to the syntax disparity between Bahasa Indonesia and English, most of the existing methods for automated cross-language plagiarism detection do not provide satisfactory results. This paper analyses the possibility of developing Latent Semantic Analysis (LSA) into a computerized cross-language plagiarism detector for two languages with different syntax. To improve performance, various alterations to LSA are suggested. By using a learning vector quantization (LVQ) classifier with the LSA and taking the Frobenius norm into account, accuracy has reached up to 65.98%. The results of the experiments showed that the best accuracy achieved is 87% with a document size of 6 words, and the document definition size must be kept below 10 words in order to maintain high accuracy. Additionally, based on the experimental results, this paper suggests utilizing the frequency occurrence method as opposed to the binary method for the term-document matrix construction.
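    The LSA core reduces a term-document matrix with a truncated SVD and compares documents in the resulting latent space. A minimal sketch using frequency counts (rather than binary entries, as the paper recommends); the LVQ classification stage is not shown:

```python
import numpy as np

def lsa_similarity(td, k=2):
    """Truncated-SVD LSA: pairwise cosine similarity between documents.

    td is a terms x documents matrix of frequency counts; documents are
    projected into a k-dimensional latent semantic space.
    """
    u, s, vt = np.linalg.svd(td, full_matrices=False)
    docs = (np.diag(s[:k]) @ vt[:k]).T            # documents in latent space
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.clip(norms, 1e-12, None)
    return docs @ docs.T                          # cosine similarity matrix

# toy counts: documents 0 and 1 share vocabulary, document 2 does not
td = np.array([[2.0, 3.0, 0.0],
               [1.0, 2.0, 0.0],
               [0.0, 0.0, 4.0]])
sim = lsa_similarity(td)
```

    High off-diagonal similarity between two documents in different languages (after terms are aligned) is what flags a plagiarism candidate.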

  18. Coronary angiogram video compression for remote browsing and archiving applications.

    Science.gov (United States)

    Ouled Zaid, Azza; Fradj, Bilel Ben

    2010-12-01

    In this paper, we propose an H.264/AVC-based compression technique adapted to coronary angiograms. The H.264/AVC coder has proven to use the most advanced and accurate motion compensation process, but at the cost of high computational complexity. On the other hand, analysis of coronary X-ray images reveals large areas containing no diagnostically important information. Our contribution is to exploit the energy characteristics of equal-size slice regions to determine the regions with relevant information content, to be encoded using the H.264 coding paradigm. The other regions are compressed using fixed-block motion compensation and conventional hard-decision quantization. Experiments have shown that, at the same bitrate, this procedure reduces the H.264 coder computing time by about 25% while attaining the same visual quality. A subjective assessment based on the consensus approach leads to a compression ratio of 30:1, which ensures both diagnostic adequacy and sufficient compression with regard to storage and transmission requirements. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Quantized Predictive Control over Erasure Channels

    DEFF Research Database (Denmark)

    E. Quevedo, Daniel; Østergaard, Jan

    2009-01-01

    ...i.i.d. dropouts, the controller transmits data packets containing quantized plant input predictions. These minimize a finite horizon cost function and are provided by an appropriate optimal entropy coded dithered lattice vector quantizer. Within this context, we derive an equivalent noise-shaping model...

  20. Visual Localization by Place Recognition Based on Multifeature (D-λLBP++HOG)

    Directory of Open Access Journals (Sweden)

    Yongliang Qiao

    2017-01-01

    Full Text Available Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information using stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). In order to represent the scene in depth, multifeature fusion of D-CSLBP and HOG features provides valuable information and permits decreasing the effect of some typical problems in place recognition such as perceptual aliasing. It improves visual recognition performance by taking advantage of depth, texture, and shape information. In addition, for real-time visual localization, the locality-sensitive hashing (LSH) method was used to compress the high-dimensional multifeature vectors into binary vectors, which speeds up the process of image matching. To show its effectiveness, the proposed method is tested and evaluated using real datasets acquired in outdoor environments. Given the obtained results, our approach allows more effective visual localization compared with the state-of-the-art method FAB-MAP.
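    A common way to compress real-valued descriptors into binary LSH codes is random-hyperplane hashing, where Hamming distance between codes approximates angular distance between the original vectors. This is a generic sketch of that hashing step; the paper's exact hash family and parameters are not specified here:

```python
import numpy as np

def lsh_binary_codes(features, n_bits=64, seed=0):
    """Random-hyperplane LSH: one sign bit per random projection."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
x = rng.standard_normal(128)                 # a multifeature descriptor
near = x + 0.05 * rng.standard_normal(128)   # a close neighbour (similar place)
far = rng.standard_normal(128)               # an unrelated descriptor
codes = lsh_binary_codes(np.stack([x, near, far]))
```

    Matching then compares short binary codes instead of 128-dimensional floats, which is what makes image matching fast enough for real time.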

  1. Overview of implementation of DARPA GPU program in SAIC

    Science.gov (United States)

    Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis

    2008-04-01

    This paper reviews the implementation of the DARPA MTO STAP-BOY program for both Phase I and II conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program will focus on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm will be implemented on the GPU using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. This algorithm will include fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.
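    The covariance factorization at the heart of a STAP weight update amounts to solving R w ∝ s through a factorization of the interference covariance R. A generic MVDR-style sketch with a Cholesky factorization and a toy covariance (the program's actual radar data and tuning techniques are not reproduced):

```python
import numpy as np

def stap_weights(R, s):
    """MVDR-style adaptive weights w = R^{-1} s / (s^H R^{-1} s).

    R is factorized once (Cholesky), then R x = s is solved by two
    triangular solves; this is the step a GPU implementation accelerates.
    """
    L = np.linalg.cholesky(R)                      # R = L L^H
    x = np.linalg.solve(L.conj().T, np.linalg.solve(L, s))
    return x / (s.conj() @ x)

n = 8
s = np.ones(n, dtype=complex) / np.sqrt(n)         # steering vector
v = np.exp(2j * np.pi * 0.3 * np.arange(n))        # interferer signature
R = np.eye(n) + 10.0 * np.outer(v, v.conj())       # toy covariance
w = stap_weights(R, s)
```

    The resulting weights satisfy the distortionless constraint w^H s = 1 while attenuating the interference direction v.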

  2. Artificial intelligence systems based on texture descriptors for vaccine development.

    Science.gov (United States)

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2011-02-01

    The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.

  3. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of remote radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from the compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results were confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
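    The scalar analogue of the dead-zone idea is easy to sketch: a uniform quantizer whose zero bin is widened so that small (noise-like) wavelet coefficients are discarded cheaply. This 1-D illustration is only the principle; the thesis applies the dead zone to lattice vectors in 3-D:

```python
import numpy as np

def deadzone_quantize(x, step, dz_ratio=2.0):
    """Uniform scalar quantizer with a central dead zone of width dz_ratio*step."""
    half_dz = dz_ratio * step / 2.0
    mag = np.maximum(np.abs(x) - half_dz, 0.0)   # magnitudes inside the dead zone -> 0
    return np.sign(x) * np.ceil(mag / step)

def deadzone_dequantize(q, step, dz_ratio=2.0):
    """Reconstruct at the midpoint of each nonzero decision interval."""
    half_dz = dz_ratio * step / 2.0
    return np.sign(q) * (half_dz + (np.abs(q) - 0.5) * step) * (q != 0)

coeffs = np.array([-2.6, -0.3, 0.0, 0.4, 1.2, 3.3])
indices = deadzone_quantize(coeffs, step=1.0)
recon = deadzone_dequantize(indices, step=1.0)
```

    Enlarging the dead zone raises the compression ratio (more zero indices for the entropy coder) at the cost of extra distortion on small coefficients.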

  4. Cellular automata codebooks applied to compact image compression

    Directory of Open Access Journals (Sweden)

    Radu DOGARU

    2006-12-01

    Full Text Available Emergent computation in semi-totalistic cellular automata (CA) is used to generate a set of basis vectors (a codebook). Such codebooks are convenient for simple and circuit-efficient compression schemes based on binary vector quantization, applied to the bitplanes of any monochrome or color image. Encryption is also naturally included when using these codebooks. Natural images would require less than 0.5 bits per pixel (bpp), while the quality of the reconstructed images is comparable with that of traditional compression schemes. The proposed scheme is attractive for low-power, sensor-integrated applications.

  5. An Adaptive Supervisory Sliding Fuzzy Cerebellar Model Articulation Controller for Sensorless Vector-Controlled Induction Motor Drive Systems

    Directory of Open Access Journals (Sweden)

    Shun-Yuan Wang

    2015-03-01

    Full Text Available This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed-sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprises a supervisory controller, an integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporates an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementations of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.

  6. Development of real time abdominal compression force monitoring and visual biofeedback system

    Science.gov (United States)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that can monitor abdominal compression force (ACF) in real time and provide a surrogate signal even under abdominal compression; the system can also provide visual biofeedback (VBF). The real-time ACF monitoring system consists of an abdominal compression device, an ACF monitoring unit, and a control system including an in-house ACF management program. We anticipated that the ACF variation caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in a test to obtain correlation coefficients between ACF variation and tidal volume. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF, intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer; R² values ranged from 0.79 to 0.84. The ACF variation resembled a respiratory pattern, and slight variations of the ACF range were observed between sessions. An average ACF control rate (i.e. compliance) of about 73-77% over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for the VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF setup error and the intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could

  7. Effects of Texture and Grain Size on the Yield Strength of ZK61 Alloy Rods Processed by Cyclic Extrusion and Compression.

    Science.gov (United States)

    Zhang, Lixin; Zhang, Wencong; Cao, Biao; Chen, Wenzhen; Duan, Junpeng; Cui, Guorong

    2017-10-26

    ZK61 alloy rods with different grain sizes and crystallographic textures were successfully fabricated by cyclic extrusion and compression (CEC). Their room-temperature tension and compression yield strengths displayed a significant dependence on grain size and texture, essentially attributed to {10-12} twinning. The texture variations were characterized by the angle θ between the c-axis of the grain and the extrusion direction (ED) during the process. A contour map of room-temperature yield strength as a function of grain size and the angle θ was obtained. It showed that both the tension and compression yield strengths of the ZK61 alloy were fully consistent with the Hall-Petch relationship for a given texture, whereas at the same grain size the tension and compression yield strengths followed completely opposite trends as the texture altered. The friction stresses of the different deformation modes, calculated from the texture, confirmed that the tension yield strength of the CECed ZK61 alloy rods was determined by both basal slip and tension twinning during tension deformation at room temperature, while the compression yield strength was mainly determined by basal slip during compression deformation.

  8. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    Science.gov (United States)

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

    A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for computing the induced electric field in high-resolution anatomical models of the human body exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetry of the complex-valued linear system of equations. Compared with the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time using an asynchronous concurrent execution design. The communication overhead is well hidden, so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can be up to 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large-dimensional problems efficiently: a whole adult body discretized at 1-mm resolution can be solved in just a few minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases at a resolution that meets the requirements of international dosimetry guidelines.
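The COCG iteration this record relies on differs from ordinary conjugate gradients only in using the unconjugated inner product aᵀb, which is valid because the admittance matrix is complex *symmetric* rather than Hermitian. A minimal dense-matrix sketch follows; the 3×3 system is invented for illustration, and the paper's solver is of course sparse and multi-GPU.

```python
# Conjugate orthogonal conjugate gradient (COCG) for a complex symmetric
# system A x = b, where A == A^T but A != conj(A)^T.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dotu(a, b):                      # unconjugated inner product a^T b
    return sum(x * y for x, y in zip(a, b))

def cocg(A, b, tol=1e-12, maxit=200):
    x = [0j] * len(b)
    r = list(b)                      # residual b - A x for x = 0
    p = list(r)
    rho = dotu(r, r)
    for _ in range(maxit):
        q = matvec(A, p)
        alpha = rho / dotu(p, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        if max(abs(ri) for ri in r) < tol:
            break
        rho_new = dotu(r, r)
        beta = rho_new / rho
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rho = rho_new
    return x

A = [[4 + 1j, 1 + 0j, 0j],
     [1 + 0j, 3 + 2j, 1j],
     [0j,     1j,     2 + 1j]]      # symmetric: A[i][j] == A[j][i]
b = [1 + 0j, 2 + 0j, 0 + 1j]
x = cocg(A, b)
residual = max(abs(bi - yi) for bi, yi in zip(b, matvec(A, x)))
```

The storage saving the abstract cites comes from the same property: unlike BiCGSTAB, COCG needs no shadow residual sequence, so fewer vectors are kept per iteration.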

  9. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now favored lossless compression, most applications suffer from the low compression ratios this kind of compression allows. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on a 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which takes into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  10. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the least dangerous one for the operator. It is therefore mandatory to have an efficient dosimetry simulation code yielding accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. For 3D industrial scenes composed of meshes, the computation cost lies in the fast computation of all the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800,000 rays, 800,000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst case, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive-precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with very small loss of performance (less than 5% overhead), and without any need for scene-dependent tuning. (author)
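For context, the non-robust baseline such GPU codes start from is the standard floating-point ray/triangle test (Möller-Trumbore); the paper's contribution is to detect borderline cases and redo them in adaptive precision on the CPU. A plain-Python sketch of the baseline test, with an invented triangle and rays:

```python
# Moller-Trumbore ray/triangle intersection in ordinary floating point.
# Near-edge hits can flip either way under rounding -- exactly the cases
# a robust adaptive-precision scheme would re-evaluate exactly.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-12):
    """Return the ray parameter t of the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray (nearly) parallel to the plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
hit = ray_hits_triangle((0.2, 0.2, -1.0), (0.0, 0.0, 1.0), *tri)
miss = ray_hits_triangle((2.0, 2.0, -1.0), (0.0, 0.0, 1.0), *tri)
```

A missed intersection here means a missed attenuating triangle, which is how the computed dose rate can end up below the real one.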

  11. A fingerprint key binding algorithm based on vector quantization and error correction

    Science.gov (United States)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. To cope with the intrinsic fuzziness of variant fingerprints, vector quantization and error-correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.

  12. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. Combining an adaptive probability model with predictive coding, the algorithm increases the compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates the probabilities of that block, and the decoder can accurately recover each encoded block from the codebook information. Adopting adaptive arithmetic coding in this way greatly improves the image compression rate, and the results show that it is an effective compression technology.
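The adaptive-model idea can be shown in isolation: re-estimate symbol probabilities from running counts as the data is scanned, so predictable blocks cost fewer bits. The sketch below uses invented toy data and only accumulates the ideal code length −log₂ p per symbol; a real coder would feed these probabilities into an arithmetic coder, whose output length approaches this sum.

```python
# Order-0 adaptive probability model: the cost of each symbol is charged
# under the model *before* the model is updated with that symbol, so the
# decoder can mirror the updates exactly.

import math

def adaptive_code_length(symbols, alphabet_size):
    """Ideal bits to code `symbols` with an adaptively updated count model."""
    counts = [1] * alphabet_size            # Laplace-smoothed initial model
    total = alphabet_size
    bits = 0.0
    for s in symbols:
        bits += -math.log2(counts[s] / total)   # cost under the current model
        counts[s] += 1                          # adapt after coding the symbol
        total += 1
    return bits

skewed = [0] * 90 + [1] * 10                # highly predictable binary block
uniform = list(range(100))                  # incompressible block, big alphabet
skewed_bits = adaptive_code_length(skewed, 2)
uniform_bits = adaptive_code_length(uniform, 256)
```

The skewed block costs well under one bit per symbol even though no probabilities were known in advance, which is the whole point of adaptivity.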

  13. Neuromorphic VLSI vision system for real-time texture segregation.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2008-10-01

    The visual system of the brain can perceive an external scene in real time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study is to develop vision-system hardware, inspired by the hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field-programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system, and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for investigating the functions of higher-order cells that can be obtained by combining the simple and complex cells.

  14. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques in the context of an increasing demand for long term archiving solutions, as well as the rapid development of distant radiology make the use of compression inevitable. Indeed, if the medical community has sided until now with compression without losses, most of applications suffer from compression ratios which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. So, we propose a new loss coding scheme based on 3D (3 dimensional) Wavelet Transform and Dead Zone Lattice Vector Quantization 3D (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can out-perform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical patricians. The second contribution of this document assesses the effect with-loss image compression on CAD (Computer-Aided Decision) detection performance of solid lung nodules. This work on 120 significant lungs images shows that detection did not suffer until 48:1 compression and still was robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation dedicated to 2D DZLVQ uses an exponential of the rate-distortion (R-D) functions. The second allocation for 2D and 3D medical images is based on block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  15. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

    2012-07-01

    The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward owing to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ∼2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance thread-level parallelism was proposed. (authors)

  16. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment

    International Nuclear Information System (INIS)

    Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.

    2012-01-01

    The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward owing to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ∼2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance thread-level parallelism was proposed. (authors)

  17. GPU-accelerated adjoint algorithmic differentiation

    Science.gov (United States)

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems, such as classifier training or medical image reconstruction, can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated on many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of lightweight threads and the limited memory size in general as well as per thread. We show how these limitations can be mitigated if the cost function is expressed using GPU-accelerated vector and matrix operations that are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of magnitude. Vectorization also allowed the use of optimized parallel libraries during the forward and reverse passes, which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended to more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
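The core of taping and backpropagation at *vector* granularity, the granularity shift that makes the tape short and GPU replay practical, can be sketched with a toy tape. The API below is invented for illustration; the paper's AAD software is far more general.

```python
# Toy reverse-mode AD where the tape records whole vector operations.
# Each taped op stores a closure that propagates adjoints backwards.

class Tape:
    def __init__(self):
        self.ops = []                  # backward closures, in forward order

    def mul(self, a, b):               # elementwise product
        node = {"v": [x * y for x, y in zip(a["v"], b["v"])]}
        node["g"] = [0.0] * len(node["v"])
        def back():
            for i, g in enumerate(node["g"]):
                a["g"][i] += g * b["v"][i]
                b["g"][i] += g * a["v"][i]
        self.ops.append(back)
        return node

    def total(self, a):                # reduce a vector to a scalar
        node = {"v": sum(a["v"]), "g": 0.0}
        def back():
            for i in range(len(a["v"])):
                a["g"][i] += node["g"]
        self.ops.append(back)
        return node

    def backprop(self, out):
        out["g"] = 1.0                 # seed the adjoint of the cost
        for back in reversed(self.ops):
            back()

def var(vals):
    return {"v": list(vals), "g": [0.0] * len(vals)}

t = Tape()
x = var([1.0, 2.0, 3.0])
y = t.total(t.mul(x, x))               # f(x) = sum x_i^2, so df/dx_i = 2 x_i
t.backprop(y)
```

With one tape entry per vector op rather than per scalar, the tape length no longer scales with the vector dimension, only with the number of operations.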

  18. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus; Krueger, Jens; Beyer, Johanna; Bruckner, Stefan

    2013-01-01

    and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems

  19. Model-based VQ for image data archival, retrieval and distribution

    Science.gov (United States)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
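The codebook construction described above can be sketched directly: draw Laplacian variates from a seeded generator, group them into vectors, and shape each vector by weighting its DCT coefficients before inverting the transform. The weight vector below is a made-up stand-in for the HVS weight matrix, lambda is treated as the Laplacian scale, and the fixed seed stands in for whatever shared randomness lets encoder and decoder regenerate the same codebook from lambda alone.

```python
# Model-based VQ codebook sketch: Laplacian noise -> DCT -> perceptual
# weighting -> inverse DCT.  Pure Python, 1-D vectors for clarity.

import math, random

def laplacian_samples(lam, n, rng):
    """Zero-centred Laplacian variates (scale lam) via the inverse CDF."""
    out = []
    for _ in range(n):
        u = rng.random() - 0.5
        out.append(-lam * math.copysign(math.log(max(1.0 - 2.0 * abs(u), 1e-300)), u))
    return out

def dct(x):    # orthonormal DCT-II
    N = len(x)
    return [math.sqrt((1.0 if k == 0 else 2.0) / N) *
            sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):   # orthonormal DCT-III, the inverse of dct() above
    N = len(X)
    return [sum(math.sqrt((1.0 if k == 0 else 2.0) / N) * X[k] *
                math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

def make_codebook(lam, size, dim, weights, seed=0):
    rng = random.Random(seed)          # shared seed: the decoder regenerates it
    book = []
    for _ in range(size):
        vec = laplacian_samples(lam, dim, rng)
        shaped = [w * c for w, c in zip(weights, dct(vec))]
        book.append(idct(shaped))
    return book

weights = [1.0, 0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.1]   # hypothetical HVS weights
book = make_codebook(lam=4.0, size=16, dim=8, weights=weights)
```

Because the whole codebook is a deterministic function of lambda (and the shared seed), only lambda needs to travel in the coded file, which is exactly the property the abstract highlights.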

  20. Purely Functional Compressed Bit Vectors with Applications and Implementations

    OpenAIRE

    Kaasinen, Joel

    2011-01-01

    The study of compressed data structures strives to represent information on a computer concisely — using as little space as possible. Compressed bit vectors are the simplest compressed data structure. They are used as a basis for more complex data structures with applications in, for example, computational biology. Functional programming is a programming paradigm that represents computation using functions without side-effects (such as mutation). Data structures that are representable in...

  1. A Numerical Study of Quantization-Based Integrators

    Directory of Open Access Journals (Sweden)

    Barros Fernando

    2014-01-01

    Full Text Available Adaptive step size solvers are nowadays considered fundamental to achieve efficient ODE integration. While, traditionally, ODE solvers have been designed based on discrete time machines, new approaches based on discrete event systems have been proposed. Quantization provides an efficient integration technique based on signal threshold crossing, leading to independent and modular solvers communicating through discrete events. These solvers can benefit from the large body of knowledge on discrete event simulation techniques, like parallelization, to obtain efficient numerical integration. In this paper we introduce new solvers based on quantization and adaptive sampling techniques. Preliminary numerical results comparing these solvers are presented.
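The threshold-crossing idea behind quantization-based integrators can be shown on the scalar decay equation dx/dt = -x: instead of stepping time uniformly, the solver jumps directly to the next instant at which the state drifts one quantum from its last emitted value. This is a minimal QSS1-style sketch for one equation, not one of the paper's solvers.

```python
# First-order quantized-state (QSS1) integration of dx/dt = -x, x(0) = 1.
# Between events the state moves linearly at the rate f(q) evaluated at
# the *quantized* state q; an event fires when |x - q| reaches the quantum.

import math

def qss1_decay(x0, quantum, t_end):
    t, x = 0.0, x0
    q = x                              # quantized state driving the derivative
    while t < t_end:
        dxdt = -q                      # f evaluated at the quantized state
        if dxdt == 0.0:
            break
        dt = quantum / abs(dxdt)       # time until x is one quantum from q
        if t + dt >= t_end:
            x += dxdt * (t_end - t)    # finish the final partial segment
            break
        t += dt
        x += dxdt * dt                 # lands exactly one quantum below q
        q = x                          # emit the next quantization event
    return x

approx = qss1_decay(1.0, 1e-4, 1.0)
exact = math.exp(-1.0)
```

In a network of such integrators, each emitted event is a discrete message to the neighbours that depend on this state, which is what makes the solvers modular and amenable to discrete-event parallelization.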

  2. Artificial neural network does better spatiotemporal compressive sampling

    Science.gov (United States)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

    Spatiotemporal sparseness is generated naturally by the human visual system, based on artificial-neural-network modeling of associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate the information, one uses spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). However, for higher-dimensional spatiotemporal information concentration, mathematics cannot be as flexible as a living human sensory system, obviously for survival reasons. The rest of the story is given in the paper.

  3. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, R [University of Alberta, Edmonton, AB (Canada); Fallone, B [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada); MagnetTx Oncology Solutions, Edmonton, AB (Canada); St Aubin, J [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: To develop a Graphics Processing Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first-order LBTE was reformulated as a second-order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates in angle, and the Multigroup method in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code, written in CUDA-C, was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full, computationally intensive finite-element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary-condition application took 452±1 ms on GPU. Matrix assembly for 16 energy groups took 556±3 s on CPU and 358±2 ms on GPU using the mappings developed. The CG solver took 93±1 s on CPU and 468±2 ms on GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been

  4. Optimizing a mobile robot control system using GPU acceleration

    Science.gov (United States)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  5. Acoustic reverse-time migration using GPU card and POSIX thread based on the adaptive optimal finite-difference scheme and the hybrid absorbing boundary condition

    Science.gov (United States)

    Cai, Xiaohui; Liu, Yang; Ren, Zhiming

    2018-06-01

    Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep dips and subsalt, but its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphics processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three ameliorative strategies to implement RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues that arise when the former is introduced on a GPU card: it is time-consuming and produces chaotic threads. Third, for large-scale data, a combinatorial strategy of optimal checkpointing and efficient boundary storage is introduced as a trade-off between memory and recomputation. To save communication time between host and disk, a portable operating system interface (POSIX) thread is utilized to employ another CPU core at the checkpoints. Applications of the three strategies in RTM on a GPU with the compute unified device architecture (CUDA) programming language demonstrate their efficiency and validity.

  6. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
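The decomposition referenced above, cubic B-spline interpolation rebuilt from *linear* interpolations, can be checked in plain Python: the four-tap cubic sum equals two weighted linear lookups, which is what lets GPU linear texture filtering do most of the work. The sample data and lookup position below are invented for the check.

```python
# Cubic B-spline interpolation via two linear fetches, verified against
# the direct four-tap weighted sum.  linear_fetch() stands in for a GPU
# linear texture lookup.

import math

def cubic_weights(a):
    """Uniform cubic B-spline basis weights at fractional offset a in [0, 1)."""
    w0 = (1 - a) ** 3 / 6.0
    w1 = (3 * a**3 - 6 * a**2 + 4) / 6.0
    w2 = (-3 * a**3 + 3 * a**2 + 3 * a + 1) / 6.0
    w3 = a ** 3 / 6.0
    return w0, w1, w2, w3

def linear_fetch(f, x):
    """Linearly interpolated lookup at continuous position x."""
    i = int(math.floor(x))
    a = x - i
    return (1 - a) * f[i] + a * f[i + 1]

def cubic_direct(f, x):
    i = int(math.floor(x))
    w = cubic_weights(x - i)
    return sum(wk * f[i - 1 + k] for k, wk in enumerate(w))

def cubic_two_fetches(f, x):
    i = int(math.floor(x))
    w0, w1, w2, w3 = cubic_weights(x - i)
    g0, g1 = w0 + w1, w2 + w3          # combined weights of the two fetches
    p0 = (i - 1) + w1 / g0             # fetch positions chosen so that the
    p1 = (i + 1) + w3 / g1             # hardware blend reproduces w0..w3
    return g0 * linear_fetch(f, p0) + g1 * linear_fetch(f, p1)

f = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0]   # sample values on a 1-D grid
x = 2.6
direct = cubic_direct(f, x)
two_tap = cubic_two_fetches(f, x)
```

In 1-D this replaces four memory reads with two filtered ones; in 3-D the same trick reduces 64 reads to 8 trilinear fetches, which is where the GPU gain comes from.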

  7. Lossless image data sequence compression using optimal context quantization

    DEFF Research Database (Denmark)

    Forchhammer, Søren; WU, Xiaolin; Andersen, Jakob Dahl

    2001-01-01

    Context-based entropy coding often faces a conflict between the desire for large templates and the problem of context dilution. We consider the problem of finding the quantizer Q that quantizes the K-dimensional causal context Ci=(X(i-t1), X(i-t2), …, X(i-tK)) of a source symbol Xi into one of M...

  8. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  9. GPU Accelerated Chemical Similarity Calculation for Compound Library Comparison

    Science.gov (United States)

    Ma, Chao; Wang, Lirong; Xie, Xiang-Qun

    2012-01-01

    Chemical similarity calculation plays an important role in compound library design, virtual screening, and “lead” optimization. In this manuscript, we present a novel GPU-accelerated algorithm for all-vs-all Tanimoto matrix calculation and nearest neighbor search. By taking advantage of multi-core GPU architecture and CUDA parallel programming technology, the algorithm is up to 39 times faster than existing commercial software that runs on CPUs. Because of the utilization of intrinsic GPU instructions, this approach is nearly 10 times faster than an existing GPU-accelerated sparse-vector algorithm when Unity fingerprints are used for Tanimoto calculation. The GPU program that implements this new method takes about 20 minutes to complete the calculation of Tanimoto coefficients between 32M PubChem compounds and 10K Active Probes compounds, i.e., 324G Tanimoto coefficients, on a 128-CUDA-core GPU. PMID:21692447
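The Tanimoto coefficient itself is simple to state; a plain-Python sketch of the scalar kernel and the all-vs-all matrix (the doubly nested loop being the part the paper maps onto CUDA threads) might look like the following. Names are illustrative, not the paper's API.

```python
def popcount(x):
    # number of set bits in an integer used as a bit-vector fingerprint
    return bin(x).count("1")

def tanimoto(fp_a, fp_b):
    # Tanimoto similarity: |A & B| / (|A| + |B| - |A & B|)
    a, b = popcount(fp_a), popcount(fp_b)
    c = popcount(fp_a & fp_b)
    return c / (a + b - c) if (a + b - c) else 0.0

def tanimoto_matrix(fps_a, fps_b):
    # all-vs-all similarity matrix; on the GPU each (x, y) pair
    # is computed by an independent thread using popcount intrinsics
    return [[tanimoto(x, y) for y in fps_b] for x in fps_a]
```

The "intrinsic GPU instructions" the abstract credits for the speedup correspond to hardware popcount, which replaces the `bin(...).count` call here.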

  10. SU-G-TeP1-06: Fast GPU Framework for Four-Dimensional Monte Carlo in Adaptive Intensity Modulated Proton Therapy (IMPT) for Mobile Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Botas, P [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Heidelberg University, Heidelberg (Germany); Grassberger, C; Sharp, G; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Qin, N; Jia, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) treatment planning and verification using four-dimensional CT (4DCT) for adaptive IMPT for lung cancer patients. Methods: A validated GPU MC code, gPMC, has been linked to the patient database at our institution and employed to compute the dose-influence matrices (Dij) on the planning CT (pCT). The pCT is an average of the respiratory motion of the patient. The Dijs and patient structures were fed to the optimizer to calculate a treatment plan. To validate the plan against motion, a 4D dose distribution averaged over the possible starting phases is calculated using the 4DCT and a model of the time structure of the delivered spot map. The dose is accumulated using vector maps created by a GPU-accelerated deformable image registration program (DIR) from each phase of the 4DCT to the reference phase using the B-spline method. Calculation of the Dij matrices and the DIR are performed on a cluster, with each field and vector map calculated in parallel. Results: The Dij production takes ∼3.5s per beamlet for 10e6 protons, depending on the energy and the CT size. Generating a plan with 4D simulation of 1000 spots in 4 fields takes approximately 1h. To test the framework, IMPT plans for 10 lung cancer patients were generated for validation. Differences between the planned and the delivered dose of 19% in dose to some organs at risk and 1.4/21.1% in target mean dose/homogeneity with respect to the plan were observed, suggesting potential for improvement if adaptation is considered. Conclusion: A fast MC treatment planning framework has been developed that allows reliable plan design and verification for mobile targets and adaptation of treatment plans. This will significantly impact treatments for lung tumors, as 4D-MC dose calculations can now become part of planning strategies.

  11. On system behaviour using complex networks of a compression algorithm

    Science.gov (United States)

    Walker, David M.; Correa, Debora C.; Small, Michael

    2018-01-01

    We construct complex networks of scalar time series using a data compression algorithm. The structure and statistics of the resulting networks can be used to help characterize complex systems, and one property, in particular, appears to be a useful discriminating statistic in surrogate data hypothesis tests. We demonstrate these ideas on systems with known dynamical behaviour and also show that our approach is capable of identifying behavioural transitions within electroencephalogram recordings as well as changes due to a bifurcation parameter of a chaotic system. The technique we propose is dependent on a coarse grained quantization of the original time series and therefore provides potential for a spatial scale-dependent characterization of the data. Finally the method is as computationally efficient as the underlying compression algorithm and provides a compression of the salient features of long time series.
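A minimal sketch of the coarse-grained quantization step described above: map the series onto a small symbol alphabet, group symbols into words, and count word-to-word transitions as weighted network edges. This is our simplified illustration of the general idea, not the authors' compression-algorithm construction.

```python
import numpy as np
from collections import Counter

def quantize(x, levels):
    # coarse-grain the series into equal-width amplitude bins
    edges = np.linspace(min(x), max(x), levels + 1)[1:-1]
    return [int(np.searchsorted(edges, v)) for v in x]

def word_transition_network(x, levels=4, w=2):
    # nodes are length-w symbol words; edge weights count how often
    # one word is immediately followed by the next
    sym = quantize(x, levels)
    words = [tuple(sym[i:i + w]) for i in range(len(sym) - w + 1)]
    return Counter(zip(words[:-1], words[1:]))
```

Varying `levels` changes the coarse-graining scale, which is the scale-dependence the abstract points to.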

  12. HVS-based quantization steps for validation of digital cinema extended bitrates

    Science.gov (United States)

    Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.

    2009-02-01

    In Digital Cinema, video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify the transport, storage, distribution and projection of films. For all those tasks, equipment needs to be developed, and it is thus mandatory to reduce the complexity of that equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream at 250 Mbps, independently of the input format (4K/24fps, 2K/48fps or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double or quadruple the input rate without increasing the output rate. The work presented here is intended to define quantization steps that ensure visually lossless compression. Two steps are followed: first the effect of each subband is evaluated separately, and then the scaling ratio is found. The results show that the bitrate limit for cinema material must be increased in order to achieve visually lossless quality.

  13. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    Science.gov (United States)

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
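The sensitivity to the number of gray levels reported above is easy to reproduce in miniature: quantize the same image to different numbers of levels, build a gray-level co-occurrence matrix (GLCM), and compare a Haralick feature such as contrast. A NumPy sketch with illustrative names, assuming uniform quantization between the image minimum and maximum:

```python
import numpy as np

def quantize_image(img, n_levels):
    # uniform quantization of intensities into n_levels gray levels
    lo, hi = img.min(), img.max()
    q = np.floor((img - lo) / (hi - lo + 1e-12) * n_levels).astype(int)
    return np.clip(q, 0, n_levels - 1)

def glcm(q, n_levels, dx=1, dy=0):
    # normalized gray-level co-occurrence matrix for one pixel offset
    p = np.zeros((n_levels, n_levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[q[y, x], q[y + dy, x + dx]] += 1
    return p / p.sum()

def contrast(p):
    # Haralick contrast: sum over (i - j)^2 * p(i, j)
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

Running `contrast` on the same image quantized to 4 versus 8 levels gives different values, which is exactly why the study recommends fixing the quantization method and number of gray levels across subjects.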

  14. A Quantized Boundary Representation of 2D Flows

    KAUST Repository

    Levine, J. A.

    2012-06-01

    Analysis and visualization of complex vector fields remain major challenges when studying large scale simulation of physical phenomena. The primary reason is the gap between the concepts of smooth vector field theory and their computational realization. In practice, researchers must choose between either numerical techniques, with limited or no guarantees on how they preserve fundamental invariants, or discrete techniques which limit the precision at which the vector field can be represented. We propose a new representation of vector fields that combines the advantages of both approaches. In particular, we represent a subset of possible streamlines by storing their paths as they traverse the edges of a triangulation. Using only a finite set of streamlines creates a fully discrete version of a vector field that nevertheless approximates the smooth flow up to a user controlled error bound. The discrete nature of our representation enables us to directly compute and classify analogues of critical points, closed orbits, and other common topological structures. Further, by varying the number of divisions (quantizations) used per edge, we vary the resolution used to represent the field, allowing for controlled precision. This representation is compact in memory and supports standard vector field operations.

  15. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
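The block-level model can be made concrete: if PSNR is modelled as a quadratic in the quantization bit-depth, the optimum lies where the derivative vanishes. A sketch under the assumption of a concave model fitted by least squares; parameter names and the clamping range are ours, not the paper's:

```python
import numpy as np

def fit_psnr_model(bit_depths, psnrs):
    # least-squares fit of PSNR(b) ~ a*b^2 + c*b + d, as in the
    # paper's training stage (returns a, c, d)
    return np.polyfit(bit_depths, psnrs, 2)

def optimal_bit_depth(a, c, b_min=1.0, b_max=12.0):
    # zero of the derivative 2*a*b + c, clamped to a usable range
    # (assumes a < 0, i.e. a concave model)
    return float(min(max(-c / (2.0 * a), b_min), b_max))
```

In practice the fitted model would be refreshed with local information during testing, as the abstract describes; here the fit is one-shot for clarity.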

  16. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    Science.gov (United States)

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  17. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Directory of Open Access Journals (Sweden)

    Hamza Djelouat

    2017-01-01

    The last decade has witnessed tremendous efforts to shape Internet of Things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and robustness of the recovery process.
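A much-simplified sketch of the subspace pursuit recovery loop (single channel, without the paper's joint sparsity or adaptive selection) conveys the expand-then-prune idea: keep a size-K support estimate, enlarge it with the K strongest residual correlations, solve a least-squares fit, then prune back to the K largest coefficients.

```python
import numpy as np

def subspace_pursuit(Phi, y, K, n_iter=5):
    # simplified subspace pursuit for y = Phi @ x with K-sparse x
    n = Phi.shape[1]
    # initial support: K columns most correlated with y
    support = np.argsort(np.abs(Phi.T @ y))[-K:]
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
        # expand with the K strongest residual correlations
        cand = np.union1d(support, np.argsort(np.abs(Phi.T @ resid))[-K:])
        coef, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        # prune back to the K largest coefficients
        support = cand[np.argsort(np.abs(coef))[-K:]]
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(n)
    x[support] = coef
    return x
```

The paper's adaptive-selection modification changes how the candidate set is chosen per iteration; the skeleton above is the unmodified SP baseline it builds on.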

  18. Feature Vector Construction Method for IRIS Recognition

    Science.gov (United States)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods considered in recognition accuracy on both datasets.
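The quantization-with-fragility-threshold step can be sketched as quadrant quantization of Gabor phase plus a mask that flags bits whose phase lies near a decision boundary; masked bits are then excluded from the Hamming comparison. The margin value and all names below are illustrative assumptions, not the paper's optimized thresholds.

```python
import numpy as np

def iris_code(phase, margin=0.2):
    # 2-bit quadrant quantization of Gabor phase, plus a fragility
    # mask for bits within `margin` radians of a quadrant boundary
    b0 = (np.cos(phase) >= 0).astype(np.uint8)
    b1 = (np.sin(phase) >= 0).astype(np.uint8)
    r = np.mod(phase, np.pi / 2)
    fragile = np.minimum(r, np.pi / 2 - r) < margin
    return b0, b1, fragile

def masked_hamming(code_a, code_b):
    # fractional Hamming distance over bits both codes deem stable
    (a0, a1, fa), (b0, b1, fb) = code_a, code_b
    valid = ~(fa | fb)
    if valid.sum() == 0:
        return 1.0
    diff = ((a0 != b0) | (a1 != b1)) & valid
    return diff.sum() / valid.sum()
```

In the paper's terms, the margin plays the role of a fragility threshold; optimizing it separately per instability source is the novelty the abstract describes.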

  19. On gauge fixing and quantization of constrained Hamiltonian systems

    International Nuclear Information System (INIS)

    Dayi, O.F.

    1989-06-01

    In constrained Hamiltonian systems which possess first class constraints some subsidiary conditions should be imposed for detecting physical observables. This issue and quantization of the system are clarified. It is argued that the reduced phase space and Dirac method of quantization, generally, differ only in the definition of the Hilbert space one should use. For the dynamical systems possessing second class constraints the definition of physical Hilbert space in the BFV-BRST operator quantization method is different from the usual definition. (author). 18 refs

  20. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  1. Self-Regular Black Holes Quantized by means of an Analogue to Hydrogen Atoms

    CERN Document Server

    Liu, Chang; Wu, Yu-Mei; Zhang, Yu-Hao

    2016-01-01

    We suggest a proposal of quantization for black holes that is based on an analogy between a black hole and a hydrogen atom. A self-regular Schwarzschild-AdS black hole is investigated, where the mass density of the extreme black hole is given by the probability density of the ground state of hydrogen atoms and the mass densities of non-extreme black holes are chosen to be the probability densities of excited states with no angular momenta. Consequently, it is logical to accept quantization of mean radii of hydrogen atoms as that of black hole horizons. In this way, quantization of total black hole masses is deduced. Furthermore, the quantum hoop conjecture and the Correspondence Principle are discussed.

  2. Towards Self-adaptation for Dependable Service-Oriented Systems

    Science.gov (United States)

    Cardellini, Valeria; Casalicchio, Emiliano; Grassi, Vincenzo; Lo Presti, Francesco; Mirandola, Raffaela

    Increasingly complex information systems operating in dynamic environments ask for management policies able to deal intelligently and autonomously with problems and tasks. An attempt to deal with these aspects can be found in the Service-Oriented Architecture (SOA) paradigm that foresees the creation of business applications from independently developed services, where services and applications build up complex dependencies. Therefore the dependability of SOA systems strongly depends on their ability to self-manage and adapt themselves to cope with changes in the operating conditions and to meet the required dependability with a minimum of resources. In this paper we propose a model-based approach to the realization of self-adaptable SOA systems, aimed at the fulfillment of dependability requirements. Specifically, we provide a methodology driving the system adaptation and we discuss the architectural issues related to its implementation. To bring this approach to fruition, we developed a prototype tool and we show the results that can be achieved with a simple example.

  3. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
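One common way to realize correlation-based reliability checking (a simplification of the paper's multi-stage classification, correction, and interpolation pipeline) is to compare each motion vector with the median of its neighbourhood and replace strong outliers. The threshold and names below are illustrative:

```python
import numpy as np

def mv_reliability(mv_field, thresh=2.0):
    # flag motion vectors far from their 3x3 neighbourhood median
    # and replace them with that median (illustrative correction rule)
    h, w, _ = mv_field.shape
    out = mv_field.astype(float).copy()
    reliable = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            ys = slice(max(y - 1, 0), min(y + 2, h))
            xs = slice(max(x - 1, 0), min(x + 2, w))
            med = np.median(mv_field[ys, xs].reshape(-1, 2), axis=0)
            if np.linalg.norm(mv_field[y, x] - med) > thresh:
                reliable[y, x] = False
                out[y, x] = med
    return out, reliable
```

The paper goes further by correcting vectors gradually in order of reliability, which lets it isolate occlusion regions where no neighbouring motion can be trusted; this sketch shows only the single-pass core.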

  4. Texture analysis of

    NARCIS (Netherlands)

    Lubsch, A.; Timmermans, K.

    2017-01-01

    Texture analysis is a method to test the physical properties of a material by tension and compression. The growing interest in commercialisation of seaweeds for human food has stimulated research into the physical properties of seaweed tissue. These are important parameters for the survival of

  5. Detecting double compression of audio signal

    Science.gov (United States)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays in our daily life; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to a higher bitrate, since high-bitrate files have higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed from the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
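The feature extraction is straightforward to sketch: collect the first significant digits of the nonzero quantized MDCT coefficients and form a normalized 9-bin histogram, which then feeds a classifier such as an SVM. Function names are ours; the digit extraction via scientific-notation formatting is just one robust way to do it.

```python
import numpy as np

def first_digit_hist(coeffs):
    # normalized histogram of first significant digits (1..9) of the
    # nonzero coefficients; double compression perturbs this
    # Benford-style distribution
    digits = [int(f"{abs(c):e}"[0]) for c in coeffs if c != 0]
    if not digits:
        return np.zeros(9)
    hist = np.bincount(digits, minlength=10)[1:10].astype(float)
    return hist / len(digits)
```

Concatenating such histograms over subbands would give the feature vector; training the SVM on singly- versus doubly-compressed examples is the part this sketch leaves out.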

  6. ''Massless'' vector field in de Sitter universe

    International Nuclear Information System (INIS)

    Garidi, T.; Gazeau, J.-P.; Rouhani, S.; Takook, M. V.

    2008-01-01

    We proceed to the quantization of the massless vector field in the de Sitter (dS) space. This work is the natural continuation of a previous article devoted to the quantization of the dS massive vector field [J. P. Gazeau and M. V. Takook, J. Math. Phys. 41, 5920 (2000); T. Garidi et al., ibid. 43, 6379 (2002).] The term ''massless'' is used by reference to conformal invariance and propagation on the dS lightcone whereas ''massive'' refers to those dS fields which unambiguously contract to Minkowskian massive fields at zero curvature. Due to the combined occurrences of gauge invariance and indefinite metric, the covariant quantization of the massless vector field requires an indecomposable representation of the de Sitter group. We work with the gauge fixing corresponding to the simplest Gupta-Bleuler structure. The field operator is defined with the help of coordinate-independent de Sitter waves (the modes). The latter are simple to manipulate and most adapted to group theoretical approaches. The physical states characterized by the divergencelessness condition are, for instance, easy to identify. The whole construction is based on analyticity requirements in the complexified pseudo-Riemannian manifold for the modes and the two-point function

  7. ``Massless'' vector field in de Sitter universe

    Science.gov (United States)

    Garidi, T.; Gazeau, J.-P.; Rouhani, S.; Takook, M. V.

    2008-03-01

    We proceed to the quantization of the massless vector field in the de Sitter (dS) space. This work is the natural continuation of a previous article devoted to the quantization of the dS massive vector field [J. P. Gazeau and M. V. Takook, J. Math. Phys. 41, 5920 (2000); T. Garidi et al., ibid. 43, 6379 (2002).] The term ``massless'' is used by reference to conformal invariance and propagation on the dS lightcone whereas ``massive'' refers to those dS fields which unambiguously contract to Minkowskian massive fields at zero curvature. Due to the combined occurrences of gauge invariance and indefinite metric, the covariant quantization of the massless vector field requires an indecomposable representation of the de Sitter group. We work with the gauge fixing corresponding to the simplest Gupta-Bleuler structure. The field operator is defined with the help of coordinate-independent de Sitter waves (the modes). The latter are simple to manipulate and most adapted to group theoretical approaches. The physical states characterized by the divergencelessness condition are, for instance, easy to identify. The whole construction is based on analyticity requirements in the complexified pseudo-Riemannian manifold for the modes and the two-point function.

  8. Compressive pre-stress effects on magnetostrictive behaviors of highly textured Galfenol and Alfenol thin sheets

    Directory of Open Access Journals (Sweden)

    Julia R. Downing

    2017-05-01

    Fe-Ga (Galfenol) and Fe-Al (Alfenol) are rare-earth-free magnetostrictive alloys with mechanical robustness and strong magnetoelastic coupling. Since highly textured Galfenol and Alfenol thin sheets along orientations have been developed with magnetostrictive performances of ∼270 ppm and ∼160 ppm, respectively, they have been of great interest in sensor and energy harvesting applications. In this work, we investigate stress-dependent magnetostrictive behaviors in highly textured rolled sheets of NbC-added Fe80Al20 and Fe81Ga19 alloys with a single (011) grain coverage of ∼90%. A compact fixture was designed and used to introduce a uniform compressive pre-stress to those thin sheet samples along a [100] direction. As compressive pre-stress was increased to above 100 MPa, the maximum observed magnetostriction increased 42% in parallel magnetostriction along the stress direction, λ//, in highly textured (011) Fe81Ga19 thin sheets for a compressive pre-stress of 60 MPa. The same phenomena were observed for (011) Fe80Al20 (maximum increase of 88% with a 49 MPa compressive stress). This trend is shown to be consistent with published results on the effect of pre-stress on magnetostriction in rods of single crystal and textured polycrystalline Fe-Ga alloy of similar compositions, and single crystal data gathered using our experimental set up. Interestingly, the saturating field (Hs) does not vary with pre-stresses, while the saturating field in rod-shaped samples of Fe-Ga increases with an increase of pre-stress. This suggests that for a range of compressive pre-stresses, thin sheet samples have larger values of d33 transduction coefficients and susceptibility than rod-shaped samples of similar alloy compositions, and hence they should provide performance benefits when used in sensor and actuator device applications. Thus, we discuss potential reasons for the unexpected trends in Hs with pre-stress, and present preliminary results from tests conducted

  9. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2001-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...

  10. Beamspace Adaptive Beamforming for Hydrodynamic Towed Array Self-Noise Cancellation

    National Research Council Canada - National Science Library

    Premus, Vincent

    2000-01-01

    ... against signal self-nulling associated with steering vector mismatch. Particular attention is paid to the definition of white noise gain as the metric that reflects the level of mainlobe adaptive nulling for an adaptive beamformer...

  11. Mimicking human texture classification

    NARCIS (Netherlands)

    Rogowitz, B.E.; van Rikxoort, Eva M.; van den Broek, Egon; Pappas, T.N.; Schouten, Theo E.; Daly, S.J.

    2005-01-01

    In an attempt to mimic human (colorful) texture classification by a clustering algorithm three lines of research have been encountered, in which as test set 180 texture images (both their color and gray-scale equivalent) were drawn from the OuTex and VisTex databases. First, a k-means algorithm was

  12. Deformation quantization of principal fibre bundles

    International Nuclear Information System (INIS)

    Weiss, S.

    2007-01-01

    Deformation quantization is an algebraic but still geometrical way to define noncommutative spacetimes. In order to investigate corresponding gauge theories on such spaces, the geometrical formulation in terms of principal fibre bundles yields the appropriate framework. In this talk I will explain what should be understood by a deformation quantization of principal fibre bundles and how associated vector bundles arise in this context. (author)

  13. A physically motivated quantization of the electromagnetic field

    International Nuclear Information System (INIS)

    Bennett, Robert; Barlow, Thomas M; Beige, Almut

    2016-01-01

    The notion that the electromagnetic field is quantized is usually inferred from observations such as the photoelectric effect and the black-body spectrum. However accounts of the quantization of this field are usually mathematically motivated and begin by introducing a vector potential, followed by the imposition of a gauge that allows the manipulation of the solutions of Maxwell’s equations into a form that is amenable for the machinery of canonical quantization. By contrast, here we quantize the electromagnetic field in a less mathematically and more physically motivated way. Starting from a direct description of what one sees in experiments, we show that the usual expressions of the electric and magnetic field observables follow from Heisenberg’s equation of motion. In our treatment, there is no need to invoke the vector potential in a specific gauge and we avoid the commonly used notion of a fictitious cavity that applies boundary conditions to the field. (paper)

  14. Classification of Laser Induced Fluorescence Spectra from Normal and Malignant bladder tissues using Learning Vector Quantization Neural Network in Bladder Cancer Diagnosis

    DEFF Research Database (Denmark)

    Karemore, Gopal Raghunath; Mascarenhas, Kim Komal; Patil, Choudhary

    2008-01-01

    In the present work we discuss the potential of recently developed classification algorithm, Learning Vector Quantization (LVQ), for the analysis of Laser Induced Fluorescence (LIF) Spectra, recorded from normal and malignant bladder tissue samples. The algorithm is prototype based and inherently...

  15. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Lo, S.C.; Huang, H.K.

    1986-01-01

    The full-frame bit-allocation algorithm for radiological image compression can achieve an acceptable compression ratio as high as 30:1. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transformed space, with pixel depth kept accountable by a bit-allocation table. The cosine transform hardware design took an expandable modular approach based on the VME bus system with a maximum data transfer rate of 48 Mbytes/sec and a microprocessor (Motorola 68000 family). The modules are cascadable and microprogrammable to perform 1,024-point butterfly operations. A total of 18 stages would be required for transforming a 1,000 x 1,000 image. Multiplicative constants and addressing sequences are to be software loaded into the parameter buffers of each stage prior to streaming data through the processor stages. The compression rate for 1K x 1K images is expected to be faster than one image per second.
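
    A minimal sketch of this two-stage pipeline (full-frame 2D DCT followed by quantization governed by a bit-allocation table) is given below; the uniform quantizer and the table layout are illustrative assumptions, not the original VME hardware design:

```python
import numpy as np
from scipy.fft import dctn, idctn

def bit_allocation_compress(img, bits):
    # Stage 1: full-frame two-dimensional discrete cosine transform.
    coef = dctn(img.astype(float), norm="ortho")
    # Stage 2: uniform quantization; the bit-allocation table `bits`
    # gives the pixel depth kept for each transform coefficient.
    amax = np.abs(coef).max()
    levels = 2.0 ** bits
    step = 2.0 * amax / np.maximum(levels - 1.0, 1.0)
    q = np.round(coef / step)
    q[bits == 0] = 0.0          # zero allocated bits -> coefficient dropped
    return q, step

def bit_allocation_decompress(q, step):
    # Dequantize and invert the transform.
    return idctn(q * step, norm="ortho")
```

    With a generous allocation the reconstruction is near-lossless; coarser tables trade error for rate.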

  16. A new Self-Adaptive disPatching System for local clusters

    Science.gov (United States)

    Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng

    2015-12-01

    The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It improves cluster resource utilization and overall task throughput, and provides some extra functions for administrators and users. First of all, in order to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queueing jobs and the idle job slots, and then tunes the priority of users’ jobs dynamically, so that more jobs run and fewer job slots sit idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor, and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management work for both administrators and users has been reduced greatly.
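
    The priority-tuning step is described only qualitatively above; a toy sketch of the idea (boost the priority of users with queued jobs while slots sit idle), with hypothetical `base` and `step` parameters, might look like:

```python
def tune_priorities(queue_depth_by_user, idle_slots, base=100, step=10):
    """Raise the priority of users with queued jobs while slots sit idle,
    so the scheduler backfills them; users with empty queues keep the
    default priority. A hypothetical sketch, not the SAPS implementation."""
    priorities = {}
    for user, depth in queue_depth_by_user.items():
        # Boost grows with queue depth, capped by the idle capacity.
        boost = step * min(depth, idle_slots)
        priorities[user] = base + boost
    return priorities
```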

  17. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Full Text Available Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
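
    The incremental LS-SVM formulation itself is not reproduced in the abstract; as a stand-in, the flavour of incremental least-squares learning can be sketched with a recursive least-squares update for a linear classifier, where each new sample refines the weights without retraining on stored data (an illustrative analogue, not the paper's algorithm):

```python
import numpy as np

class IncrementalLSSVM:
    """Recursive least-squares sketch of online LS-SVM-style updates:
    each new (feature, label) pair refines the ridge-regularized weights
    in O(d^2) without revisiting earlier samples."""
    def __init__(self, dim, lam=1.0):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / lam      # inverse regularized Gram matrix

    def update(self, x, y):
        x = np.asarray(x, float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w += k * (y - x @ self.w)  # correct towards the new label
        self.P -= np.outer(k, Px)       # rank-one downdate of the inverse

    def predict(self, x):
        return 1 if np.asarray(x, float) @ self.w >= 0 else -1
```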

  18. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    Science.gov (United States)

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, there are still many difficulties because of the lack of sufficient protein structural and functional information. It is highly desired to develop methods based only on amino acid sequences for predicting PPIs. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower but more condensed space, taking the sparsity property of the original signal into account. What makes compressed sensing much more attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than what is usually considered necessary in traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
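
    The abstract does not give the concrete compression operator; a generic compressed-sensing sketch (a random Gaussian projection of a sparse feature vector, recovered with orthogonal matching pursuit — standard components, not necessarily the authors' exact choices) illustrates why far fewer measurements than dimensions can suffice:

```python
import numpy as np

def compress(x, Phi):
    # Project an n-dimensional sparse feature vector to m << n measurements.
    return Phi @ x

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily recover a k-sparse x from y = Phi x
    # by picking the atom most correlated with the residual, then refitting.
    residual, support = y.copy(), []
    coef = np.empty(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```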

  19. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    Science.gov (United States)

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  20. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    Directory of Open Access Journals (Sweden)

    Shaowu Pan

    2015-04-01

    Full Text Available A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
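
    The second-order motion model used to steer the CT tracker's candidate-patch sampling can be sketched as a constant-acceleration Kalman filter; the noise covariances below are illustrative guesses, not the paper's tuning:

```python
import numpy as np

class Kalman2ndOrder:
    """Constant-acceleration Kalman filter: predicts where the target
    will be, so the CT tracker can sample candidate patches there."""
    def __init__(self, dt=1.0):
        self.x = np.zeros(6)            # state [px, py, vx, vy, ax, ay]
        F = np.eye(6)
        for i in range(2):              # kinematics per axis
            F[i, i + 2] = dt
            F[i, i + 4] = 0.5 * dt * dt
            F[i + 2, i + 4] = dt
        self.F = F
        self.H = np.zeros((2, 6))       # we only measure position
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.P = np.eye(6) * 10.0       # state covariance
        self.Q = np.eye(6) * 1e-3       # process noise (assumed)
        self.R = np.eye(2) * 1e-2       # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]               # predicted patch centre

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```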

  1. Consensus of second-order multi-agent dynamic systems with quantized data

    Energy Technology Data Exchange (ETDEWEB)

    Guan, Zhi-Hong, E-mail: zhguan@mail.hust.edu.cn [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074 (China); Meng, Cheng [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074 (China); Liao, Rui-Quan [Petroleum Engineering College,Yangtze University, Jingzhou, 420400 (China); Zhang, Ding-Xue, E-mail: zdx7773@163.com [Petroleum Engineering College,Yangtze University, Jingzhou, 420400 (China)

    2012-01-09

    The consensus problem of second-order multi-agent systems with quantized links is investigated in this Letter. Some conditions are derived for the quantized consensus of the second-order multi-agent systems by the stability theory. Moreover, a result characterizing the relationship between the eigenvalues of the Laplacian matrix and the quantized consensus is obtained. Examples are given to illustrate the theoretical analysis. -- Highlights: ► A second-order multi-agent model with quantized data is proposed. ► Two necessary and sufficient conditions are obtained. ► The relationship between the eigenvalues of the Laplacian matrix and the quantized consensus is discovered.

  2. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than the pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.
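
    The per-block decision rule (4x4 DCT, AC energy against a threshold) can be sketched as follows; the threshold value is an assumption for illustration, not the paper's calibrated setting:

```python
import numpy as np
from scipy.fft import dctn

def classify_block(block, thresh=1000.0):
    # Energy in the AC coefficients of a 4x4 DCT separates sharp-edged
    # text/graphics blocks from smooth picture/background blocks.
    c = dctn(block.astype(float), norm="ortho")
    ac_energy = float((c ** 2).sum() - c[0, 0] ** 2)   # total minus DC
    return "text/graphics" if ac_energy > thresh else "picture/background"
```

    A flat block concentrates all its energy in the DC coefficient, while a high-contrast text-like block spreads energy across the AC coefficients.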

  3. Inequivalent quantizations and fundamentally perfect spaces

    International Nuclear Information System (INIS)

    Imbo, T.D.; Sudarshan, E.C.G.

    1987-06-01

    We investigate the problem of inequivalent quantizations of a physical system with multiply connected configuration space X. For scalar quantum theory on X we show that state vectors must be single-valued if and only if the first homology group H 1 (X) is trivial, or equivalently the fundamental group π 1 (X) is perfect. The θ-structure of quantum gauge and gravitational theories is discussed in light of this result

  4. On the quantization of classically chaotic system

    International Nuclear Information System (INIS)

    Godoy, N.F. de.

    1988-01-01

    Some properties of a quantization, in terms of observables, of a classically chaotic system exhibiting a strange attractor are studied. It is shown in particular that suitable expected values of some observables have the correct classical limit, and that in these cases the limits ℏ → 0 and t → ∞ (t = time) rigorously commute. This model was alternatively quantized by R. Graham in terms of the Wigner function. Graham's analysis is completed on a few points; in particular, we find a remarkable analogy with general results about the semi-classical limit of the Wigner function. Finally, the expected values obtained by both methods of quantization are compared. (author)

  5. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    Science.gov (United States)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional part after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
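
    The Lagrangian selection rule can be sketched as follows; the toy quantizer and the crude rate proxy are illustrative assumptions (a real encoder measures rate with its entropy coder), but the step-doubling-every-6-QP relation matches H.264/AVC:

```python
import numpy as np

def rd_cost(res, qp, lmbda):
    # Toy quantizer: in H.264/AVC the quantization step roughly doubles
    # every 6 QP; the rate term is a crude proxy, not a real entropy coder.
    step = 0.625 * 2.0 ** (qp / 6.0)
    q = np.round(res / step)
    distortion = float(((res - q * step) ** 2).sum())
    rate = float(np.abs(q).sum() + np.count_nonzero(q))
    return distortion + lmbda * rate

def best_qp(res, qps, lmbda):
    # Lagrangian selection: pick the QP minimizing J = D + lambda * R.
    return min(qps, key=lambda qp: rd_cost(res, qp, lmbda))
```

    A small multiplier favours fidelity (low QP); a large one favours low rate (high QP).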

  6. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    Science.gov (United States)

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.

  7. Emotional effects of dynamic textures

    NARCIS (Netherlands)

    Toet, A.; Henselmans, M.; Lucassen, M.P.; Gevers, T.

    2011-01-01

    This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely

  8. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with an efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression-ratio benefit (measured up to 91%).
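
    The core RBUC idea (per-block bit widths for non-uniform data) can be sketched with a simplified one-level size calculation; the 8-bit width header and the assumption of non-negative residuals are simplifications (signed residuals would first need a zigzag mapping), not the paper's exact layout:

```python
def rbuc_size_bits(residuals, block=8):
    """Bits used by a simplified one-level RBUC-style layout: one 8-bit
    width header per block, then every value in the block packed with
    the bit width of the block's widest value. Residuals are assumed
    non-negative integers."""
    total = 0
    for i in range(0, len(residuals), block):
        chunk = residuals[i:i + block]
        width = max((v.bit_length() for v in chunk), default=0)
        total += 8 + len(chunk) * width
    return total
```

    Because each block adapts its width to its own maximum, runs of small residuals cost far fewer bits than a fixed-width encoding.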

  9. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  10. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    Science.gov (United States)

    Caplan, R. M.

    2013-04-01

    We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time

  11. Image Vector Quantization codec indexes filtering

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay Abdelmounaim

    2012-01-01

    Full Text Available Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding, due to its fast decoding efficiency. However, the indexes of VQ are sometimes lost because of signal interference during the transmission. In this paper, we propose an efficient estimation method to conceal and recover the lost indexes on the decoder side, to avoid re-transmitting the whole image again. If the image or video has a limited period of validity, re-transmitting the data wastes time and network bandwidth. Therefore, using the originally received correct data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; since VQ partitions the image into sub-blocks and quantises them to the indexes that are transmitted, the correlation between adjacent indexes is very strong. The proposed method has two parts: the first is pre-processing and the second is an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmit it without any loss of index can achieve a PSNR of 30.429 dB on the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
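
    The estimation step can be sketched with a median-of-neighbours filter over the index grid; this stands in for the paper's "special filtering method" (the exact filter is not given in the abstract) and relies on the neighbour correlation created by reordering the codebook:

```python
def conceal(index_grid):
    """Replace lost VQ indexes (marked None) by the median of their
    available 4-neighbours; a sketch of index concealment, assuming the
    codebook has been reordered so neighbouring indexes are correlated."""
    h, w = len(index_grid), len(index_grid[0])
    out = [row[:] for row in index_grid]
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                nbrs = [index_grid[j][i]
                        for j, i in ((y - 1, x), (y + 1, x),
                                     (y, x - 1), (y, x + 1))
                        if 0 <= j < h and 0 <= i < w
                        and index_grid[j][i] is not None]
                nbrs.sort()
                out[y][x] = nbrs[len(nbrs) // 2] if nbrs else 0
    return out
```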

  12. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    Science.gov (United States)

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Human Adaptive Mechatronics and Human-System Modelling

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2013-03-01

    Full Text Available Several topics in projects for mechatronics studies, which are 'Human Adaptive Mechatronics (HAM' and 'Human-System Modelling (HSM', are presented in this paper. The main research theme of the HAM project is a design strategy for a new intelligent mechatronics system, which enhances operators' skills during machine operation. Skill analyses and control system design have been addressed. In the HSM project, human modelling based on hierarchical classification of skills was studied, including the following five types of skills: social, planning, cognitive, motion and sensory-motor skills. This paper includes digests of these research topics and the outcomes concerning each type of skill. Relationships with other research activities, knowledge and information that will be helpful for readers who are trying to study assistive human-mechatronics systems are also mentioned.

  14. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    International Nuclear Information System (INIS)

    Nutku, Yavuz

    2003-01-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems

  15. Irreversible data compression concepts with polynomial fitting in time-order of particle trajectory for visualization of huge particle system

    International Nuclear Information System (INIS)

    Ohtani, H; Ito, A M; Hagita, K; Kato, T; Saitoh, T; Takeda, T

    2013-01-01

    We propose in this paper a data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems. Our data compression concept operates directly on the particle-orbit data obtained by simulation and has the following features: (i) Through control over the compression scheme, the difference between the simulation variables and the values reconstructed for visualization from the compressed data is kept smaller than a given constant. (ii) The particles in the simulation are regarded as independent, and the time-series data for each particle are compressed with an independent time-step for that particle. (iii) A particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle, and is reconstructed as a continuous curve by interpolating the function values between the sampled data points. We name this concept ''TOKI (Time-Order Kinetic Irreversible compression)''. In this paper, we present an example of an implementation of a data-compression scheme with the above features. Several application results are shown for plasma and galaxy formation simulation data.
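
    Features (i) and (iii) can be sketched together: fit polynomial segments to a trajectory and refine until the reconstruction error stays below the prescribed bound. The greedy bisection used to choose segment boundaries is an assumption for illustration, not the TOKI implementation:

```python
import numpy as np

def fit_segments(t, x, deg=3, tol=1e-2):
    # Fit one polynomial over the span; if the reconstruction error
    # exceeds the prescribed bound, bisect the span and recurse, so the
    # stored coefficients reproduce the orbit to within `tol`.
    coef = np.polyfit(t, x, deg)
    err = np.max(np.abs(np.polyval(coef, t) - x))
    if err <= tol or len(t) <= deg + 1:
        return [(t[0], t[-1], coef)]
    mid = len(t) // 2
    return (fit_segments(t[:mid + 1], x[:mid + 1], deg, tol)
            + fit_segments(t[mid:], x[mid:], deg, tol))
```

    Only the segment boundaries and coefficients need to be stored; the trajectory is then reconstructed as a continuous curve by evaluating each segment's polynomial.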

  16. A multi-GPU implementation of a D2Q37 lattice Boltzmann code

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Scagliarini, Andrea; Schifano, S.F.; Toschi, F.; Tripiccione, R.; Wyrzykowski, R.; Dongarra, J.; Karczewski, K.; Wasniewski, J.

    2012-01-01

    We describe a parallel implementation of a compressible Lattice Boltzmann code on a multi-GPU cluster based on Nvidia Fermi processors. We analyze how to optimize the algorithm for GP-GPU architectures, describe the implementation choices that we have adopted and compare our performance results with

  17. Models for discrete-time self-similar vector processes with application to network traffic

    Science.gov (United States)

    Lee, Seungsin; Rao, Raghuveer M.; Narasimha, Rajesh

    2003-07-01

    The paper defines self-similarity for vector processes by employing the discrete-time continuous-dilation operation which has successfully been used previously by the authors to define 1-D discrete-time stochastic self-similar processes. To define self-similarity of vector processes, it is required to consider the cross-correlation functions between different 1-D processes as well as the autocorrelation function of each constituent 1-D process in it. System models to synthesize self-similar vector processes are constructed based on the definition. With these systems, it is possible to generate self-similar vector processes from white noise inputs. An important aspect of the proposed models is that they can be used to synthesize various types of self-similar vector processes by choosing proper parameters. Additionally, the paper presents evidence of vector self-similarity in two-channel wireless LAN data and applies the aforementioned systems to simulate the corresponding network traffic traces.

  18. Multimodality imaging and state-of-art GPU technology in discriminating benign from malignant breast lesions on real time decision support system

    International Nuclear Information System (INIS)

    Kostopoulos, S; Glotsos, D; Kalatzis, I; Asvestas, P; Cavouras, D; Sidiropoulos, K; Dimitropoulos, N

    2014-01-01

The aim of this study was to design a pattern recognition system for assisting the diagnosis of breast lesions, using image information from Ultrasound (US) and Digital Mammography (DM) imaging modalities. State-of-the-art computer technology was employed, based on commercial Graphics Processing Unit (GPU) cards and parallel programming. An experienced radiologist outlined breast lesions on both US and DM images from 59 patients employing a custom-designed computer software application. Textural features were extracted from each lesion and were used to design the pattern recognition system. Several classifiers were tested for highest performance in discriminating benign from malignant lesions. Classifiers were also combined into ensemble schemes for further improvement of the system's classification accuracy. Following the pattern recognition system optimization, the final system was designed employing the Probabilistic Neural Network (PNN) classifier on the GPU card (GeForce 580GTX) using the CUDA programming framework and the C++ programming language. The use of such state-of-the-art technology renders the system capable of redesigning itself on site once additional verified US and DM data are collected. A mixture of US and DM features optimized performance, with over 90% accuracy in correctly classifying the lesions.

  19. Design of a Two-level Adaptive Multi-Agent System for Malaria Vectors driven by an ontology

    Directory of Open Access Journals (Sweden)

    Etang Josiane

    2007-07-01

Full Text Available Abstract Background Understanding heterogeneities in disease transmission dynamics as far as malaria vectors are concerned is a major challenge. Many studies tackling this problem fail to find exact models that explain malaria vector propagation. Methods To solve the problem we define an Adaptive Multi-Agent System (AMAS which is elastic and organized on two levels. This AMAS is a dynamic system whose two levels are linked by an ontology, allowing it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents; at the secondary level, it consists of analysis agents. Its entry point, a User Interface Agent, can reproduce itself because it is given a minimum of background knowledge, and it learns appropriate "behavior" from the user in the presence of ambiguous queries and from other agents of the AMAS in other situations. Results Some of the outputs of our system are series of tables and diagrams showing factors such as entomological parameters of malaria transmission, percentages of malaria transmission per malaria vector, and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. Conclusion Our approach is an intelligent one that differs from the statistical approaches sometimes used in the field; it aligns itself with distributed artificial intelligence. In terms of the fight against malaria, our system reduces the effort required of human resources, who are no longer obliged to cover the entire territory when conducting surveys. Secondly, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in the geographical area. Unlike a statistical technique, the projection of its results in the field can sometimes prove to be more general.

  20. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.
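The dependence discussed above can be probed with a small experiment. The sketch below is an illustrative assumption, not the paper's actual protocol: it implements orthogonal matching pursuit (OMP), one common recovery algorithm, and applies it to a sparse vector whose non-zero entries are Gaussian-distributed. The dimensions and seed are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of A to explain y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    history = [np.linalg.norm(residual)]
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms by least squares and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        history.append(np.linalg.norm(residual))
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, history

rng = np.random.default_rng(0)
m, n, k = 32, 64, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                                # unit-norm columns
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # Gaussian non-zeros
y = A @ x
x_hat, history = omp(A, y, k)
```

By construction the residual norm never increases, since each iteration re-projects y onto a growing subspace; swapping the non-zero distribution (e.g. constant-amplitude ±1 entries) into the same harness is how the distributional dependence can be explored.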

  1. arXiv The prototype of the HL-LHC magnets monitoring system based on Recurrent Neural Networks and adaptive quantization

    CERN Document Server

    Wielgosz, Maciej; Skoczeń, Andrzej

    This paper focuses on an examination of an applicability of Recurrent Neural Network models for detecting anomalous behavior of the CERN superconducting magnets. In order to conduct the experiments, the authors designed and implemented an adaptive signal quantization algorithm and a custom GRU-based detector and developed a method for the detector parameters selection. Three different datasets were used for testing the detector. Two artificially generated datasets were used to assess the raw performance of the system whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets was intended for real-life experiments and model training. Several different setups of the developed anomaly detection system were evaluated and compared with state-of-the-art OC-SVM reference model operating on the same data. The OC-SVM model was equipped with a rich set of feature extractors accounting for a range of the input signal properties. It was determined in the course of the experiments that the detector, a...

  2. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits

    International Nuclear Information System (INIS)

    Fujii, Mikiya; Yamashita, Koichi

    2015-01-01

    We present a semiclassical quantization condition, i.e., quantum–classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller’s trace formula to a nonadiabatic form. The quantum–classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics

  3. Identifikasi Telapak Tangan menggunakan Jaringan Syaraf Tiruan Learning Vector Quantization (LVQ

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2016-11-01

Full Text Available A personal recognition system is a system for automatically identifying a person by computer, using a password, an ID card, or a PIN. However, such systems have several weaknesses: credentials can be stolen and easily duplicated, a person may forget them, and some passwords can be guessed and thus exploited by irresponsible parties. Automatic recognition of a person can instead be performed computationally, using an artificial neural network. This study implements the Learning Vector Quantization neural network method with the palm of the hand as the recognition object. The software development process model used is Waterfall, the programming language is Matlab, and the database management system is Microsoft Access. The output of the developed application is the identification of the user's palm print. Testing showed that the application achieves an accuracy of 74.66% in distinguishing one user from another.
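The LVQ method at the core of the record above is easy to sketch. The following toy example (with made-up 2-D points standing in for palm-print feature vectors) shows the LVQ1 update rule: the prototype nearest to a training sample is pulled toward it when their classes match and pushed away otherwise.

```python
import math

def nearest(prototypes, x):
    """Index of the prototype closest to sample x (Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda i: math.dist(prototypes[i][0], x))

def train_lvq1(data, prototypes, lr=0.3, epochs=20):
    """LVQ1: move the winning prototype toward (same class) or away from
    (different class) each training sample; the learning rate decays."""
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for x, label in data:
            i = nearest(prototypes, x)
            p, p_label = prototypes[i]
            sign = 1.0 if p_label == label else -1.0
            prototypes[i] = ([pj + sign * rate * (xj - pj)
                              for pj, xj in zip(p, x)], p_label)
    return prototypes

def predict(prototypes, x):
    return prototypes[nearest(prototypes, x)][1]

# Two well-separated classes of hypothetical 2-D "feature vectors".
data = [([0.0, 0.2], "A"), ([0.3, 0.1], "A"), ([0.1, 0.4], "A"),
        ([3.0, 3.1], "B"), ([2.8, 3.3], "B"), ([3.2, 2.9], "B")]
prototypes = train_lvq1(data, [([1.0, 1.0], "A"), ([2.0, 2.0], "B")])
```

After training, each prototype has migrated into its class cluster, so nearest-prototype classification separates the two classes.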

  4. Formal connections in deformation quantization

    DEFF Research Database (Denmark)

    Masulli, Paolo

The field of this thesis is deformation quantization, and we consider mainly symplectic manifolds equipped with a star product. After reviewing basics in complex geometry, we introduce quantization, focusing on geometric quantization and deformation quantization. The latter is defined as a star … characteristic class, and that formal connections form an affine space over the derivations of the star products. Moreover, if the parameter space for the family of star products is contractible, we obtain that any two flat formal connections are gauge equivalent via a self-equivalence of the family of star …

  5. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan

    2013-01-01

Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results, which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM, or perceived quality, for both types of compression provided that the retained energy after compression is the same.
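The general claim that MSE does not track structural similarity can be illustrated numerically. The sketch below uses a toy 1-D "image" (not the paper's data) and compares two distortions with identical MSE: a constant intensity shift, which preserves structure, and a sign pattern chosen to be uncorrelated with the signal, which does not. The global single-window SSIM index separates them.

```python
def mean(v):
    return sum(v) / len(v)

def var(v, m):
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(a, b, ma, mb):
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def mse(a, b):
    """Mean square error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def ssim(a, b, c1=1e-4, c2=1e-4):
    """Global (single-window) SSIM index; c1, c2 are small stabilizers."""
    ma, mb = mean(a), mean(b)
    va, vb = var(a, ma), var(b, mb)
    cab = cov(a, b, ma, mb)
    return ((2 * ma * mb + c1) * (2 * cab + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

x = [1, 2, 3, 4, 5, 6, 7, 8]                   # toy signal
shift = [v + 2 for v in x]                     # constant offset: structure kept
signs = [+1, -1, -1, +1, +1, -1, -1, +1]       # pattern uncorrelated with x
noisy = [v + 2 * s for v, s in zip(x, signs)]  # same per-pixel error energy
```

Both distortions have MSE = 4, yet the constant shift scores a higher SSIM than the structured noise, in line with the paper's point that pointwise error and perceived quality decouple.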

  6. Adaptive Texture Synthesis for Large Scale City Modeling

    Science.gov (United States)

    Despine, G.; Colleu, T.

    2015-02-01

Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to address this problem is to use high resolution terrestrial photos, but it requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  7. ROBUST CONTROL ALGORITHM FOR MULTIVARIABLE PLANTS WITH QUANTIZED OUTPUT

    Directory of Open Access Journals (Sweden)

    A. A. Margun

    2017-01-01

Full Text Available The paper deals with a robust output control algorithm for multivariable plants under disturbances. A plant is described by a system of linear differential equations with known relative degrees. Plant parameters are unknown but belong to a known closed bounded set. The plant state vector is unmeasured. The plant output is measured only via a static quantizer. The control algorithm is based on the high-gain feedback method. The developed controller provides exponential convergence of the tracking error to a bounded region. The region bounds depend on the quantizer parameters and the value of the external disturbances. Experimental validation of the proposed control algorithm is performed on a Twin Rotor MIMO System laboratory bench. This bench is a helicopter-like model with two degrees of freedom (pitch and yaw). DC motors are used as actuators. The output signals are measured via optical encoders. A mathematical model of the laboratory bench is obtained. The proposed algorithm was compared with a proportional-integral-derivative (PID) controller under output quantization. The obtained results confirm the efficiency of the proposed controller.
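The qualitative behavior described above, convergence of the error to a bounded region whose size depends on the quantizer step and the disturbance, can be reproduced in a minimal simulation. The scalar plant, gains, and disturbance below are illustrative assumptions, not the paper's Twin Rotor model.

```python
import math

def quantize(x, step=0.1):
    """Static uniform quantizer: the controller sees only q(x), not x."""
    return round(x / step) * step

def simulate(a=1.0, k=10.0, step=0.1, dt=1e-3, t_end=5.0):
    """Unstable scalar plant x' = a*x + u + d under high-gain feedback
    u = -k * q(x) acting on the quantized output only."""
    x, t, tail = 2.0, 0.0, []
    while t < t_end:
        d = 0.2 * math.sin(2 * math.pi * t)   # bounded external disturbance
        u = -k * quantize(x, step)            # control from quantized output
        x += dt * (a * x + u + d)             # forward-Euler integration step
        t += dt
        if t > t_end - 1.0:                   # record the final second
            tail.append(abs(x))
    return tail

tail = simulate()
bound = max(tail)  # residual error stays in a quantizer-dependent band
```

The state decays from its initial value and then chatters inside a small band around zero; shrinking the quantizer step or the disturbance amplitude shrinks the band, matching the stated dependence of the error bounds.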

  8. INLINING 3D RECONSTRUCTION, MULTI-SOURCE TEXTURE MAPPING AND SEMANTIC ANALYSIS USING OBLIQUE AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    D. Frommholz

    2016-06-01

Full Text Available This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences potential window objects are evaluated against geometric

  9. Self-Control and Impulsiveness in Nondieting Adult Human Females: Effects of Visual Food Cues and Food Deprivation

    Science.gov (United States)

    Forzano, Lori-Ann B.; Chelonis, John J.; Casey, Caitlin; Forward, Marion; Stachowiak, Jacqueline A.; Wood, Jennifer

    2010-01-01

    Self-control can be defined as the choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer, and impulsiveness as the opposite. Previous research suggests that exposure to visual food cues affects adult humans' self-control. Previous research also suggests that food deprivation decreases adult humans' self-control. The…

  10. Canonical quantization of so-called non-Lagrangian systems

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M. [Universidade de Sao Paulo, Instituto de Fisica, Caixa Postal 66318-CEP, Sao Paulo, S.P. (Brazil); Kupriyanov, V.G. [Universidade de Sao Paulo, Instituto de Fisica, Caixa Postal 66318-CEP, Sao Paulo, S.P. (Brazil); Tomsk State University, Physics Department, Tomsk (Russian Federation)

    2007-04-15

We present an approach to the canonical quantization of systems with equations of motion that are historically called non-Lagrangian equations. Our viewpoint of this problem is the following: despite the fact that a set of differential equations cannot be directly identified with a set of Euler-Lagrange equations, one can reformulate such a set in an equivalent first-order form that can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. It turns out that in the general case the hamiltonization and canonical quantization of such an action are non-trivial problems, since the theory involves time-dependent constraints. We adapt the general approach of hamiltonization and canonical quantization for such theories as described in D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints (Springer, Berlin, 1990) to the case under consideration. There exists an ambiguity (that cannot be reduced to the addition of a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The proposed scheme is applied to the quantization of a general quadratic theory. In addition, we consider the quantization of a damped oscillator and of a radiating point-like charge. (orig.)

  11. Canonical quantization of so-called non-Lagrangian systems

    International Nuclear Information System (INIS)

    Gitman, D.M.; Kupriyanov, V.G.

    2007-01-01

We present an approach to the canonical quantization of systems with equations of motion that are historically called non-Lagrangian equations. Our viewpoint of this problem is the following: despite the fact that a set of differential equations cannot be directly identified with a set of Euler-Lagrange equations, one can reformulate such a set in an equivalent first-order form that can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. It turns out that in the general case the hamiltonization and canonical quantization of such an action are non-trivial problems, since the theory involves time-dependent constraints. We adapt the general approach of hamiltonization and canonical quantization for such theories as described in D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints (Springer, Berlin, 1990) to the case under consideration. There exists an ambiguity (that cannot be reduced to the addition of a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The proposed scheme is applied to the quantization of a general quadratic theory. In addition, we consider the quantization of a damped oscillator and of a radiating point-like charge. (orig.)

  12. Perceptual asymmetry in texture perception.

    OpenAIRE

    Williams, D; Julesz, B

    1992-01-01

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for...

  13. Implementation and Optimization of GPU-Based Static State Security Analysis in Power Systems

    Directory of Open Access Journals (Sweden)

    Yong Chen

    2017-01-01

    Full Text Available Static state security analysis (SSSA is one of the most important computations to check whether a power system is in normal and secure operating state. It is a challenge to satisfy real-time requirements with CPU-based concurrent methods due to the intensive computations. A sensitivity analysis-based method with Graphics processing unit (GPU is proposed for power systems, which can reduce calculation time by 40% compared to the execution on a 4-core CPU. The proposed method involves load flow analysis and sensitivity analysis. In load flow analysis, a multifrontal method for sparse LU factorization is explored on GPU through dynamic frontal task scheduling between CPU and GPU. The varying matrix operations during sensitivity analysis on GPU are highly optimized in this study. The results of performance evaluations show that the proposed GPU-based SSSA with optimized matrix operations can achieve a significant reduction in computation time.

  14. Effects of texture on shear band formation in plane strain tension/compression and bending

    DEFF Research Database (Denmark)

    Kuroda, M.; Tvergaard, Viggo

    2007-01-01

In this study, effects of typical texture components observed in rolled aluminum alloy sheets on shear band formation in plane strain tension/compression and bending are systematically studied. The material response is described by a generalized Taylor-type polycrystal model, in which each grain … shear band formation in bent specimens is compared to that in the tension/compression problem. Finally, the present results are compared to previous related studies, and the efficiency of the present method for materials design in the future is discussed.

  15. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    Science.gov (United States)

    Matin, L; Li, W

    1995-01-01

The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while viewing a fully illuminated, complexly structured visual field, and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees topbackward or 20 degrees topforward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. The linear model previously developed to account for VPEL, which was based on the interaction of influences from the

  16. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

Full Text Available In order to achieve perfectly blind detection in robust watermarking, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. First, the original image is divided into non-overlapping image blocks, and decomposable coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Second, the low-frequency coefficients of the block images are selected and approximately represented as the product of a base matrix and a coefficient matrix using non-negative matrix factorization (NMF). The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the adaptively quantized robust watermark is embedded in the low-frequency coefficients of the LWT. Experimental results show that the scheme is robust against common signal processing attacks, while perfect blind detection is achieved.

  17. New quantization matrices for JPEG steganography

    Science.gov (United States)

    Yildiz, Yesna O.; Panetta, Karen; Agaian, Sos

    2007-04-01

    Modern steganography is a secure communication of information by embedding a secret-message within a "cover" digital multimedia without any perceptual distortion to the cover media, so the presence of the hidden message is indiscernible. Recently, the Joint Photographic Experts Group (JPEG) format attracted the attention of researchers as the main steganographic format due to the following reasons: It is the most common format for storing images, JPEG images are very abundant on the Internet bulletin boards and public Internet sites, and they are almost solely used for storing natural images. Well-known JPEG steganographic algorithms such as F5 and Model-based Steganography provide high message capacity with reasonable security. In this paper, we present a method to increase security using JPEG images as the cover medium. The key element of the method is using a new parametric key-dependent quantization matrix. This new quantization table has practically the same performance as the JPEG table as far as compression ratio and image statistics. The resulting image is indiscernible from an image that was created using the JPEG compression algorithm. This paper presents the key-dependent quantization table algorithm and then analyzes the new table performance.
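One way to realize a parametric key-dependent quantization table, sketched here as an assumption rather than the authors' exact construction, is to perturb the standard JPEG luminance table by small key-derived factors, so compression behavior stays close to baseline JPEG while the exact coefficients depend on a secret key.

```python
import hashlib

# Standard JPEG luminance quantization table (ITU-T T.81, Table K.1).
BASE = [
    16, 11, 10, 16,  24,  40,  51,  61,
    12, 12, 14, 19,  26,  58,  60,  55,
    14, 13, 16, 24,  40,  57,  69,  56,
    14, 17, 22, 29,  51,  87,  80,  62,
    18, 22, 37, 56,  68, 109, 103,  77,
    24, 35, 55, 64,  81, 104, 113,  92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103,  99,
]

def key_table(key: bytes, spread=0.1):
    """Derive a 64-entry table: scale each base value by a key-dependent
    factor in [1 - spread, 1 + spread] drawn from a SHA-256 keystream,
    then round and clamp to the valid range 1..255."""
    stream = b""
    counter = 0
    while len(stream) < 64:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    table = []
    for base, byte in zip(BASE, stream):
        factor = 1.0 + spread * (byte / 255.0 * 2.0 - 1.0)
        table.append(max(1, min(255, round(base * factor))))
    return table

t1 = key_table(b"secret-key")
t2 = key_table(b"another-key")
```

With a small `spread` the derived table stays within roughly 10% of the standard one, keeping compression ratio and image statistics close to ordinary JPEG, while different keys yield different tables.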

  18. On quantization of time-dependent systems with constraints

    International Nuclear Information System (INIS)

    Gadjiev, S A; Jafarov, R G

    2007-01-01

    The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from the physical equivalent theory with trivial non-stationarity

  19. On quantization of time-dependent systems with constraints

    International Nuclear Information System (INIS)

    Hadjialieva, F.G.; Jafarov, R.G.

    1993-07-01

    The Dirac method of canonical quantization of theories with second class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from physical equivalent theory with trivial nonstationarity. (author). 4 refs

  20. On quantization of time-dependent systems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Gadjiev, S A; Jafarov, R G [Institute for Physical Problems, Baku State University, AZ11 48 Baku (Azerbaijan)

    2007-03-30

    The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from the physical equivalent theory with trivial non-stationarity.

  1. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. 

  2. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis

    NARCIS (Netherlands)

    Gu, Y.H.; Katsaggelos, A.K.

    1994-01-01

    A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering

  3. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. 
This research was supported in part by NSF under Grants CCF
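
    The three-step deposition scheme above can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' CUDA code: a buffer of (voxel index, dose) records stands in for the coalesced per-thread writes of step (1), and the CPU-side accumulation for step (3); the two "streams" are invented for the example.

```python
import numpy as np

# Hypothetical sketch of the three-step dose-deposition scheme: instead of
# scattering atomic writes into the dose volume, each "GPU thread" appends
# (voxel index, dose) records to a coalesced buffer, and the dose volume is
# later accumulated on the CPU.

def deposit_to_buffer(ray_voxels, ray_doses):
    """Step 1: write (index, dose) pairs sequentially into a buffer."""
    return np.stack([ray_voxels, ray_doses], axis=1)

def accumulate_on_cpu(buffer, n_voxels):
    """Step 3: build the dose volume from the transferred buffer."""
    volume = np.zeros(n_voxels)
    idx = buffer[:, 0].astype(int)
    np.add.at(volume, idx, buffer[:, 1])  # safe for repeated voxel indices
    return volume

# Two "streams" of rays whose steps could be overlapped by pipelining.
buf1 = deposit_to_buffer(np.array([0, 2, 2]), np.array([1.0, 0.5, 0.5]))
buf2 = deposit_to_buffer(np.array([1, 2]), np.array([2.0, 1.0]))
dose = sum(accumulate_on_cpu(b, 4) for b in (buf1, buf2))
```

    The point of the reorganization is visible even here: the buffer writes are purely sequential, and all conflicting updates are resolved in one accumulation pass.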

  4. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Full Text Available Large scale city models textured with aerial images are well suited for bird-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but these require a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns such as bricks, roof tiles, doors and windows. This solution may give realistic textures, but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue, which allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the façades.

  5. Vulnerable GPU Memory Management: Towards Recovering Raw Data from GPU

    Directory of Open Access Journals (Sweden)

    Zhou Zhe

    2017-04-01

    Full Text Available According to previous reports, information could be leaked from GPU memory; however, the security implications of such a threat were mostly overlooked, because only limited information could be indirectly extracted through side-channel attacks. In this paper, we propose a novel algorithm for recovering raw data directly from the GPU memory residues of many popular applications such as Google Chrome and Adobe PDF reader. Our algorithm enables harvesting highly sensitive information, including credit card numbers and email contents, from GPU memory residues. Evaluation results also indicate that nearly all GPU-accelerated applications are vulnerable to such attacks, and adversaries can launch attacks without requiring any special privileges on both traditional multi-user operating systems and emerging cloud-computing scenarios.

  6. Visualizing vector field topology in fluid flows

    Science.gov (United States)

    Helman, James L.; Hesselink, Lambertus

    1991-01-01

    Methods of automating the analysis and display of vector field topology in general and flow topology in particular are discussed. Two-dimensional vector field topology is reviewed as the basis for the examination of topology in three-dimensional separated flows. The use of tangent surfaces and clipping in visualizing vector field topology in fluid flows is addressed.
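
    The first step in automating such topology extraction is classifying each critical point of the field from the eigenvalues of its Jacobian. A minimal sketch assuming the standard saddle/node/spiral/center taxonomy (the function and tolerances are illustrative, not from the paper):

```python
import numpy as np

# Classify a critical point of a 2D vector field by the eigenvalues of the
# field's Jacobian evaluated at that point (illustrative tolerances).

def classify_critical_point(J):
    """Classify a 2x2 Jacobian at a zero of the vector field."""
    ev = np.linalg.eigvals(J)
    if np.all(np.abs(ev.imag) > 1e-12):          # complex pair: rotation
        return "center" if np.all(np.abs(ev.real) < 1e-12) else "spiral"
    if ev.real[0] * ev.real[1] < 0:              # real, opposite signs
        return "saddle"
    return "node"                                # real, same sign

# Example: the field (u, v) = (x, -y) has Jacobian diag(1, -1) at the origin.
kind = classify_critical_point(np.array([[1.0, 0.0], [0.0, -1.0]]))
```

    Tangent curves are then integrated outward from saddles along the eigenvector directions to partition the flow into regions of qualitatively similar behaviour.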

  7. Feature-aware natural texture synthesis

    KAUST Repository

    Wu, Fuzhang

    2014-12-04

    This article presents a framework for natural texture synthesis and processing. The framework is motivated by the observation that, for examples captured in natural scenes, synthesis quality can be adversely affected if the texture elements in an example display spatially varied patterns, such as perspective distortion, the composition of different sub-textures, and variations in global color pattern caused by complex illumination. This issue is common in natural textures and is a fundamental challenge for previously developed methods. Thus, we address it from a feature point of view and propose a feature-aware approach to synthesize natural textures. The synthesis process is guided by a feature map that represents the visual characteristics of the input texture. Moreover, we present a novel adaptive initialization algorithm that can effectively avoid repetition and verbatim-copying artifacts. Our approach improves texture synthesis in many images that cannot be handled effectively with traditional technologies.

  8. QUANTIZATION OF NON-LAGRANGIAN SYSTEMS

    Czech Academy of Sciences Publication Activity Database

    Kochan, Denis

    2009-01-01

    Roč. 24, 28-29 (2009), s. 5319-5340 ISSN 0217-751X R&D Projects: GA MŠk(CZ) LC06002 Institutional research plan: CEZ:AV0Z10480505 Keywords : dissipative quantization * non-Lagrangian system * umbilical string Subject RIV: BE - Theoretical Physics Impact factor: 0.941, year: 2009

  9. Development of a psychosocial adaptation questionnaire for Chinese patients with visual impairments.

    Science.gov (United States)

    Zhang, Xiu-jie; Wang, Ai-ping

    2011-10-01

    To develop a psychosocial adaptation questionnaire for Chinese patients with visual impairments and to examine its reliability and validity. Psychosocial adaptation to disease has been studied; however, there have been few reports on the impact of visual impairment on psychosocial adaptation, and no instrument has been developed to assess psychosocial adaptation to visual impairment specifically for patients in China. Both qualitative and quantitative research methods were used. A questionnaire was developed based on the concept of psychosocial adaptation to visual impairment. Items for the questionnaire were developed by reviewing the literature and carrying out semi-structured interviews with 12 visually impaired patients. Five ophthalmologists and ten patients evaluated the content validity and face validity of the questionnaire, respectively. Convenience sampling was used to select 213 visually impaired patients in the Ophthalmology Department of the First Affiliated Hospital of China Medical University to participate in the study. Discriminative index and item-total correlation analyses were used to delete items that fell below a set criterion. Regarding construct validity, factor analysis was performed. The Self-rating Anxiety Scale (SAS), General Self-Efficacy Scale (GSES) and Self Acceptance Questionnaire (SAQ) were used to evaluate criterion validity. Cronbach's alpha coefficient was used as an index of internal consistency. To evaluate test-retest reliability, 50 patients were re-evaluated after 24 hours. A total of 204 questionnaire items were created. 22 items were deleted by discriminative index and item-total correlation before factor analysis; 38 items were entered into the model for factor analysis. Seven factors were extracted by using principal factor analysis and varimax rotation, with a cumulative contribution of 59.18%.
The correlation coefficients between the psychosocial adaptation questionnaire for visual impairment

  10. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual in the first place of each generation goes directly into the next generation, and the best individual in the second position participates in the crossover and mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks (such as cropping, JPEG compression, Gaussian low-pass filtering (3,0.5), histogram equalization, and contrast increasing (0.5,0.6)) on the watermarked image, the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high imperceptibility and robustness, and hence is a strong candidate for a practical image watermarking scheme meeting real-time requirements.
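
    The DCT-domain embedding described above can be sketched roughly as follows. This is a hedged stand-in, not the paper's scheme: a simple variance-based rule replaces the trained SVM that sets the embedding intensity, and a fixed mid-frequency coefficient replaces the GA-selected position.

```python
import numpy as np

# Sketch of bit embedding in the DCT domain of an 8x8 block. The adaptive
# intensity here is a variance proxy (the paper learns it with an SVM), and
# coefficient (3, 4) is an arbitrary illustrative embedding position.

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix(8)

def embed_bit(block, bit, base_alpha=2.0):
    alpha = base_alpha * (1.0 + block.std() / 64.0)  # adaptive-intensity proxy
    D = C @ block @ C.T
    D[3, 4] = abs(D[3, 4]) if bit else -abs(D[3, 4])
    D[3, 4] += alpha if bit else -alpha              # push away from zero
    return C.T @ D @ C                               # inverse DCT

def extract_bit(block):
    return int((C @ block @ C.T)[3, 4] >= 0)

rng = np.random.default_rng(0)
img_block = rng.uniform(0, 255, (8, 8))
recovered = [extract_bit(embed_bit(img_block, b)) for b in (0, 1, 1, 0)]
```

    Because the DCT matrix is orthonormal, the round trip is exact up to floating-point error, so the sign of the marked coefficient survives extraction.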

  11. N-body simulation for self-gravitating collisional systems with a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions

    Science.gov (United States)

    Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo

    2012-02-01

    We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with individual timesteps (Makino and Aarseth, 1992), and achieved a performance of ~20 giga floating point operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ~90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ~ 10^5 on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphics Processing Unit (GPU) cluster systems, such as one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
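
    The data-parallel structure that the AVX kernels exploit can be illustrated with NumPy vectorization. This is a sketch of direct-summation gravity, not the authors' Hermite/AVX implementation; units (G = 1), masses and softening are illustrative.

```python
import numpy as np

# All pairwise gravitational accelerations computed at once - the same
# data-parallel pattern the paper maps onto SIMD registers.

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation O(N^2) softened gravity, G = 1."""
    dr = pos[None, :, :] - pos[:, None, :]          # (N, N, 3) separations
    r2 = (dr ** 2).sum(-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # no self-interaction
    return (dr * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

# Two equal masses on the x-axis attract each other symmetrically.
pos = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
acc = accelerations(pos, np.array([1.0, 1.0]))
```
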

  12. Design of an Adaptive Human-Machine System Based on Dynamical Pattern Recognition of Cognitive Task-Load.

    Science.gov (United States)

    Zhang, Jianhua; Yin, Zhong; Wang, Rubin

    2017-01-01

    This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain the optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. The participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG and ECG related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. Then the predicted CTL is used to adaptively allocate the number of process control tasks between operator and computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the adaptive automation strategy proposed.
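
    A distinguishing property of the LSSVM classifier used above is that training reduces to a single linear solve rather than a quadratic program. A minimal sketch in the label-regression form with an RBF kernel; the data, gamma and sigma are illustrative, not the study's EEG/ECG features.

```python
import numpy as np

# Least-squares SVM sketch: solve one (n+1)x(n+1) linear system for the bias
# b and the dual weights alpha, then classify by the sign of the kernel
# expansion. Assumed RBF kernel; all hyperparameters are illustrative.

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma  # regularized kernel
    sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: np.sign(rbf(Z, X, sigma) @ alpha + b)

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.9, 3.2]])
y = np.array([-1, -1, 1, 1])
predict = lssvm_fit(X, y)
pred = predict(np.array([[0.1, 0.0], [3.1, 3.0]]))
```

    For a multi-class problem such as the study's five CTL levels, this binary rule would be wrapped in a one-vs-rest or one-vs-one scheme.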

  13. Studies in geometric quantization

    International Nuclear Information System (INIS)

    Tuynman, G.M.

    1988-01-01

    This thesis contains five chapters, of which the first, entitled 'What is prequantization, and what is geometric quantization?', is meant as an introduction to geometric quantization for the non-specialist. The second chapter, entitled 'Central extensions and physics' deals with the notion of central extensions of manifolds and elaborates and proves the statements made in the first chapter. Central extensions of manifolds occur in physics as the freedom of a phase factor in the quantum mechanical state vector, as the phase factor in the prequantization process of classical mechanics and it appears in mathematics when studying central extension of Lie groups. In this chapter the connection between these central extensions is investigated and a remarkable similarity between classical and quantum mechanics is shown. In chapter three a classical model is given for the hydrogen atom including spin-orbit and spin-spin interaction. The method of geometric quantization is applied to this model and the results are discussed. In the final chapters (4 and 5) an explicit method to calculate the operators corresponding to classical observables is given when the phase space is a Kaehler manifold. The obtained formula are then used to quantise symplectic manifolds which are irreducible hermitian symmetric spaces and the results are compared with other quantization procedures applied to these manifolds (in particular to Berezin's quantization). 91 refs.; 3 tabs

  14. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Anthony B., E-mail: acosta@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Green, Jason R., E-mail: jason.green@umb.edu [Department of Chemistry, Northwestern University, Evanston, IL 60208 (United States); Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125 (United States)

    2013-08-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
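
    The Gram–Schmidt re-orthonormalization at the heart of these computations is usually implemented as a repeated QR decomposition (Benettin's method): the log of the diagonal of R accumulates the Lyapunov spectrum. A small NumPy sketch on a linear toy map with a known spectrum; the paper's Scalapack/MAGMA codes perform this same step on much larger matrices.

```python
import numpy as np

# Benettin-style Lyapunov spectrum via repeated QR: evolve a set of tangent
# vectors, re-orthonormalize each step, accumulate log(diag(R)).

def lyapunov_spectrum(jacobian, steps):
    n = jacobian.shape[0]
    Q = np.eye(n)
    logs = np.zeros(n)
    for _ in range(steps):
        Q, R = np.linalg.qr(jacobian @ Q)
        sign = np.sign(np.diag(R))
        Q, R = Q * sign, R * sign[:, None]   # enforce positive diag(R)
        logs += np.log(np.diag(R))
    return logs / steps

# Linear toy map x -> A x with known exponents ln 2 and ln(1/2).
A = np.diag([2.0, 0.5])
spec = lyapunov_spectrum(A, 100)
```

    For a real dynamical system, `jacobian @ Q` would be replaced by integrating the tangent-space equations alongside the trajectory between re-orthonormalizations.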

  15. Extending the length and time scales of Gram–Schmidt Lyapunov vector computations

    International Nuclear Information System (INIS)

    Costa, Anthony B.; Green, Jason R.

    2013-01-01

    Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N^2 (with the particle count). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly-released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/Infiniband DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.

  16. Self-Regular Black Holes Quantized by means of an Analogue to Hydrogen Atoms

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2016-01-01

    Full Text Available We suggest a quantum black hole model that is based on an analogue to hydrogen atoms. A self-regular Schwarzschild-AdS black hole is investigated, where the mass density of the extreme black hole is given by the probability density of the ground state of hydrogen atoms and the mass densities of nonextreme black holes are given by the probability densities of excited states with no angular momenta. Such an analogue naturally suggests quantization of black hole horizons, and in this way the total mass of black holes is quantized. Furthermore, the quantum hoop conjecture and the Correspondence Principle are discussed.

  17. Networked Predictive Control for Nonlinear Systems With Arbitrary Region Quantizers.

    Science.gov (United States)

    Yang, Hongjiu; Xu, Yang; Xia, Yuanqing; Zhang, Jinhui

    2017-04-06

    In this paper, networked predictive control is investigated for planar nonlinear systems with quantization by an extended state observer (ESO). The ESO is used not only to deal with nonlinear terms but also to generate predictive states for dealing with network-induced delays. Two arbitrary-region quantizers are applied to take effective values of signals in the forward and feedback channels, respectively. Based on a "zoom" strategy, sufficient conditions are given to guarantee stabilization of the closed-loop networked control system with quantization. A simulation example is given to demonstrate the advantages and applicability of the results.
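
    A minimal sketch of the "zoom" idea for a plain uniform quantizer, not the paper's arbitrary-region construction: a fixed number of levels covers the range [-M, M], and M is enlarged when the signal saturates and shrunk once it stays small, so quantization error contracts as the state converges. All constants here are illustrative.

```python
# Uniform quantizer with a dynamically "zoomed" range (illustrative sketch).

def quantize(x, M, levels=16):
    """Map x in [-M, M] to the nearest of `levels` uniform cells."""
    x = max(-M, min(M, x))               # saturate out-of-range values
    step = 2 * M / levels
    return round(x / step) * step

def zoom_update(x, M, zoom_in=0.5, zoom_out=2.0):
    """Adjust the quantizer range from the observed signal magnitude."""
    if abs(x) >= M:                      # saturation: zoom out
        return M * zoom_out
    if abs(x) < M * 0.25:                # signal small: zoom in, refine
        return M * zoom_in
    return M

M = 1.0
M = zoom_update(5.0, M)                  # saturated sample -> range doubles
q = quantize(0.33, M, levels=16)
```
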

  18. Hamiltonian quantization of self-dual tensor fields and a bosonic Nielsen-Ninomiya theorem

    International Nuclear Information System (INIS)

    Tang Waikeung

    1989-01-01

    The quantization of self-dual tensor fields is carried out following the procedure of Batalin and Fradkin. The (anti) self-duality constraints (either fermionic or bosonic) in the action leads to the gravitational anomaly. In the process of gauge fixing, the impossibility of the co-existence of a positive hamiltonian and covariant action is shown. A version of the Nielsen-Ninomiya theorem applies to self-dual tensor fields viz. the lattice version of the theory shows species doubling with zero net chirality. (orig.)

  19. Design of a decision support system, trained on GPU, for assisting melanoma diagnosis in dermatoscopy images

    Science.gov (United States)

    Glotsos, Dimitris; Kostopoulos, Spiros; Lalissidou, Stella; Sidiropoulos, Konstantinos; Asvestas, Pantelis; Konstandinou, Christos; Xenogiannopoulos, George; Konstantina Nikolatou, Eirini; Perakis, Konstantinos; Bouras, Thanassis; Cavouras, Dionisis

    2015-09-01

    The purpose of this study was to design a decision support system for assisting the diagnosis of melanoma in dermatoscopy images. Clinical material comprised images of 44 dysplastic (Clark's nevi) and 44 malignant melanoma lesions, obtained from the dermatology database Dermnet. Initially, images were processed for hair removal and background correction using the Dull Razor algorithm. Processed images were segmented to isolate moles from the surrounding background, using a combination of level sets and an automated thresholding approach. Morphological (area, size, shape) and textural features (first and second order) were calculated from each of the segmented moles. Extracted features were fed to a pattern recognition system assembled with the Probabilistic Neural Network classifier, which was trained to distinguish between benign and malignant cases using exhaustive search and the leave-one-out method. The system was designed on a GPU card (GeForce 580GTX) using the CUDA programming framework and the C++ programming language. Results showed that the designed system discriminated benign from malignant moles with 88.6% accuracy employing morphological and textural features. The proposed system could be used for analysing moles depicted in smartphone images, after appropriate retraining on smartphone image cases. This could assist early detection of melanoma, if suspicious moles were captured on a smartphone by patients and transferred to the physician together with an assessment of the mole's nature.
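
    The Probabilistic Neural Network used above is essentially a Parzen-window classifier: each class score is an average of Gaussian kernels centred on that class's training samples, and the class with the larger score wins. A toy sketch with invented two-dimensional feature vectors, not the study's actual morphological/textural features.

```python
import numpy as np

# PNN / Parzen-window classification sketch; sigma and the feature values
# are illustrative assumptions, not taken from the paper.

def pnn_predict(X_train, y_train, x, sigma=0.5):
    scores = {}
    for cls in np.unique(y_train):
        pts = X_train[y_train == cls]
        d2 = ((pts - x) ** 2).sum(axis=1)
        scores[cls] = np.exp(-d2 / (2 * sigma ** 2)).mean()  # class density
    return max(scores, key=scores.get)

# Toy "morphological + textural" features: (area proxy, contrast proxy).
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7]])
y = np.array([0, 0, 1, 1])        # 0 = benign, 1 = malignant (illustrative)
label = pnn_predict(X, y, np.array([0.85, 0.8]))
```

    The kernel width sigma is the network's only real hyperparameter, which is what makes exhaustive search over feature subsets, as done in the study, computationally feasible.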

  20. Differentiated neuroprogenitor cells incubated with human or canine adenovirus, or lentiviral vectors have distinct transcriptome profiles.

    Directory of Open Access Journals (Sweden)

    Stefania Piersanti

    Full Text Available Several studies have demonstrated the potential for vector-mediated gene transfer to the brain. Helper-dependent (HD human (HAd and canine (CAV-2 adenovirus, and VSV-G-pseudotyped self-inactivating HIV-1 vectors (LV effectively transduce human brain cells and their toxicity has been partly analysed. However, their effect on the brain homeostasis is far from fully defined, especially because of the complexity of the central nervous system (CNS. With the goal of dissecting the toxicogenomic signatures of the three vectors for human neurons, we transduced a bona fide human neuronal system with HD-HAd, HD-CAV-2 and LV. We analysed the transcriptional response of more than 47,000 transcripts using gene chips. Chip data showed that HD-CAV-2 and LV vectors activated the innate arm of the immune response, including Toll-like receptors and hyaluronan circuits. LV vector also induced an IFN response. Moreover, HD-CAV-2 and LV vectors affected DNA damage pathways--but in opposite directions--suggesting a differential response of the p53 and ATM pathways to the vector genomes. As a general response to the vectors, human neurons activated pro-survival genes and neuron morphogenesis, presumably with the goal of re-establishing homeostasis. These data are complementary to in vivo studies on brain vector toxicity and allow a better understanding of the impact of viral vectors on human neurons, and mechanistic approaches to improve the therapeutic impact of brain-directed gene transfer.

  1. Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    Science.gov (United States)

    Cherukuru, Nihanth Wagmi

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on the radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, work on lidars has explored three primary methods of retrieving wind vectors, viz., using a homogeneous wind-field assumption, computationally extensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated with data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields.
In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar retrieved wind fields as well as
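
    For the horizontal wind, the dual-Doppler retrieval mentioned in the first part reduces to a 2x2 linear solve: each lidar supplies one radial projection of (u, v), and two non-parallel beams determine the vector. A sketch with invented beam geometry; the azimuth convention and values are illustrative, not METCRAX II data.

```python
import numpy as np

# Dual-Doppler horizontal wind retrieval sketch: two radial velocities plus
# their beam azimuths give a 2x2 system for (u, v). Azimuth is measured here
# from east, counter-clockwise (an assumed convention for the example).

def dual_doppler(vr1, az1_deg, vr2, az2_deg):
    a1, a2 = np.radians(az1_deg), np.radians(az2_deg)
    A = np.array([[np.cos(a1), np.sin(a1)],
                  [np.cos(a2), np.sin(a2)]])
    return np.linalg.solve(A, np.array([vr1, vr2]))

# A pure u = 10 m/s wind seen by beams pointing east (0 deg) and north (90 deg).
u, v = dual_doppler(10.0, 0.0, 0.0, 90.0)
```

    The solve degrades as the beams become parallel (the matrix turns singular), which is why dual-Doppler scan geometries keep a wide crossing angle.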

  2. GPU Nuclear Corporation's radiation exposure management system

    International Nuclear Information System (INIS)

    Slobodien, M.J.; Bovino, A.A.; Perry, O.R.; Hildebrand, J.E.

    1984-01-01

    GPU Nuclear Corporation has developed a central mainframe (IBM 3081) based radiation exposure management system which provides real-time and batch transactions for three separate reactor facilities. The structure and function of the data base are discussed. The system's main features include real-time on-line radiation work permit generation and personnel exposure tracking; dose accountability as a function of system and component, job type, worker classification, and work location; and personnel dosemeter (TLD and self-reading pocket dosemeter) data processing. The system also carries the qualifications of all radiation workers, including RWP training, respiratory protection training, results of respirator fit tests and medical exams. A warning system is used to prevent non-qualified persons from entering controlled areas. The mainframe system is interfaced with a variety of mini and micro computer systems for dosemetry, statistical and graphics applications. These are discussed. Some unique dosemetry features which are discussed include assessment of dose for up to 140 parts of the body, with dose evaluations at 7, 300 and 1000 mg/cm² for each part; tracking of MPC-hours on a 7-day rolling schedule; automatic pairing of TLD and self-reading pocket dosemeter values; creation and updating of NRC Forms 4 and 5; and generation of NRC-required 20.407 and Reg Guide 1.16 reports. As of July 1983, over 20 remote on-line stations were in use, with plans to add 20-30 more by May 1984. The system provides response times of 2-7 seconds for on-line activities and 23.5 hours per day of "up time". Examples of the various on-line and batch transactions are described.

  3. The nature of visual self-recognition.

    Science.gov (United States)

    Suddendorf, Thomas; Butler, David L

    2013-03-01

    Visual self-recognition is often controversially cited as an indicator of self-awareness and assessed with the mirror-mark test. Great apes and humans, unlike small apes and monkeys, have repeatedly passed mirror tests, suggesting that the underlying brain processes are homologous and evolved 14-18 million years ago. However, neuroscientific, developmental, and clinical dissociations show that the medium used for self-recognition (mirror vs photograph vs video) significantly alters behavioral and brain responses, likely due to perceptual differences among the different media and prior experience. On the basis of this evidence and evolutionary considerations, we argue that the visual self-recognition skills evident in humans and great apes are a byproduct of a general capacity to collate representations, and need not index other aspects of self-awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Automatic system for radar echoes filtering based on textural features and artificial intelligence

    Science.gov (United States)

    Hedir, Mehdia; Haddad, Boualem

    2017-10-01

    Among the most popular Artificial Intelligence (AI) techniques, Artificial Neural Networks (ANN) and Support Vector Machines (SVM) were retained to process Ground Echoes (GE) on meteorological radar images taken from Setif (Algeria) and Bordeaux (France), sites with different climates and topologies. To achieve this task, the AI techniques were combined with textural approaches: the Gray Level Co-occurrence Matrix (GLCM) and the Completed Local Binary Pattern (CLBP), both widely used in image analysis. The obtained results show the efficiency of texture in preserving precipitation echoes on both sites, with an accuracy of 98% on Bordeaux and 95% on Setif regardless of the AI technique used. 98% of GE are suppressed with SVM, a rate that outperforms the ANN. The CLBP approach associated with SVM eliminates 98% of GE and preserves precipitation echoes on the Bordeaux site better than on Setif's, while it exhibits lower accuracy with ANN. The SVM classifier is well adapted to the proposed application, since the average filtering rate is 95-98% with texture and 92-93% with CLBP. These approaches also allow removing Anomalous Propagations (APs), with a best accuracy of 97.15% with texture and SVM. In fact, textural features associated with AI techniques are an efficient tool for incoherent radars to suppress spurious echoes.
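
    A minimal pure-NumPy sketch of the GLCM features involved (horizontal offset of one pixel, contrast statistic). The tiny test images are illustrative; in the study such features would then be fed to the SVM or ANN classifier.

```python
import numpy as np

# Gray-level co-occurrence matrix for offset (0, 1) and the standard
# "contrast" texture statistic derived from it.

def glcm(img, levels):
    """Normalized GLCM counting horizontal neighbor pairs."""
    M = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[a, b] += 1
    return M / M.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

flat = np.zeros((4, 4), dtype=int)            # homogeneous patch
noisy = np.array([[0, 3, 0, 3]] * 4)          # alternating patch
features = [contrast(glcm(x, levels=4)) for x in (flat, noisy)]
```

    Ground echoes and precipitation echoes separate on such statistics because their local gray-level transitions differ, which is exactly what the co-occurrence counts capture.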

  5. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

    2011-01-01

    The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Grounded in small-sample statistical learning theory, the SVM avoids the issues that arise in artificial neural network methods, such as the difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to realize effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image reconstruction quality.
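
    The PSO component can be sketched generically as follows. This is not the paper's self-adaptive variant, and it minimizes a plain quadratic rather than selecting LS-SVM parameters, but it shows the velocity/position update that drives the parameter search; all constants are conventional illustrative choices.

```python
import numpy as np

# Generic particle swarm optimization: each particle is pulled toward its
# personal best and the global best, with inertia w and weights c1, c2.

def pso(f, dim, n=20, iters=60, w=0.6, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest

# Toy objective standing in for LS-SVM cross-validation error.
best = pso(lambda z: ((z - 3.0) ** 2).sum(), dim=2)
```

    In the paper's setting, `f` would evaluate reconstruction error as a function of the LS-SVM hyperparameters, and the self-adaptive variant would adjust w and the weights on the fly.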

  6. Simulation of Texture Evolution during Uniaxial Deformation of Commercially Pure Titanium

    Science.gov (United States)

    Bishoyi, B.; Debta, M. K.; Yadav, S. K.; Sabat, R. K.; Sahoo, S. K.

    2018-03-01

    The evolution of texture in commercially pure (CP) titanium during uniaxial tension and compression through visco-plastic self-consistent (VPSC) simulation is reported in the present study. CP titanium was subjected to both uniaxial tension and compression up to 35% deformation. During uniaxial tension, tensile twins of {10-12}<-1011> type and compressive twins of {11-22}<11-2-3> type were observed in the samples, whereas during uniaxial compression only {10-12}<-1011> tensile twins and compressive twins were observed. The volume fractions of the twins increased linearly with percentage deformation during uniaxial tension, whereas during uniaxial compression the twinning volume fraction increased up to 20% deformation and then decreased rapidly with further deformation. During uniaxial tension, the general t-type textures were observed in the samples irrespective of the percentage deformation. The initial non-basal texture was reoriented toward a split-basal texture during uniaxial compression of the sample. The VPSC formulation was used to simulate the texture development in the material. Different hardening parameters were estimated by correlating the simulated stress-strain curve with the experimental stress-strain data. It was observed that prismatic slip {10-10}<11-20> operated as the primary deformation mode during uniaxial tension, whereas basal slip {0001}<11-20> acquired the leading role during uniaxial compression. It was also revealed that the active deformation modes depended strongly on percentage deformation, loading direction, and grain orientation.
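
    Which slip or twin system activates for a given loading direction is governed, to first order, by the Schmid factor m = cos(phi)·cos(lambda), with phi the angle between load and slip-plane normal and lambda the angle between load and slip direction. A small illustrative calculation (Cartesian vectors for clarity, not Miller-Bravais indices):

```python
import numpy as np

# Schmid factor of a slip system under uniaxial loading (illustrative).

def schmid_factor(load, normal, direction):
    load, normal, direction = (np.asarray(v, float) / np.linalg.norm(v)
                               for v in (load, normal, direction))
    return abs(load @ normal) * abs(load @ direction)

# Loading at 45 degrees to both the plane normal and the slip direction
# gives the maximum possible Schmid factor of 0.5.
m = schmid_factor([1.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

    In a VPSC simulation this resolved-shear-stress criterion, combined with the per-mode hardening parameters, decides whether prismatic slip, basal slip, or twinning accommodates the strain in each grain orientation.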

  7. Fourier duality as a quantization principle

    International Nuclear Information System (INIS)

    Aldrovandi, R.; Saeger, L.A.

    1996-08-01

The Weyl-Wigner prescription for quantization on Euclidean phase spaces makes essential use of Fourier duality. The extension of this property to more general phase spaces requires the use of Kac algebras, which provide the necessary background for the implementation of Fourier duality on general locally compact groups. Kac algebras - and the duality they incorporate - are consequently examined as candidates for a general quantization framework extending the usual formalism. Using as a test case the simplest non-trivial phase space, the half-plane, it is shown how the structures present in the complete-plane case must be modified. Traces, for example, must be replaced by their noncommutative generalizations - weights - and the correspondence embodied in the Weyl-Wigner formalism is no longer complete. Provided the underlying algebraic structure is suitably adapted to each case, Fourier duality is shown to be indeed a very powerful guide to the quantization of general physical systems. (author). 30 refs

  8. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    International Nuclear Information System (INIS)

    Apostolakis, J; Brun, R; Carminati, F; Gheata, A; Novak, M; Wenzel, S; Bandieramonte, M; Bitzes, G; Canal, P; Elvira, V D; Jun, S Y; Lima, G; Licht, J C De Fine; Duhem, L; Sehgal, R; Shadura, O

    2015-01-01

The GeantV project is focused on the R and D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics, requiring tuning of the thresholds that switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here. (paper)

  9. Grid-texture mechanisms in human vision: Contrast detection of regular sparse micro-patterns requires specialist templates.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S

    2016-07-27

    Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
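The signal-combination rule described above can be sketched in a few lines: each template computes a weighted sum of squared contrast responses corrupted by internal noise, and the decision variable is the MAX across templates. The function names and the Gaussian noise model are illustrative assumptions; the authors' actual model additionally normalises contrast for sensitivity across the visual field.

```python
import random

def template_response(contrasts, weights, noise_sd=1.0, rng=random):
    """Energy response: weighted sum of squared contrasts plus Gaussian internal noise."""
    energy = sum(w * c * c for w, c in zip(weights, contrasts))
    return energy + rng.gauss(0.0, noise_sd)

def max_decision(contrasts, templates, noise_sd=1.0, rng=random):
    """MAX rule: the observer's decision variable is the largest of the noisy
    template responses, one template per candidate texture density."""
    return max(template_response(contrasts, w, noise_sd, rng) for w in templates)
```

With independent noise in each template, the MAX over more templates steepens the simulated psychometric function, which is the effect the model uses to account for sparser textures.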

  10. Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

    Directory of Open Access Journals (Sweden)

    Cheng-Ying Chou

Full Text Available Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system consisting of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), the NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the symmetry properties of the imaging system with the distinguishing features of GPU architectures, including block/warp/thread assignment and effective memory usage, to accelerate the computations for ordered-subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithm was implemented in both CPU-based and GPU-based code, and the computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus largely expand the applicability of the dual-head PET system.
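The OSEM update at the heart of such a reconstruction can be sketched as a plain, unaccelerated reference version. The names (`osem_update`, `system_matrix`) are illustrative; the paper's GPU implementation additionally exploits detector symmetries and the block/warp/thread mapping mentioned above.

```python
def osem_update(image, system_matrix, sinogram, subset):
    """One OSEM sub-iteration: multiplicative update over one subset of LORs.

    image:              current activity estimate (one value per voxel)
    system_matrix[i][j]: probability that a decay in voxel j is seen in LOR i
    sinogram[i]:        measured counts in LOR i
    subset:             indices of the LORs used in this sub-iteration
    """
    n_vox = len(image)
    new_image = []
    for j in range(n_vox):
        sens = sum(system_matrix[i][j] for i in subset)  # sensitivity normalisation
        if sens == 0.0:
            new_image.append(image[j])
            continue
        backproj = 0.0
        for i in subset:
            # forward-project the current estimate along LOR i
            proj = sum(system_matrix[i][k] * image[k] for k in range(n_vox))
            if proj > 0.0:
                backproj += system_matrix[i][j] * sinogram[i] / proj
        new_image.append(image[j] * backproj / sens)
    return new_image
```

A full iteration cycles this update over all subsets; when the forward projection already matches the sinogram, the image is a fixed point.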

  11. Compression and channel-coding algorithms for high-definition television signals

    Science.gov (United States)

    Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.

    1990-09-01

In this paper, results of investigations into the effects of channel errors on the transmission of images compressed by techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously so for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming codes (7,4), (15,11) and (31,26) have been used for DCT-compressed images, and more powerful codes such as the Golay (23,12) code for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better quality images have been obtained than with classical hard-decoding techniques. All tests simulate the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
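The Hamming(7,4) code mentioned above can be implemented compactly. The helper names are illustrative, but the construction (parity bits at positions 1, 2 and 4; syndrome decoding of a single bit error) is the standard one.

```python
def hamming74_encode(d):
    """Encode 4 data bits into the 7-bit codeword [p1 p2 d1 p3 d2 d3 d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Any single bit flip in the channel is located exactly by the syndrome and corrected, which is why these codes suit the moderately noisy AWGN conditions studied in the paper.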

  12. Exploring the potential of neurophysiological measures for user-adaptive visualization

    OpenAIRE

    Tak, S.; Brouwer, A.M.; Toet, A.; Erp, J.B.F. van

    2013-01-01

    User-adaptive visualization aims to adapt visualized information to the needs and characteristics of the individual user. Current approaches deploy user personality factors, user behavior and preferences, and visual scanning behavior to achieve this goal. We argue that neurophysiological data provide valuable additional input for user-adaptive visualization systems since they contain a wealth of objective information about user characteristics. The combination of neurophysiological data with ...

  13. A gene delivery system with a human artificial chromosome vector based on migration of mesenchymal stem cells towards human glioblastoma HTB14 cells.

    Science.gov (United States)

    Kinoshita, Yusuke; Kamitani, Hideki; Mamun, Mahabub Hasan; Wasita, Brian; Kazuki, Yasuhiro; Hiratsuka, Masaharu; Oshimura, Mitsuo; Watanabe, Takashi

    2010-05-01

    Mesenchymal stem cells (MSCs) have been expected to become useful gene delivery vehicles against human malignant gliomas when coupled with an appropriate vector system, because they migrate towards the lesion. Human artificial chromosomes (HACs) are non-integrating vectors with several advantages for gene therapy, namely, no limitations on the size and number of genes that can be inserted. We investigated the migration of human immortalized MSCs bearing a HAC vector containing the herpes simplex virus thymidine kinase gene (HAC-tk-hiMSCs) towards malignant gliomas in vivo. Red fluorescence protein-labeled human glioblastoma HTB14 cells were implanted into a subcortical region in nude mice. Four days later, green fluorescence protein-labeled HAC-tk-hiMSCs were injected into a contralateral subcortical region (the HTB14/HAC-tk-hiMSC injection model). Tropism to the glioma mass and the route of migration were visualized by fluorescence microscopy and immunohistochemical staining. HAC-tk-hiMSCs began to migrate toward the HTB14 glioma area via the corpus callosum on day 4, and gathered around the HTB14 glioma mass on day 7. To test whether the delivered gene could effectively treat glioblastoma in vivo, HTB14/HAC-tk-hiMSC injected mice were treated with ganciclovir (GCV) or PBS. The HTB14 glioma mass was significantly reduced by GCV treatment in mice injected with HAC-tk-hiMSCs. It was confirmed that gene delivery by our HAC-hiMSC system was effective after migration of MSCs to the glioma mass in vivo. Therefore, MSCs containing HACs carrying an anticancer gene or genes may provide a new tool for the treatment of malignant gliomas and possibly of other tumor types.

  14. Semantic attributes based texture generation

    Science.gov (United States)

    Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa

    2018-04-01

Semantic attributes are commonly used for texture description. They can describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features, so it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, several works have addressed texture synthesis and generation, most of them focusing on example-based texture synthesis and procedural texture generation; semantic-attribute-based texture generation has received far less attention. Gan et al. proposed a useful joint model for perception-driven texture generation. However, perceptual features are nonobjective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To convey more information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross-entropy loss in an auxiliary model to provide enough information for the generator. Consequently, the discriminator is released from the relatively intractable mission of figuring out the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare it to Gan et al.'s method in texture-generation experiments on PTD and DTD. All experimental results show that our model can generate textures from semantic attributes.

  15. Implementation of Texture Based Image Retrieval Using M-band Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    Liao, Ya-li; Yang, Yan; Cao, Yang

    2003-01-01

Wavelet transform has attracted attention because it is a very useful tool for signal analysis. As a fundamental characteristic of an image, texture traits play an important role in the human visual system for the recognition and interpretation of images. The paper presents an approach to implement texture-based image retrieval using the M-band wavelet transform. Firstly, the traditional 2-band wavelet is extended to the M-band wavelet transform. Then the wavelet moments are computed from the M-band wavelet coefficients in the wavelet domain. The set of wavelet moments forms the feature vector related to the texture distribution of each wavelet image. The distances between the feature vectors describe the similarities of different images. The experimental results show that the M-band wavelet moment features of the images are effective for image indexing. The retrieval method has lower computational complexity, yet it is capable of giving better retrieval performance for a given medical image database.
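A minimal 1-D, 2-band (Haar) version of the moment-feature idea is sketched below; the paper's method uses M-band filters on 2-D images, so this only illustrates how subband moments form a feature vector compared by Euclidean distance. All names are hypothetical.

```python
import math

def haar_subbands(signal):
    """One-level 2-band (Haar) analysis: returns (approximation, detail) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def moments(coeffs):
    """First two moments (mean, standard deviation) of a coefficient subband."""
    m = sum(coeffs) / len(coeffs)
    var = sum((c - m) ** 2 for c in coeffs) / len(coeffs)
    return m, math.sqrt(var)

def texture_feature(signal):
    """Feature vector: moments of every subband of a one-level decomposition."""
    feats = []
    for band in haar_subbands(signal):
        feats.extend(moments(band))
    return feats

def distance(f1, f2):
    """Euclidean distance between feature vectors (smaller = more similar textures)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
```

Retrieval then amounts to ranking database images by `distance` from the query's feature vector.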

  16. Scale-adaptive Local Patches for Robust Visual Object Tracking

    Directory of Open Access Journals (Sweden)

    Kang Sun

    2014-04-01

Full Text Available This paper discusses the problem of robustly tracking objects which undergo rapid and dramatic scale changes. To remove the weakness of global appearance models, we present a novel scheme that combines the object's global and local appearance features. The local feature is a set of local patches that geometrically constrain the changes in the target's appearance. In order to adapt to the object's geometric deformation, local patches can be removed and added online. The addition of these patches is constrained by global features such as color, texture and motion. The global visual features are updated via the stable local patches during tracking. To deal with scale changes, we adapt the scale of the patches in addition to adapting the object bounding box. We evaluate our method by comparing it to several state-of-the-art trackers on publicly available datasets. The experimental results on challenging sequences confirm that, by using these scale-adaptive local patches and global properties, our tracker outperforms the related trackers in many cases by having a smaller failure rate as well as better accuracy.

  17. How General-Purpose can a GPU be?

    Directory of Open Access Journals (Sweden)

    Philip Machanick

    2015-12-01

Full Text Available The use of graphics processing units (GPUs) in general-purpose computation (GPGPU) is a growing field. GPU instruction sets, while implementing a graphics pipeline, draw from a range of single instruction multiple datastream (SIMD) architectures characteristic of the heyday of supercomputers. Yet only one of these SIMD instruction sets has proved applicable to a wide enough range of problems to survive the era when the full range of supercomputer design variants was being explored: vector instructions. This paper proposes a reconceptualization of the GPU as a multicore design with minimal exotic modes of parallelism so as to make GPGPU truly general.

  18. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2012-01-01

Full Text Available We present a comparison of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks and competitive learning. The first algorithm is the classical k-Nearest neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, the supervised counterpart of the unsupervised Self-Organizing Map (SOM). After our own earlier experiments with unlabelled data we moved on to utilizing data labels, which generally led to better classification accuracy. As we needed a large data set of labelled time series (a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts, which had served us well in former studies, as a source of real-world time series. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. The results are compared and particular misclassifications are discussed for each algorithm. Finally, the best solution is chosen and further research goals are given.
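The k-Nearest-neighbours branch of the comparison, applied to the volume-level descriptor, can be sketched as follows. The descriptor framing (fixed-size frames, one standard deviation per frame) and all names are assumptions, not the authors' exact implementation.

```python
import math
from collections import Counter

def std_descriptor(frames):
    """Volume-level descriptor: standard deviation of each frame of audio samples."""
    out = []
    for fr in frames:
        m = sum(fr) / len(fr)
        out.append(math.sqrt(sum((x - m) ** 2 for x in fr) / len(fr)))
    return out

def knn_classify(query, train, k=3):
    """k-Nearest-neighbours majority vote; train is a list of (descriptor, label) pairs."""
    dists = sorted((math.dist(query, desc), label) for desc, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

The confusion matrix in the paper is then obtained by running `knn_classify` over a held-out set and tallying predicted versus true labels.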

  19. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    Science.gov (United States)

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

Compound images are a combination of text, graphics and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme built on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the previous intra modes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves the coding efficiency by more than 10 dB at most bit rates for compound images, while keeping performance comparable to H.264 for natural images.
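The BCIM idea of base colors plus an index map can be illustrated with a plain 1-D k-means over pixel values; the actual H.264 mode selection, entropy coding and RDO are omitted, and the names are hypothetical.

```python
def bcim_encode(block, n_colors=2, iters=10):
    """Represent a block of pixel values by a few base colors plus an index map.
    A plain 1-D k-means stands in for whatever base-color selection the codec uses."""
    lo, hi = min(block), max(block)
    if lo == hi:  # flat block: a single base color suffices
        return [lo], [0] * len(block)
    # initialise base colors spread over the block's value range
    base = [lo + (hi - lo) * i / (n_colors - 1) for i in range(n_colors)]
    for _ in range(iters):
        index = [min(range(n_colors), key=lambda c: abs(p - base[c])) for p in block]
        for c in range(n_colors):
            members = [p for p, idx in zip(block, index) if idx == c]
            if members:
                base[c] = sum(members) / len(members)
    index = [min(range(n_colors), key=lambda c: abs(p - base[c])) for p in block]
    return base, index

def bcim_decode(base, index):
    """Reconstruct the block from its base colors and index map."""
    return [base[i] for i in index]
```

For text and graphics blocks, which contain only a handful of distinct colors, the index map plus a short palette is far cheaper to code than transform coefficients.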

  20. Prototype-based models in machine learning.

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. © 2016 Wiley Periodicals, Inc.
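The learning vector quantization scheme mentioned above can be illustrated by the classic LVQ1 update rule, in which the nearest prototype moves toward a correctly labelled sample and away from a mislabelled one; the function names here are illustrative.

```python
def lvq1_step(prototypes, x, label, lr=0.1):
    """One LVQ1 update. prototypes is a list of (vector, label) pairs, updated in place.
    Returns the index of the winning (nearest) prototype."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    win = min(range(len(prototypes)), key=lambda i: d2(prototypes[i][0], x))
    w, wl = prototypes[win]
    # attract on a label match, repel on a mismatch
    sign = 1.0 if wl == label else -1.0
    prototypes[win] = ([wi + sign * lr * (xi - wi) for wi, xi in zip(w, x)], wl)
    return win
```

Relevance learning, as discussed in the overview, replaces the squared Euclidean distance `d2` with an adaptive, weighted dissimilarity whose weights are trained alongside the prototypes.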

  1. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    Science.gov (United States)

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  2. Bandwidth compression of the digitized HDTV images for transmission via satellites

    Science.gov (United States)

    Al-Asmari, A. KH.; Kwatra, S. C.

    1992-01-01

This paper investigates a subband coding scheme to reduce the transmission bandwidth of digitized HDTV images. The HDTV signals are decomposed into seven bands, and each band is then independently encoded. The base band is DPCM encoded, and the high bands are encoded using nonuniform Laplacian quantizers with a dead zone. By selecting the dead zone on the basis of the energy in the high bands, an acceptable image quality is achieved at an average rate of 45 Mbits/sec (Mbps). This rate is comparable to some very hardware-intensive schemes of transform compression or vector quantization proposed in the literature, while the subband coding scheme used in this study is considered to be of medium complexity. The 45 Mbps rate is suitable for transmission of HDTV signals via satellites.
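A dead-zone quantizer of the kind applied to the high bands can be sketched as below; uniform steps stand in for the paper's nonuniform Laplacian levels, and midpoint reconstruction is an assumption.

```python
def deadzone_quantize(x, step, deadzone):
    """Quantize one high-band coefficient: values inside the dead zone map to zero
    (suppressing low-energy noise); outside it, uniform steps of width `step`."""
    if abs(x) <= deadzone:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int((abs(x) - deadzone) / step + 1)

def deadzone_dequantize(q, step, deadzone):
    """Reconstruct at the midpoint of the decision interval for index q."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (deadzone + (abs(q) - 0.5) * step)
```

Widening the dead zone in proportion to a band's energy, as the paper does, zeroes more of the perceptually insignificant coefficients and lowers the bitrate.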

  3. Alpharetroviral self-inactivating vectors produced by a superinfection-resistant stable packaging cell line allow genetic modification of primary human T lymphocytes.

    Science.gov (United States)

    Labenski, Verena; Suerth, Julia D; Barczak, Elke; Heckl, Dirk; Levy, Camille; Bernadin, Ornellie; Charpentier, Emmanuelle; Williams, David A; Fehse, Boris; Verhoeyen, Els; Schambach, Axel

    2016-08-01

    Primary human T lymphocytes represent an important cell population for adoptive immunotherapies, including chimeric-antigen and T-cell receptor applications, as they have the capability to eliminate non-self, virus-infected and tumor cells. Given the increasing numbers of clinical immunotherapy applications, the development of an optimal vector platform for genetic T lymphocyte engineering, which allows cost-effective high-quality vector productions, remains a critical goal. Alpharetroviral self-inactivating vectors (ARV) have several advantages compared to other vector platforms, including a more random genomic integration pattern and reduced likelihood for inducing aberrant splicing of integrated proviruses. We developed an ARV platform for the transduction of primary human T lymphocytes. We demonstrated functional transgene transfer using the clinically relevant herpes-simplex-virus thymidine kinase variant TK.007. Proof-of-concept of alpharetroviral-mediated T-lymphocyte engineering was shown in vitro and in a humanized transplantation model in vivo. Furthermore, we established a stable, human alpharetroviral packaging cell line in which we deleted the entry receptor (SLC1A5) for RD114/TR-pseudotyped ARVs to prevent superinfection and enhance genomic integrity of the packaging cell line and viral particles. We showed that superinfection can be entirely prevented, while maintaining high recombinant virus titers. Taken together, this resulted in an improved production platform representing an economic strategy for translating the promising features of ARVs for therapeutic T-lymphocyte engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Self-compression of spatially limited laser pulses in a system of coupled light-guides

    Science.gov (United States)

    Balakin, A. A.; Litvak, A. G.; Mironov, V. A.; Skobelev, S. A.

    2018-04-01

The self-action features of wave packets propagating in a 2D system of equidistantly arranged fibers are studied analytically and numerically on the basis of the discrete nonlinear Schrödinger equation. Self-consistent equations for the characteristic scales of a Gaussian wave packet are derived on the basis of the variational approach and verified numerically for powers below a critical value. At higher powers, beams become filamented, and their amplitude is limited due to the nonlinear breaking of the interaction between neighboring light-guides. This makes it impossible to collect a powerful wave beam in a single light-guide. Variational analysis shows the possibility of the adiabatic self-compression of soliton-like laser pulses in the process of 3D self-focusing on the central light-guide. However, further increase of the field amplitude during self-compression leads to the development of longitudinal modulation instability and the formation of a set of light bullets in the central fiber. In the regime of hollow wave beams, filamentation instability becomes predominant. As a result, it becomes possible to form a set of light bullets in optical fibers located on the ring.
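For reference, the discrete nonlinear Schrödinger equation for a 2D array of coupled light-guides can be written in a standard normalization as below; the coupling constant $C$ and nonlinear coefficient $\gamma$ are generic placeholders, since the abstract does not give the authors' exact form:

```latex
i\,\frac{\partial a_{n,m}}{\partial z}
  + C\left(a_{n+1,m} + a_{n-1,m} + a_{n,m+1} + a_{n,m-1}\right)
  + \gamma\,\lvert a_{n,m}\rvert^{2}\, a_{n,m} = 0
```

Here $a_{n,m}(z)$ is the field envelope in the light-guide at lattice site $(n,m)$; the linear coupling term spreads power to neighboring fibers while the Kerr term drives the self-focusing and filamentation discussed above.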

  5. Motion Vector Sharing and Bitrate Allocation for 3D Video-Plus-Depth Coding

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-08-01

Full Text Available The video-plus-depth data representation uses a regular texture video enriched with the so-called depth map, providing the depth distance for each pixel. The compression efficiency is usually higher for the smooth, gray-level data representing the depth map than for classical video texture. However, improvements in coding efficiency are still possible, taking into account the fact that the video and depth map sequences are strongly correlated. Classically, the correlation between the texture motion vectors and the depth map motion vectors is not exploited in the coding process. The aim of this paper is to reduce the amount of information needed to describe the motion of the texture video and of the depth map sequences by sharing one common motion vector field. Furthermore, in the literature, the bitrate control scheme generally fixes the depth map bitrate at 20% of the texture stream bitrate. However, this fixed percentage can affect the depth coding efficiency, and it should also depend on the content of each sequence. We propose a new bitrate allocation strategy between the texture and its associated per-pixel depth information, and we provide a comparative analysis to measure the quality of the resulting 3D+t sequences.

  6. Quantization of non-Hamiltonian physical systems

    International Nuclear Information System (INIS)

    Bolivar, A.O.

    1998-09-01

    We propose a general method of quantization of non-Hamiltonian physical systems. Applying it, for example, to a dissipative system coupled to a thermal reservoir described by the Fokker-Planck equation, we are able to obtain the Caldeira-Leggett master equation, the non-linear Schroedinger-Langevin equation and Caldirola-Kanai equation (with an additional term), as particular cases. (author)

  7. Texture evolution in upset-forged P/M and wrought tantalum: Experimentation and modeling

    International Nuclear Information System (INIS)

    Bingert, J.F.; Desch, P.B.; Bingert, S.R.; Maudlin, P.J.; Tome, C.N.

    1997-11-01

Preferred orientations in polycrystalline materials can significantly affect their physical and mechanical response through the retention of anisotropic properties inherent to the single crystal. In this study the texture evolution in upset-forged P/M and wrought tantalum was measured as a function of initial texture, compressive strain, and relative position in the pressing. A duplex fiber texture parallel to the compression axis was generally observed, with varying degrees of a radial component evident in the wrought material. The development of deformation textures derives from restricted crystallographic slip conditions that generate lattice rotations, and these grain reorientations can be modeled as a function of the prescribed deformation gradient. Texture development was simulated for equivalent deformations using both a modified Taylor approach and a viscoplastic self-consistent (VPSC) model. A comparison between the predicted evolution and experimental results shows a good correlation with the texture components, but an overly sharp prediction at large strains from both the Taylor and VPSC models

  8. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images of size 256 × 256 pixels. As a result, we see a significant reduction of time in the encryption and decryption stages. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between the stereo images) is used. The resulting disparity vectors and the residual image were compressed by the Discrete Cosine Transform (DCT), quantization and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good performance in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR) and processing time. The proposed partial encryption schemes are fast, secure and do not reduce the compression performance of the underlying compression methods.
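The partial-encryption idea (encrypting only a small leading fraction of the compressed stream, such as the disparity vectors, so the rest is useless without it) can be sketched as follows. Since the Python standard library has no AES, a SHA-256 counter-mode keystream stands in for AES purely for illustration; real use would call AES from a cryptographic library, and all names are hypothetical.

```python
import hashlib

def keystream(key, n):
    """SHA-256 in counter mode as a stand-in stream cipher (NOT AES; illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def partial_encrypt(compressed, key, fraction=0.01):
    """Encrypt only the leading fraction of the compressed stream (e.g. the
    disparity-vector portion), leaving the bulk of the residual data in the clear."""
    n = max(1, int(len(compressed) * fraction))
    ks = keystream(key, n)
    head = bytes(a ^ b for a, b in zip(compressed[:n], ks))
    return head + compressed[n:], n

def partial_decrypt(cipher, key, n):
    """Invert partial_encrypt given the number n of encrypted leading bytes."""
    ks = keystream(key, n)
    head = bytes(a ^ b for a, b in zip(cipher[:n], ks))
    return head + cipher[n:]
```

Because only `n` bytes pass through the cipher, encryption time scales with the chosen fraction rather than with the full image, which is the source of the speed-up reported above.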

  9. Electromagnetic Computation and Visualization of Transmission Particle Model and Its Simulation Based on GPU

    Directory of Open Access Journals (Sweden)

    Yingnian Wu

    2014-01-01

Full Text Available Electromagnetic calculation plays an important role in both military and civilian fields. Some methods and models proposed for the calculation of electromagnetic wave propagation over a large range place a heavy computational burden on the CPU and also require a huge amount of memory. Using the GPU to accelerate computation and visualization can reduce this burden. Based on the forward ray-tracing method, a transmission particle model (TPM) for calculating the electromagnetic field is presented, combining ray tracing with the particle method. The movement of a particle obeys the principle of the propagation of electromagnetic waves, so the particle distribution density in space reflects the electromagnetic distribution. The algorithm, with particle transmission, movement, reflection, and diffraction, is described in detail. Since the particles in the TPM are completely independent, it is very suitable for parallel computing on the GPU. Deductive verification of the TPM, with an electric dipole antenna as the transmission source, is conducted to show that the particle movement itself represents the variation of electromagnetic field intensity caused by diffusion. Finally, simulation comparisons are made against the forward and backward ray-tracing methods; the results verify the effectiveness of the proposed method.

  10. Visualization tool for human-machine interface designers

    Science.gov (United States)

    Prevost, Michael P.; Banda, Carolyn P.

    1991-06-01

    As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
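The spring/attractor/repeller metaphor described above can be sketched as one relaxation step of a force-directed layout: springs pull functionally related displays toward a preferred separation, while pairwise repellers keep all displays from crowding. The force laws, constants and names below are illustrative assumptions, not ISLE's actual implementation.

```python
def layout_step(positions, springs, repulsion=1.0, stiffness=0.1, dt=0.1):
    """One relaxation step of a spring model for display placement.
    positions: {name: [x, y]}; springs: {(a, b): rest_length} for related displays."""
    forces = {k: [0.0, 0.0] for k in positions}
    names = list(positions)
    # pairwise repellers keep unrelated displays from overlapping
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dx = positions[b][0] - positions[a][0]
            dy = positions[b][1] - positions[a][1]
            d2 = dx * dx + dy * dy or 1e-9
            d = d2 ** 0.5
            f = repulsion / d2
            forces[a][0] -= f * dx / d; forces[a][1] -= f * dy / d
            forces[b][0] += f * dx / d; forces[b][1] += f * dy / d
    # spring attractors pull functionally related displays toward their rest length
    for (a, b), rest in springs.items():
        dx = positions[b][0] - positions[a][0]
        dy = positions[b][1] - positions[a][1]
        d = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = stiffness * (d - rest)
        forces[a][0] += f * dx / d; forces[a][1] += f * dy / d
        forces[b][0] -= f * dx / d; forces[b][1] -= f * dy / d
    for k in positions:
        positions[k][0] += dt * forces[k][0]
        positions[k][1] += dt * forces[k][1]
    return positions
```

Iterating `layout_step` until the forces are small yields a configuration in which related information sources cluster without overlapping, visualizing the "goodness" of a layout.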

  11. On the quantization of the massless Bateman system

    Science.gov (United States)

    Takahashi, K.

    2018-03-01

    The so-called Bateman system for the damped harmonic oscillator is reduced to a genuine dual-dissipation system (DDS) by setting the mass to zero. We explore herein the conditions under which the canonical quantization of the DDS can be consistently performed. The roles of the observable and auxiliary coordinates are distinguished. The results show that a complete and orthogonal Fock space of states can be constructed on the stable vacuum if an anti-Hermitian representation of the canonical Hamiltonian is adopted. The amplitude of the one-particle wavefunction is consistent with the classical solution. The fields can be quantized as bosonic or fermionic. For bosonic systems, the quantum fluctuation of the field is directly associated with the dissipation rate.

  12. BRST stochastic quantization

    International Nuclear Information System (INIS)

    Hueffel, H.

    1990-01-01

    After a brief review of the BRST formalism and of the Parisi-Wu stochastic quantization method, we introduce the BRST stochastic quantization scheme. It allows the second quantization of constrained Hamiltonian systems in a manifestly gauge-symmetry-preserving way. The examples of the relativistic particle, the spinning particle, and the bosonic string are worked out in detail. The paper closes with a discussion of the interacting field theory associated with the relativistic point-particle system. 58 refs. (Author)

  13. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    Science.gov (United States)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a continuation of the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and a compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, while the natural coordinate formulation (NCF) and the absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. To model the compressible fluid properly and efficiently via the SPH method, three measures are taken: the first is to use a Riemann solver to cope with the fluid compressibility; the second is to define virtual SPH particles to model the dynamic interaction between the fluid and the multibody system; and the third is to impose periodic inflow and outflow boundary conditions to reduce the number of SPH particles involved in the computation. A parallel computation strategy based on the graphics processing unit (GPU) is then proposed to detect neighboring SPH particles and to solve the dynamic equations of the SPH particles, improving computational efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.
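    The neighbor detection that the GPU strategy parallelizes is typically a uniform-grid cell list. The sketch below shows the serial idea under that assumption; the function names and the choice of cell side equal to the smoothing length are illustrative, not taken from the paper:

```python
def build_cell_list(positions, h):
    """Bin particles into cubic cells of side h so that each particle's
    neighbors within radius h lie in the 27 surrounding cells."""
    cells = {}
    for idx, (x, y, z) in enumerate(positions):
        cells.setdefault((int(x // h), int(y // h), int(z // h)), []).append(idx)
    return cells

def neighbors(i, positions, cells, h):
    """All particles within distance h of particle i (excluding itself)."""
    xi, yi, zi = positions[i]
    cx, cy, cz = int(xi // h), int(yi // h), int(zi // h)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                    if j != i:
                        xj, yj, zj = positions[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= h * h:
                            found.append(j)
    return found

positions = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 5.0)]
cells = build_cell_list(positions, 1.0)
```

    On a GPU the binning and the per-particle cell scans are what get assigned to parallel threads; the serial loops above only convey the data structure.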

  14. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
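    The three stages of the claim (compress, select, hash) can be sketched in a few lines; the module boundaries and the every-other-byte selection rule below are illustrative assumptions, not the patent's specific choices:

```python
import hashlib
import zlib

def media_keyword(media_bytes: bytes, n_select: int = 64) -> str:
    """Sketch of the three claimed stages (selection rule is hypothetical)."""
    # 1. data compression module: media signal -> compressed data stream
    stream = zlib.compress(media_bytes)
    # 2. data acquisition module: select a set of data from the stream
    #    (here: every other byte from the front, an illustrative rule)
    selected = stream[:2 * n_select:2]
    # 3. hashing module: hash the selected set into a keyword
    return hashlib.sha256(selected).hexdigest()

kw = media_keyword(b"example media signal" * 100)
```

    The keyword is deterministic for a given signal, which is what makes it usable as an encryption key derived from the media itself.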

  15. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We circumvented these limitations here and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire the neural signal related to head motion after the observer's head had been stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in the precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent and incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Rapid and Parallel Adaptive Evolution of the Visual System of Neotropical Midas Cichlid Fishes.

    Science.gov (United States)

    Torres-Dowdall, Julián; Pierotti, Michele E R; Härer, Andreas; Karagic, Nidal; Woltering, Joost M; Henning, Frederico; Elmer, Kathryn R; Meyer, Axel

    2017-10-01

    Midas cichlid fish are a Central American species flock containing 13 described species that has been dated to only a few thousand years old, a historical timescale infrequently associated with speciation. Their radiation involved the colonization of several clear water crater lakes from two turbid great lakes. Therefore, Midas cichlids have been subjected to widely varying photic conditions during their radiation. Being a primary signal relay for information from the environment to the organism, the visual system is under continuing selective pressure and a prime organ system for accumulating adaptive changes during speciation, particularly in the case of dramatic shifts in photic conditions. Here, we characterize the full visual system of Midas cichlids at organismal and genetic levels, to determine what types of adaptive changes evolved within the short time span of their radiation. We show that Midas cichlids have a diverse visual system with unexpectedly high intra- and interspecific variation in color vision sensitivity and lens transmittance. Midas cichlid populations in the clear crater lakes have convergently evolved visual sensitivities shifted toward shorter wavelengths compared with the ancestral populations from the turbid great lakes. This divergence in sensitivity is driven by changes in chromophore usage, differential opsin expression, opsin coexpression, and to a lesser degree by opsin coding sequence variation. The visual system of Midas cichlids has the evolutionary capacity to rapidly integrate multiple adaptations to changing light environments. Our data may indicate that, in early stages of divergence, changes in opsin regulation could precede changes in opsin coding sequence evolution. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. On the quantization of a nonlinear oscillator with quasi-harmonic behaviour

    International Nuclear Information System (INIS)

    Ranada, M.F.; Carinena, J.F.; Satander, M.

    2006-01-01

    The quantum version of a nonlinear oscillator depending on a parameter λ is studied. This λ-dependent system can be considered a deformation of the harmonic oscillator, in the sense that for λ→0 all the characteristics of the linear oscillator are recovered. This is a problem of quantization of a system with position-dependent mass and a λ-dependent nonpolynomial rational potential. The quantization problem is solved using the existence of a Killing vector; the λ-dependent Schroedinger equation is exactly solved, and λ-dependent eigenenergies and eigenfunctions are obtained. The λ-dependent wave functions are related to a family of orthogonal polynomials that can be considered deformations of the standard Hermite polynomials. In the second part, the superintegrability of the two-dimensional system is proved.

  18. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector-field display approach to help scientists perform discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations: direct linear representation, logarithmic mapping, and text display. Twenty participants performed three domain-analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern-detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improves accuracy by about 10 times compared with linear mapping and by 4 times compared with logarithmic mapping in discrimination tasks; (2) SplitVectors shows no significant difference from the textual display approach, but reduces clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.
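    The core encoding idea, splitting a magnitude into a scientific-notation mantissa and exponent so the two parts can be shown by separate glyph features, reduces to a few lines. This is a minimal sketch of the split only, not the glyph design:

```python
import math

def split_vector_magnitude(v):
    """Return (mantissa in [1, 10), integer exponent) for a vector's
    magnitude, i.e. its scientific-notation decomposition."""
    mag = math.sqrt(sum(c * c for c in v))
    if mag == 0.0:
        return 0.0, 0
    exp = math.floor(math.log10(mag))
    return mag / 10 ** exp, exp

mant, exp = split_vector_magnitude((3000.0, 4000.0, 0.0))  # magnitude 5000
```

    Two vectors differing by orders of magnitude then differ in a small integer exponent, which is far easier to discriminate visually than the raw lengths.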

  19. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  20. M-HinTS: Mimicking Humans in Texture Sorting

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; van Rikxoort, Eva M.; Pappas, Thrasyvoulos N.; Kok, Thijs; Daly, Scott J.; Schouten, Theo E.

    2006-01-01

    Various texture analysis algorithms have been developed the last decades. However, no computational model has arisen that mimics human texture perception adequately. In 2000, Payne, Hepplewhite, and Stoneham and in 2005, Van Rikxoort, Van den Broek, and Schouten achieved mappings between humans and

  1. Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization

    Science.gov (United States)

    Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.

    2017-01-01

    Natural rubber is an important export commodity in Indonesia and a major contributor to national economic development. One type of rubber used as an export material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of the RSS, which is specified in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (the Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, sorting is still done manually, by inspecting the levels of air bubbles on the surface of the rubber sheet with the naked eye, so the results are subjective and unreliable. A method is therefore required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.
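    A minimal LVQ1 training loop conveys the classification step; the two-dimensional toy features and prototype initialization below are assumptions standing in for the zoning features of real RSS images:

```python
def lvq1_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=15):
    """LVQ1 sketch: pull the nearest prototype toward same-class samples
    and push it away from other-class samples."""
    def nearest(x):
        return min(range(len(prototypes)),
                   key=lambda k: sum((a - p) ** 2 for a, p in zip(x, prototypes[k])))
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            j = nearest(x)
            sign = 1.0 if proto_labels[j] == y else -1.0
            prototypes[j] = [p + sign * lr * (a - p) for a, p in zip(x, prototypes[j])]
    return prototypes

def lvq_classify(x, prototypes, proto_labels):
    """Assign the label of the nearest trained prototype."""
    j = min(range(len(prototypes)),
            key=lambda k: sum((a - p) ** 2 for a, p in zip(x, prototypes[k])))
    return proto_labels[j]

# toy two-dimensional features standing in for zoned RSS image features
rss1 = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]
rss3 = [[0.8, 0.9], [0.9, 0.8], [0.85, 0.85]]
protos = lvq1_train(rss1 + rss3, ["RSS1"] * 3 + ["RSS3"] * 3,
                    [[0.0, 0.0], [1.0, 1.0]], ["RSS1", "RSS3"])
```

    In the paper the feature vectors come from zoning the pre-processed sheet images; only the prototype-update rule is shown here.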

  2. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    Science.gov (United States)

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
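    The bin-matching idea can be sketched for a single uniformly quantized coefficient: storage keeps only a coarse bin index, and download picks among the fine bins consistent with it using a prediction. The nearest-to-prediction rule below is a simple proxy for the paper's dictionary- and graph-based priors, not its actual machinery:

```python
import math

def requantize(fine_idx, q_fine, q_coarse):
    """Storage side: map a fine bin index to the coarse bin containing
    its reconstruction value (toy stand-in for selective re-encoding)."""
    return round(fine_idx * q_fine / q_coarse)

def candidate_fine_bins(coarse_idx, q_fine, q_coarse):
    """All fine indices whose reconstructions fall inside the stored
    coarse bin; the signal priors must choose among these on download."""
    lo = (coarse_idx - 0.5) * q_coarse
    hi = (coarse_idx + 0.5) * q_coarse
    k = math.ceil(lo / q_fine)
    out = []
    while k * q_fine < hi:
        out.append(k)
        k += 1
    return out

def recover_fine(coarse_idx, predicted, q_fine, q_coarse):
    """Download side: reverse mapping picks the consistent fine bin
    nearest a prior-based prediction."""
    return min(candidate_fine_bins(coarse_idx, q_fine, q_coarse),
               key=lambda k: abs(k * q_fine - predicted))
```

    The coarse bin bounds the error deterministically (any candidate is within half a coarse step), which is what makes a lossless or near-lossless guarantee possible when the prior identifies the right candidate.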

  3. Renormalized semiclassical quantization for rescalable Hamiltonians

    International Nuclear Information System (INIS)

    Takahashi, Satoshi; Takatsuka, Kazuo

    2004-01-01

    A renormalized semiclassical quantization method for rescalable Hamiltonians is proposed. A classical Hamiltonian system having a potential function that consists of homogeneous polynomials, like the Coulombic potential, can have a scale invariance in its extended phase space (phase space plus time). Consequently, infinitely many copies of a single trajectory constitute a one-parameter family that is characterized in terms of a scaling factor. This scaling invariance in classical dynamics is lost in quantum mechanics due to the presence of the Planck constant. It is shown that in a system whose classical motions have a self-similarity in the above sense, classical trajectories adopted in the semiclassical scheme interact with infinitely many copies of their own that are reproduced by the relevant scaling procedure, thereby undergoing quantum interference among themselves to produce a quantized spectrum.

  4. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Chan, K.K.; Ishimitsu, Y.; Lo, S.C.; Huang, H.K.

    1987-01-01

    The full-frame bit allocation algorithm for radiological image compression developed in the authors' laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operation: a two-dimensional discrete cosine transform, and pixel quantization in the transform space with pixel depth kept accountable by a bit allocation table. The design takes an expandable modular approach based on the VME bus system, which has a maximum data-transfer rate of 48 Mbytes per second, with a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. The design allows for a maximum image size of 2K x 2K.
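    The two stages of the algorithm, a 2D discrete cosine transform followed by quantization governed by a bit-allocation table, can be sketched directly. The transform below is the naive O(n^4) definition rather than the DSP-microcoded fast version, and the table values in the test are illustrative:

```python
import math

def dct2(block):
    """Naive two-dimensional DCT-II of a square block; a readable
    stand-in for the fast cosine transform run on the DSP modules."""
    n = len(block)
    c = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, bit_table, step=1.0):
    """Second stage: quantize each transform coefficient to the pixel
    depth listed in a bit-allocation table (clamped signed range)."""
    return [[max(-(1 << (b - 1)), min((1 << (b - 1)) - 1, round(c / step)))
             if b > 0 else 0
             for c, b in zip(crow, brow)]
            for crow, brow in zip(coeffs, bit_table)]
```

    A constant block concentrates all energy in the DC coefficient, which is why bit-allocation tables spend most of their depth on low-frequency coefficients.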

  5. Fully 3D GPU PET reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L., E-mail: joaquin@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Cal-Gonzalez, J. [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Vaquero, J.J. [Departmento de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Desco, M. [Departmento de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Departmento Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Fully 3D iterative tomographic image reconstruction is computationally very demanding. Graphics processing units (GPUs) have been proposed for many years as potential accelerators for complex scientific problems, but only with recent advances in GPU programmability have the best available reconstruction codes begun to be implemented to run on GPUs. This work presents GPU-based fully 3D PET iterative reconstruction software. The new code can reconstruct sinogram data from several commercially available PET scanners. The most important and time-consuming parts of the code, the forward- and backward-projection operations, are based on an accurate model of the scanner obtained with the Monte Carlo code PeneloPET, and they have been massively parallelized on the GPU. For the PET scanners considered, the GPU-based code is more than 70 times faster than a similar code running on a single core of a fast CPU, obtaining the same images in both cases. The code has been designed to be easily adapted to reconstruct sinograms from any other PET scanner, including scanner prototypes.

  6. Fully 3D GPU PET reconstruction

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Cal-Gonzalez, J.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2011-01-01

    Fully 3D iterative tomographic image reconstruction is computationally very demanding. Graphics processing units (GPUs) have been proposed for many years as potential accelerators for complex scientific problems, but only with recent advances in GPU programmability have the best available reconstruction codes begun to be implemented to run on GPUs. This work presents GPU-based fully 3D PET iterative reconstruction software. The new code can reconstruct sinogram data from several commercially available PET scanners. The most important and time-consuming parts of the code, the forward- and backward-projection operations, are based on an accurate model of the scanner obtained with the Monte Carlo code PeneloPET, and they have been massively parallelized on the GPU. For the PET scanners considered, the GPU-based code is more than 70 times faster than a similar code running on a single core of a fast CPU, obtaining the same images in both cases. The code has been designed to be easily adapted to reconstruct sinograms from any other PET scanner, including scanner prototypes.

  7. Compact multimode fiber beam-shaping system based on GPU accelerated digital holography.

    Science.gov (United States)

    Plöschner, Martin; Čižmár, Tomáš

    2015-01-15

    Real-time, on-demand beam shaping at the end of a multimode fiber has recently been made possible by exploiting the computational power of rapidly evolving graphics processing unit (GPU) technology [Opt. Express 22, 2933 (2014)]. However, the current state-of-the-art system requires an acousto-optic deflector (AOD) to produce images at the end of the fiber without interference effects between neighboring output points. Here, we present a system free of the AOD's complexity, in which we remove the undesired interference effects computationally using GPU-implemented Gerchberg-Saxton and Yang-Gu algorithms. The GPU implementation is two orders of magnitude faster than the CPU implementation, which allows video-rate image control at the distal end of the fiber virtually free of interference effects.
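    The Gerchberg-Saxton algorithm mentioned above alternates between two planes, enforcing the known amplitude in each while keeping the computed phase. In this sketch a plain FFT stands in for the fiber's transmission matrix, which is an assumption made purely for illustration:

```python
import numpy as np

def gerchberg_saxton(target_amp, source_amp, iters=100, seed=0):
    """Textbook Gerchberg-Saxton phase retrieval: find a source-plane
    phase whose far field (modelled here by an FFT) approximates the
    target amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    field = source_amp * np.exp(1j * phase)
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)  # phase pattern to display on the modulator
```

    Each iteration is dominated by two FFTs, which is exactly the kind of workload that maps well onto a GPU and explains the reported two-orders-of-magnitude speedup.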

  8. Self-Adaptive On-Chip System Based on Cross-Layer Adaptation Approach

    Directory of Open Access Journals (Sweden)

    Kais Loukil

    2013-01-01

    The emergence of mobile, battery-operated multimedia systems and the diversity of supported applications pose new design-efficiency challenges: these systems must provide maximum application quality of service (QoS) in a dynamically varying environment. Such optimization problems cannot be entirely solved at design time, and some efficiency gains can be obtained at run-time by means of self-adaptivity. In this paper, we propose a new cross-layer hardware (HW)/software (SW) adaptation solution for embedded mobile systems. It supports application QoS under real-time and lifetime constraints via coordinated adaptation in the hardware, operating system (OS), and application layers. Our method relies on an original middleware solution used by both global and local managers. The global manager (GM) handles large, long-term variations, whereas the local manager (LM) is used to guarantee real-time constraints. The GM acts in all three layers, whereas the LM acts in the application and OS layers only. The main role of the GM is to select the best configuration for each application so as to meet the system's constraints and respect the user's preferences. The proposed approach has been applied to a 3D graphics application and successfully implemented on an Altera FPGA.

  9. Visual and olfactory associative learning in the malaria vector Anopheles gambiae sensu stricto

    Directory of Open Access Journals (Sweden)

    Chilaka Nora

    2012-01-01

    Background: Memory and learning are critical aspects of the ecology of insect vectors of human pathogens because of their potential effects on contacts between vectors and their hosts. Despite this epidemiological importance, there have been only a limited number of studies investigating associative learning in insect vector species, and none on Anopheline mosquitoes. Methods: A simple behavioural assay was developed to study visual and olfactory associative learning in Anopheles gambiae, the main vector of malaria in Africa. Two contrasting membrane qualities or levels of blood palatability were used as reinforcing stimuli for bi-directional conditioning during blood feeding. Results: Under such experimental conditions, An. gambiae females learned very rapidly to associate visual (chequered and white patterns) and olfactory cues (presence and absence of cheese or citronella smell) with the reinforcing stimuli (blood-meal quality) and remembered the association for up to three days. Associative learning increased significantly with the strength of the conditioning stimuli used. Importantly, learning sometimes occurred faster when a positive reinforcing stimulus (palatable blood) was associated with an innately preferred cue (such as a darker visual pattern). However, the use of too attractive a cue (e.g. Shropshire cheese smell) was counter-productive and decreased learning success. Conclusions: The results address an important knowledge gap in mosquito ecology and emphasize the role of associative memory in An. gambiae's host-finding and blood-feeding behaviour, with important potential implications for vector control.

  10. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet the limited downlink capabilities of NASA and DoD missions. The technique also improves signature extraction, object recognition, and feature classification by providing exactly reconstructed data over constrained downlink resources. A novel adaptive and predictive technique for lossless compression of hyperspectral data was recently developed at JPL. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited to implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a field-programmable gate array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle, providing a fast and practical real-time solution for space applications.
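    The flavor of adaptive predictive lossless coding can be shown with a toy sign-LMS predictor whose residuals reconstruct the input exactly. The real Fast Lossless algorithm predicts across spectral bands and entropy-codes the residuals, neither of which is modelled here; this is a minimal sketch of the predict-adapt-reconstruct loop only:

```python
def encode(samples):
    """Predict each sample from the previous one with an adaptively
    weighted predictor; emit the (losslessly invertible) residuals."""
    w, prev, residuals = 0.0, 0, []
    for s in samples:
        pred = int(round(w * prev))
        e = s - pred
        residuals.append(e)
        # sign-LMS adaptation: nudge the weight toward reducing the error
        w += 0.01 * (1 if e * prev > 0 else -1 if e * prev < 0 else 0)
        prev = s
    return residuals

def decode(residuals):
    """Mirror of encode: the decoder makes the identical predictions and
    weight updates, so reconstruction is exact (lossless)."""
    w, prev, out = 0.0, 0, []
    for e in residuals:
        pred = int(round(w * prev))
        s = pred + e
        out.append(s)
        w += 0.01 * (1 if e * prev > 0 else -1 if e * prev < 0 else 0)
        prev = s
    return out
```

    Because the decoder repeats the encoder's predictions step for step, no side information about the adapted weights needs to be transmitted, which is also why the hardware pipeline can emit one sample per clock.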

  11. A review of human factors challenges of complex adaptive systems: discovering and understanding chaos in human performance.

    Science.gov (United States)

    Karwowski, Waldemar

    2012-12-01

    In this paper, the author explores a need for a greater understanding of the true nature of human-system interactions from the perspective of the theory of complex adaptive systems, including the essence of complexity, emergent properties of system behavior, nonlinear systems dynamics, and deterministic chaos. Human performance, more often than not, constitutes complex adaptive phenomena with emergent properties that exhibit nonlinear dynamical (chaotic) behaviors. The complexity challenges in the design and management of contemporary work systems, including service systems, are explored. Examples of selected applications of the concepts of nonlinear dynamics to the study of human physical performance are provided. Understanding and application of the concepts of the theory of complex adaptive and dynamical systems should significantly improve the effectiveness of human-centered design efforts for a large system of systems. Performance of many contemporary work systems and environments may be sensitive to the initial conditions and may exhibit dynamic nonlinear properties and chaotic system behaviors. Human-centered design of emergent human-system interactions requires application of the theories of nonlinear dynamics and complex adaptive systems. The success of future human-systems integration efforts requires the fusion of paradigms, knowledge, design principles, and methodologies of human factors and ergonomics with those of the science of complex adaptive systems as well as modern systems engineering.

  12. A GPU-paralleled implementation of an enhanced face recognition algorithm

    Science.gov (United States)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition based on compressed sensing and sparse representation has been hotly debated in recent years. This approach improves the recognition rate as well as anti-noise capability. However, its computational cost is high and has become the main factor restricting real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of the face recognition algorithm, named parallel face recognition algorithm (pFRA). We describe how to carry out parallel optimization to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Finally, our pFRA, implemented with an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over traditional CPU implementations.

  13. On the data compression at filmless readout of the streamer chamber information

    International Nuclear Information System (INIS)

    Bajla, I.; Ososkov, G.A.; Prikhod'ko, V.I.

    1980-01-01

    It is expected that the system for filmless detection and processing of visual information from the RISK streamer chamber will include an effective on-line data-compression algorithm. The role of the basic methodological principles of film-based chamber-image processing in high-energy physics in building such a system is analysed. On the basis of this analysis, the main requirements that the compression algorithm has to fulfil are formulated. The most important requirement is to preserve the possibility of reprocessing the input data if problems occur in off-line recognition. Using a vector representation of the primary data, an on-line data-compression philosophy is proposed that embodies three principles: universality, parallelism, and input-data reconstructibility. Excluding the recognition procedure from the on-line compression algorithm reduces the compression factor. A hierarchical compression algorithm consisting of (1) sorting, (2) filtering, and (3) compression, to further increase the compression ratio, is proposed.

  14. Systematic approach in optimizing numerical memory-bound kernels on GPU

    KAUST Repository

    Abdelfattah, Ahmad; Keyes, David E.; Ltaief, Hatem

    2013-01-01

    memory-bound DLA kernels on GPUs, by taking advantage of the underlying device's architecture (e.g., high throughput). This methodology proved to outperform existing state-of-the-art GPU implementations for the symmetric matrix-vector multiplication (SYMV

  15. Better Faster Noise with the GPU

    DEFF Research Database (Denmark)

    Wyvill, Geoff; Frisvad, Jeppe Revall

    Filtered noise [Perlin 1985] has, for twenty years, been a fundamental tool for creating functional texture and it has many other applications; for example, animating water waves or the motion of grass waving in the wind. Perlin noise suffers from a number of defects and there have been many attempts to create better or faster noise, but Perlin’s ‘Gradient Noise’ has consistently proved to be the best compromise between speed and quality. Our objective was to create a better noise cheaply by use of the GPU.

  16. On the quantization of systems with anticommuting variables

    International Nuclear Information System (INIS)

    Casalbuoni, R.

    1976-01-01

    The paper considers pseudomechanics, that is, the mechanics of a system described by ordinary canonical variables and by Grassmann variables. The canonical formalism is studied and in particular the Poisson brackets are defined. It is shown that the algebra of the Poisson brackets is a graded Lie algebra. Using this fact as a hint for quantization, it is shown that the corresponding quantized theory is the ordinary quantum theory with Fermi operators. It follows that the classical limit of the quantum theory is, in general, pseudomechanics.
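    The graded structure mentioned in this abstract can be made concrete. For phase-space functions A, B, C of definite Grassmann parity |A|, |B|, |C| ∈ {0, 1}, the generalized Poisson bracket satisfies the following standard relations (sign conventions vary between references; these are one common choice, not necessarily the paper's):

```latex
% Graded antisymmetry of the generalized Poisson bracket
\{A, B\} = -(-1)^{|A|\,|B|}\,\{B, A\}

% Graded Jacobi identity
(-1)^{|A||C|}\{A,\{B,C\}\} + (-1)^{|B||A|}\{B,\{C,A\}\} + (-1)^{|C||B|}\{C,\{A,B\}\} = 0

% On quantization, brackets of odd (Grassmann) variables pass to anticommutators,
% which is how Fermi operators arise:
\{\theta_i, \theta_j\} \;\longrightarrow\; \frac{1}{i\hbar}\,[\hat\theta_i, \hat\theta_j]_{+}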

  17. Statistical motion vector analysis for object tracking in compressed video streams

    Science.gov (United States)

    Leny, Marc; Prêteux, Françoise; Nicholson, Didier

    2008-02-01

    Compressed video is the digital raw material provided by video-surveillance systems and used for archiving and indexing purposes. Multimedia standards therefore have a direct impact on such systems. If MPEG-2 used to be the coding standard, MPEG-4 (part 2) has now replaced it in most installations, and MPEG-4 AVC/H.264 solutions are now being released. Finely analysing the complex and rich MPEG-4 streams is the challenging issue addressed in this paper. The system we designed is based on five modules: low-resolution decoder, motion estimation generator, object motion filtering, low-resolution object segmentation, and cooperative decision. Our contributions include the statistical analysis of the spatial distribution of the motion vectors, the computation of DCT-based confidence maps, automatic motion activity detection in the compressed file, and a rough indexation by dedicated descriptors. The robustness and accuracy of the system are evaluated on a large corpus (hundreds of hours of in- and outdoor videos with pedestrians and vehicles). The objective benchmarking of the performances is achieved with respect to five metrics that estimate the error contribution of each module, for different implementations. This evaluation establishes that our system analyses up to 200 frames (720x288) per second (2.66 GHz CPU).
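    The statistical motion-vector analysis described above lends itself to a compact illustration. The sketch below flags a frame as containing motion activity when enough macroblock motion vectors exceed a magnitude threshold; the function name and both thresholds are our own illustrative choices, not values taken from the paper:

```python
import math

def motion_activity(vectors, magnitude_threshold=1.0, active_fraction=0.1):
    """Flag a frame as 'active' from its macroblock motion vectors.

    `vectors` is a list of (dx, dy) motion vectors, one per macroblock.
    A frame counts as active when the fraction of blocks whose vector
    magnitude exceeds `magnitude_threshold` is at least `active_fraction`.
    """
    if not vectors:
        return False
    moving = sum(1 for dx, dy in vectors
                 if math.hypot(dx, dy) > magnitude_threshold)
    return moving / len(vectors) >= active_fraction
```

In a real MPEG-4 stream the vectors would come from the partially decoded bitstream rather than being handed in as tuples.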

  18. High-Resolution Remotely Sensed Small Target Detection by Imitating Fly Visual Perception Mechanism

    Directory of Open Access Journals (Sweden)

    Fengchen Huang

    2012-01-01

    Full Text Available The difficulty and limitations of small target detection methods for high-resolution remote sensing data have recently become a research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages of fast and accurate small target detection in complex, varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and its nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  19. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    Science.gov (United States)

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulty and limitations of small target detection methods for high-resolution remote sensing data have recently become a research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages of fast and accurate small target detection in complex, varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and its nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  20. Cooperative Media Streaming Using Adaptive Network Compression

    DEFF Research Database (Denmark)

    Møller, Janus Heide; Sørensen, Jesper Hemming; Krigslund, Rasmus

    2008-01-01

    as an adaptive hybrid between LC and MDC. In order to facilitate the use of MDC-CC, a new overlay network approach is proposed, using tree of meshes. A control system for managing description distribution and compression in a small mesh is implemented in the discrete event simulator NS-2. The two traditional approaches, MDC and LC, are used as references for the performance evaluation of the proposed scheme. The system is simulated in a heterogeneous network environment, where packet errors are introduced. Moreover, a test is performed at different network loads. Performance gain is shown over both LC and MDC.

  1. Multi-directional self-ion irradiation of thin gold films: A new strategy for achieving full texture control

    International Nuclear Information System (INIS)

    Seita, Matteo; Muff, Daniel; Spolenak, Ralph

    2011-01-01

    Highlights: → Multi-directional self-ion bombardment of Au films. → Extensive selective grain growth leads to single crystal-like films. → Texture rotation is prevented by the multi-directional irradiation process. → Texture rotation rate depends on the film initial defect density. - Abstract: Post-deposition ion bombardment can be employed to convert polycrystalline films into single crystals through a process of selective grain growth. Here we report a new technique that enables selective grain growth in self-ion bombarded gold films - a system in which the formation of large single crystal domains was prevented by the occurrence of ion-induced texture rotation. Our findings suggest that the extent of the texture rotation is a function of the ion fluence and the film initial microstructure.

  2. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    A. Sreenivasa Murthy

    2014-11-01

    Full Text Available With the spurt in the amount of data (image, video, audio, speech, & text) available on the net, there is a huge demand for memory & bandwidth savings. One has to achieve this while maintaining the quality & fidelity of the data acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all wavelet transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement & yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal and Demeyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).
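    The two objective metrics named in this abstract, PSNR and compression ratio, are standard and can be computed as follows (a generic sketch; the function names are ours, and images are flattened to pixel lists for brevity):

```python
import math

def psnr(original, reconstructed, max_value=255):
    """Peak signal-to-noise ratio, in dB, between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Uncompressed size divided by compressed size (higher is better)."""
    return original_bytes / compressed_bytes
```

For example, a reconstruction that is off by one gray level everywhere scores about 48 dB against an 8-bit original.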

  3. The quantization of gravity

    CERN Document Server

    Gerhardt, Claus

    2018-01-01

    A unified quantum theory incorporating the four fundamental forces of nature is one of the major open problems in physics. The Standard Model combines electro-magnetism, the strong force and the weak force, but ignores gravity. The quantization of gravity is therefore a necessary first step to achieve a unified quantum theory. In this monograph a canonical quantization of gravity has been achieved by quantizing a geometric evolution equation resulting in a gravitational wave equation in a globally hyperbolic spacetime. Applying the technique of separation of variables we obtain eigenvalue problems for temporal and spatial self-adjoint operators where the temporal operator has a pure point spectrum with eigenvalues $\\lambda_i$ and related eigenfunctions, while, for the spatial operator, it is possible to find corresponding eigendistributions for each of the eigenvalues $\\lambda_i$, if the Cauchy hypersurface is asymptotically Euclidean or if the quantized spacetime is a black hole with a negative cosmological ...

  4. Quantization of dissipative systems - some irresponsible speculations

    International Nuclear Information System (INIS)

    Kochan, Denis

    2007-01-01

    The Newton-Lagrange equations of motion represent the fundamental law of mechanics. Their traditional Lagrangian and/or Hamiltonian precursors when available are essential in the context of quantization. However, there are situations that lack Lagrangian and/or Hamiltonian settings. This paper discusses classical and quantal dynamics of such systems and presents some irresponsible speculations by introducing a certain canonical two-form Ω. By its construction Ω embodies kinetic energy and forces acting within the system (not their potential). A new type of variational principle is introduced, where variation is performed over a set of 'umbilical surfaces' instead of system histories. It provides correct Newton-Lagrange equations of motion and something more. The quantization is inspired by the Feynman functional integral approach. The quintessence is to rearrange path integral into an ''umbilical world-sheet'' integral in accordance with the proposed variational principle. In the case of potential-generated forces, the new approach reduces to the standard quantum mechanics

  5. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    Science.gov (United States)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To cope, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods for intensity-based segmentation of 3D microscopy images; users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data, whose scale, translation, etc. can be set using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data about them. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can be used as classification features. A user can select the object to be analyzed; our tool displays the selected object in a new window, so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specification and configuration.
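    Otsu's method is one of the common automatic thresholding methods a tool like this might offer. The sketch below is our illustration of the technique on an intensity histogram (not necessarily the algorithm shipped in the authors' tool): it picks the gray level that maximizes between-class variance.

```python
def otsu_threshold(histogram):
    """Return the threshold maximizing between-class variance (Otsu's method).

    `histogram` is a list where histogram[i] counts voxels of intensity i.
    Intensities <= the returned threshold form the background class.
    """
    total = sum(histogram)
    weighted_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    weight_bg, sum_bg = 0, 0.0
    for t, h in enumerate(histogram):
        weight_bg += h
        sum_bg += t * h
        if weight_bg == 0:
            continue  # no background voxels yet
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break  # no foreground voxels left
        mean_bg = sum_bg / weight_bg
        mean_fg = (weighted_sum - sum_bg) / weight_fg
        between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

On a clearly bimodal histogram the threshold lands between the two modes.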

  6. Super Normal Vector for Human Activity Recognition with Depth Cameras.

    Science.gov (United States)

    Yang, Xiaodong; Tian, YingLi

    2017-05-01

    The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks, including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to the polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In extensive experiments, the proposed approach achieves performance superior to the state-of-the-art methods on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.

  7. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    Science.gov (United States)

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  8. On the quantization of sectorially Hamiltonian dissipative systems

    Energy Technology Data Exchange (ETDEWEB)

    Castagnino, M. [Instituto de Fisica de Rosario, 2000 Rosario (Argentina); Instituto de Astronomia y Fisica del Espacio, Casilla de Correos 67, Sucursal 28, 1428 Buenos Aires (Argentina); Gadella, M. [Instituto de Fisica de Rosario, 2000 Rosario (Argentina); Departamento de Fisica Teorica, Atomica y Optica, Facultad de Ciencias, Universidad de Valladolid, 47005 Valladolid (Spain)], E-mail: manuelgadella@yahoo.com.ar; Lara, L.P. [Instituto de Fisica de Rosario, 2000 Rosario (Argentina); Facultad Regional Rosario, UTN, 2000 Rosario (Argentina)

    2009-10-15

    We present a theoretical discussion showing that, although some dissipative systems may have a sectorial Hamiltonian description, this description does not allow for canonical quantization. However, a quantum Liouville counterpart of these systems is possible, although it is not unique.

  9. On the quantization of sectorially Hamiltonian dissipative systems

    International Nuclear Information System (INIS)

    Castagnino, M.; Gadella, M.; Lara, L.P.

    2009-01-01

    We present a theoretical discussion showing that, although some dissipative systems may have a sectorial Hamiltonian description, this description does not allow for canonical quantization. However, a quantum Liouville counterpart of these systems is possible, although it is not unique.

  10. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  11. Parallelization and checkpointing of GPU applications through program transformation

    Energy Technology Data Exchange (ETDEWEB)

    Solano-Quinde, Lizandro Damian [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that makes writing general-purpose applications for running on GPUs tractable have consolidated GPUs as an alternative for accelerating general purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) Industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running in multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with large application memory footprint. Parallelizing single-GPU applications has been approached by libraries that distribute the workload at runtime, however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems present new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. 
The goal of this work is to exploit higher levels of parallelism and

  12. A dynamical system approach to texel identification in regular textures

    NARCIS (Netherlands)

    Grigorescu, S.E.; Petkov, N.; Loncaric, S; Neri, A; Babic, H

    2003-01-01

    We propose a texture analysis method based on Rényi’s entropies. The method aims at identifying texels in regular textures by searching for the smallest window through which the minimum number of different visual patterns is observed when moving the window over a given texture. The experimental

  13. Boiler: lossy compression of RNA-seq alignments using coverage vectors.

    Science.gov (United States)

    Pritt, Jacob; Langmead, Ben

    2016-09-19

    We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
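    The central data structure this abstract describes, a genomic coverage vector, can be built from alignment intervals in linear time with a difference array. The sketch below is our illustration of the idea, not Boiler's actual code:

```python
def coverage_vector(alignments, genome_length):
    """Per-base read coverage from (start, end) alignment intervals.

    Uses a difference array so construction is O(reads + genome_length).
    Intervals are 0-based and half-open, as in BED-style coordinates.
    """
    diff = [0] * (genome_length + 1)
    for start, end in alignments:
        diff[start] += 1   # a read starts covering here
        diff[end] -= 1     # ...and stops covering here
    coverage, running = [], 0
    for delta in diff[:genome_length]:
        running += delta
        coverage.append(running)
    return coverage
```

Discarding per-read data and keeping only this vector (plus summary distributions) is what makes the compression lossy but compact.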

  14. Enhanced quantization particles, fields and gravity

    CERN Document Server

    Klauder, John R

    2015-01-01

    This pioneering book addresses the question: Are the standard procedures of canonical quantization fully satisfactory, or is there more to learn about assigning a proper quantum system to a given classical system? As shown in this book, the answer to this question is: The standard procedures of canonical quantization are not the whole story! This book offers alternative quantization procedures that complete the story of quantization. The initial chapters are designed to present the new procedures in a clear and simple manner for general readers. As is necessary, systems that exhibit acceptable results with conventional quantization lead to the same results when the new procedures are used for them. However, later chapters examine selected models that lead to unacceptable results when quantized conventionally. Fortunately, these same models lead to acceptable results when the new quantization procedures are used.

  15. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    Full Text Available With the development of wireless networks and the improvement of mobile device capability, video streaming is more and more widespread in such environments. Under the condition of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computation complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, attractive regions of video scenes are derived. The information-object- (IOB-) weighted rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.

  16. Towards Static Analysis of Policy-Based Self-adaptive Computing Systems

    DEFF Research Database (Denmark)

    Margheri, Andrea; Nielson, Hanne Riis; Nielson, Flemming

    2016-01-01

    For supporting the design of self-adaptive computing systems, the PSCEL language offers a principled approach that relies on declarative definitions of adaptation and authorisation policies enforced at runtime. Policies permit managing system components by regulating their interactions and by dynamically introducing new actions to accomplish task-oriented goals. However, the runtime evaluation of policies and their effects on system components make the prediction of system behaviour challenging. In this paper, we introduce the construction of a flow graph that statically points out the policy evaluations that can take place at runtime, and exploit it to analyse the effects of policy evaluations on the progress of system components.

  17. Texture evolution maps for upset deformation of body-centered cubic metals

    International Nuclear Information System (INIS)

    Lee, Myoung-Gyu; Wang, Jue; Anderson, Peter M.

    2007-01-01

    Texture evolution maps are used as a tool to visualize texture development during upset deformation in body-centered cubic metals. These maps reveal initial grain orientations that tend toward normal direction (ND)|| versus ND|| . To produce these maps, a finite element analysis (FEA) with a rate-dependent crystal plasticity constitutive relation for tantalum is used. A reference case having zero workpiece/die friction shows that ∼64% of randomly oriented grains rotate toward ND|| and ∼36% rotate toward ND|| . The maps show well-established trends that increasing strain rate sensitivity and decreasing latent-to-self hardening ratio reduce both and percentages, leading to more diffuse textures. Reducing operative slip systems from both {1 1 0}/ and {1 1 2}/ to just {1 1 0}/ has a mixed effect: it increases the percentage but decreases the percentage. Reducing the number of slip systems and increasing the number of FEA integration points per grain strengthen - texture bands that are observed experimentally

  18. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other.

    Science.gov (United States)

    Brooks, Kevin R; Mond, Jonathan M; Stevenson, Richard J; Stephen, Ian D

    2016-01-01

    Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the "thin ideal" has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of "self" and "other" are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one's own body following exposure to "thin" and to "fat" others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size.

  19. Response of two-band systems to a single-mode quantized field

    Science.gov (United States)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  20. Fractional quantization and the quantum hall effect

    International Nuclear Information System (INIS)

    Guerrero, J.; Calixto, M.; Aldaya, V.

    1998-01-01

    Quantization with constraints is considered in a group-theoretical framework, providing a precise characterization of the set of good operators, i.e., those preserving the constrained Hilbert space, in terms of the representation of the subgroup of constraints. This machinery is applied to the quantization of the torus as a symplectic manifold, obtaining that fractional quantum numbers are permitted, provided that we allow for vector-valued representations. The good operators turn out to be the Wilson loops and, for certain representations of the subgroup of constraints, the modular transformations. These results are applied to the Fractional Quantum Hall Effect, where interesting implications are derived.

  1. Surface inspection of flat products by means of texture analysis: on-line implementation using neural networks

    Science.gov (United States)

    Fernandez, Carlos; Platero, Carlos; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    This paper describes some texture-based techniques that can be applied to quality assessment of continuously produced flat products (metal strips, wooden surfaces, cork, textile products, ...). Since the most difficult task is that of inspecting product appearance, human-like inspection ability is required. A feature common to all these products is the presence of non-deterministic texture on their surfaces. Two main subjects are discussed: statistical techniques for both surface finishing determination and surface defect analysis, and real-time implementation for on-line inspection in high-speed applications. For surface finishing determination, a Gray Level Difference technique is presented that operates on low-resolution, i.e., non-zoomed, images. Defect analysis is performed by means of statistical texture analysis over defective portions of the surface. On-line implementation is accomplished by means of neural networks. When a defect arises, textural analysis is applied, which results in a data vector acting as the input of a neural net previously trained in a supervised way. This approach aims at on-line performance in automated visual inspection applications where texture is present on flat product surfaces.
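    A minimal sketch of gray-level-difference statistics of the kind used above for surface finishing determination. This is our simplification for a single displacement; the feature definitions are assumptions, not the authors' exact formulas:

```python
def gray_level_difference_features(image, dx=1, dy=0):
    """Mean absolute difference and contrast for one displacement (dx, dy).

    `image` is a 2D list of gray levels. The features are computed from the
    histogram of absolute differences between each pixel and its neighbor
    at offset (dx, dy).
    """
    rows, cols = len(image), len(image[0])
    diffs = [abs(image[r][c] - image[r + dy][c + dx])
             for r in range(rows - dy) for c in range(cols - dx)]
    mean_diff = sum(diffs) / len(diffs)             # low on smooth finishes
    contrast = sum(d * d for d in diffs) / len(diffs)  # penalizes large jumps
    return mean_diff, contrast
```

Such a feature vector, computed over a defective patch for several displacements, is what would feed the neural net.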

  2. Reversible Vector Ratchet Effect in Skyrmion Systems

    Science.gov (United States)

    Ma, Xiaoyu; Reichhardt, Charles; Reichhardt, Cynthia

    Magnetic skyrmions are topologically non-trivial spin textures found in several magnetic materials. Since their motion can be controlled using ultralow current densities, skyrmions are appealing for potential applications in spintronics as information carriers and processing devices. In this work, we studied the collective transport properties of driven skyrmions using a particle-like model with molecular dynamics (MD) simulation. Our results show that ac-driven skyrmions interacting with an asymmetric substrate provide a realization of a new class of ratchet system, which we call a vector ratchet, arising from the effect of the Magnus term on the skyrmion dynamics. In a vector ratchet, the dc motion induced by the ac drive can be described as a vector that can be rotated up to 360 degrees relative to the substrate asymmetry direction. This could represent a new method for controlling skyrmion motion for spintronic applications.

  3. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    Science.gov (United States)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever-increasing role in computer aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, the performance of texture features for CADx has lagged behind, primarily because the features of different polyp types are more similar. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified, each with a diagnosis confirmed through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, based on the area under the receiver operating characteristic curve, by up to 4.6%.
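
    The adaptive scaling idea above, bin edges chosen so that each gray level holds roughly the same number of voxels, amounts to quantile-based binning. A minimal sketch follows; the function names and the synthetic intensity data are illustrative, not from the paper.

```python
import numpy as np

def adaptive_bins(values, n_levels):
    """Bin edges chosen so each gray level holds ~equal numbers of voxels.

    Sketch of the histogram-based idea only; how the paper selects edges
    over the whole ROI dataset is an assumption here.
    """
    qs = np.linspace(0.0, 1.0, n_levels + 1)[1:-1]   # interior quantiles
    return np.quantile(values, qs)

def scale_to_levels(roi, edges):
    """Map intensities to gray levels 0..len(edges) via the adaptive edges."""
    return np.digitize(roi, edges)

rng = np.random.default_rng(0)
voxels = rng.normal(40.0, 300.0, 10_000)   # HU-like values, dense near the mean
edges = adaptive_bins(voxels, 8)
levels = scale_to_levels(voxels, edges)
counts = np.bincount(levels, minlength=8)  # roughly 1250 voxels per level
```

    Equal-spaced Hounsfield bins would instead leave the extreme levels nearly empty for such data, which is the contrast the abstract draws.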

  4. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.

  5. R-GPU : A reconfigurable GPU architecture

    NARCIS (Netherlands)

    van den Braak, G.J.; Corporaal, H.

    2016-01-01

    Over the last decade, Graphics Processing Unit (GPU) architectures have evolved from a fixed-function graphics pipeline to a programmable, energy-efficient compute accelerator for massively parallel applications. The compute power arises from the GPU's Single Instruction/Multiple Threads

  6. High-quality and interactive animations of 3D time-varying vector fields.

    Science.gov (United States)

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
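
    The particle-tracking stage described above, advecting seed particles along their path lines through a time-varying field, can be sketched with a standard RK4 integrator. The analytic rotation field below is a stand-in for sampled flow data, and no even-spacing control is attempted.

```python
import numpy as np

def advect(points, velocity, t, dt):
    """One RK4 step along the path lines of a (possibly time-varying) field.

    `velocity(points, t)` returns one velocity vector per particle; here it
    is an assumed analytic field, not interpolated simulation data.
    """
    k1 = velocity(points, t)
    k2 = velocity(points + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(points + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(points + dt * k3, t + dt)
    return points + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rotation(p, t):
    """Rigid rotation about the z-axis: v = (-y, x, 0)."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([-y, x, np.zeros_like(z)], axis=1)

pts = np.array([[1.0, 0.0, 0.0]])
for _ in range(100):                                   # one full revolution
    pts = advect(pts, rotation, 0.0, 2.0 * np.pi / 100)
```

    With 100 RK4 steps per revolution the particle returns almost exactly to its seed point, which is why such integrators are the usual choice for coherent path-line animation.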

  7. Separate processing of texture and form in the ventral stream: evidence from FMRI and visual agnosia.

    Science.gov (United States)

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-02-01

    Real-life visual object recognition requires the processing of more than just geometric (shape, size, and orientation) properties. Surface properties such as color and texture are equally important, particularly for providing information about the material properties of objects. Recent neuroimaging research suggests that geometric and surface properties are dealt with separately within the lateral occipital cortex (LOC) and the collateral sulcus (CoS), respectively. Here we compared objects that differed either in aspect ratio or in surface texture only, keeping all other visual properties constant. Results on brain-intact participants confirmed that surface texture activates an area in the posterior CoS, quite distinct from the area activated by shape within LOC. We also tested 2 patients with visual object agnosia, one of whom (DF) performed well on the texture task but at chance on the shape task, whereas the other (MS) showed the converse pattern. This behavioral double dissociation was matched by a parallel neuroimaging dissociation, with activation in CoS but not LOC in patient DF and activation in LOC but not CoS in patient MS. These data provide presumptive evidence that the areas respectively activated by shape and texture play a causally necessary role in the perceptual discrimination of these features.

  8. Adaptive Near-Optimal Multiuser Detection Using a Stochastic and Hysteretic Hopfield Net Receiver

    Directory of Open Access Journals (Sweden)

    Gábor Jeney

    2003-01-01

    Full Text Available This paper proposes a novel adaptive MUD algorithm for a wide variety (practically any kind) of interference limited systems, for example, code division multiple access (CDMA). The algorithm is based on recently developed neural network techniques and can perform near optimal detection in the case of unknown channel characteristics. The proposed algorithm consists of two main blocks: one estimates the symbols sent by the transmitters; the other identifies each channel of the corresponding communication links. The estimation of symbols is carried out either by a stochastic Hopfield net (SHN), by a hysteretic neural network (HyNN), or by both. The channel identification is based on either the self-organizing feature map (SOM) or learning vector quantization (LVQ). The combination of these two blocks yields a powerful real-time detector with near optimal performance. The performance is analyzed by extensive simulations.
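
    The LVQ block mentioned above rests on a simple supervised update rule: pull the nearest prototype toward a sample when their classes agree, push it away otherwise. A generic LVQ1 sketch follows; it is not the paper's receiver, and a toy two-class problem stands in for channel identification.

```python
import numpy as np

def lvq1_step(protos, labels, x, y, lr=0.05):
    """One LVQ1 update on the prototype set (modified in place)."""
    w = int(np.argmin(np.linalg.norm(protos - x, axis=1)))  # winning prototype
    sign = 1.0 if labels[w] == y else -1.0                  # attract or repel
    protos[w] += sign * lr * (x - protos[w])
    return w

rng = np.random.default_rng(1)
protos = np.array([[0.0, 0.0], [4.0, 4.0]])   # one prototype per class
labels = np.array([0, 1])
for _ in range(200):
    y = int(rng.integers(0, 2))
    center = np.array([1.0, 1.0]) if y == 0 else np.array([3.0, 3.0])
    x = center + rng.normal(0.0, 0.3, 2)      # noisy sample from class y
    lvq1_step(protos, labels, x, y)
```

    After a few hundred updates each prototype settles near its class mean, which is the behavior the channel-identification block exploits.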

  9. COMPRESS - a computerized reactor safety system

    International Nuclear Information System (INIS)

    Vegh, E.

    1986-01-01

    The computerized reactor safety system, called COMPRESS, provides the following services: scram initiation; safety interlockings; event recording. The paper describes the architecture of the system and deals with reliability problems. A self-testing unit permanently checks the correct operation of the independent decision units. Moreover, the decision units are tested with short pulses to verify that they can initiate a scram. The self-testing is described in detail

  10. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    Energy Technology Data Exchange (ETDEWEB)

    Salvadore, Francesco [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy); Bernardini, Matteo, E-mail: matteo.bernardini@uniroma1.it [Department of Mechanical and Aerospace Engineering, University of Rome ‘La Sapienza’ – via Eudossiana 18, 00184 Rome (Italy); Botti, Michela [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy)

    2013-02-15

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non trivial efforts, even to high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that a careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.

  11. Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS

    Science.gov (United States)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra A to a given configuration space Q, along with a prescription mapping self-adjoint elements of A into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here A is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of A. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of A.

  12. An Online System for Remote SHM Operation with Content Adaptive Signal Compression

    OpenAIRE

    Westerkamp , Clemens; Hennewig , Alexander; Speckmann , Holger; Bisle , Wolfgang; Colin , Nicolas; Rafrafi , Mona

    2014-01-01

    International audience; Remote engineering systems are valuable tools to give visual assistance and remote support e.g. in NDT (Non-destructive Testing) or SHM (Structural Health Monitoring). They allow discussing a second opinion with a remote expert and thus reducing the human factor during testing and monitoring. For an optimal impression of the situation, the second person requires both a camera view of the location and the screen view of the system used. The OMA system (Online Maintenanc...

  13. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    Science.gov (United States)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software, called Celeris. Celeris is an open source software which needs minimum preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  14. Body Image Distortion and Exposure to Extreme Body Types: Contingent Adaptation and Cross Adaptation for Self and Other

    Directory of Open Access Journals (Sweden)

    Kevin R. Brooks

    2016-07-01

    Full Text Available Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the thin ideal has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of self and other are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g. expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one’s own body following exposure to thin and to fat others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size.

  15. BRST operator quantization of generally covariant gauge systems

    International Nuclear Information System (INIS)

    Ferraro, R.; Sforza, D.M.

    1997-01-01

    The BRST generator is realized as a Hermitian nilpotent operator for a finite-dimensional gauge system featuring a quadratic super-Hamiltonian and linear supermomentum constraints. As a result, the emerging ordering for the Hamiltonian constraint is not trivial, because the potential must enter the kinetic term in order to obtain a quantization invariant under scaling. Namely, BRST quantization does not lead to the curvature term used in the literature as a means to get that invariance. The inclusion of the potential in the kinetic term, far from being unnatural, is beautifully justified in light of the Jacobi's principle. copyright 1997 The American Physical Society

  16. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, as the imaging rate outperforms the raw data generation rate.

  17. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, as the imaging rate outperforms the raw data generation rate.

  18. 11th Workshop on Self-Organizing Maps

    CERN Document Server

    Mendenhall, Michael; O'Driscoll, Patrick

    2016-01-01

    This book contains the articles from the international conference 11th Workshop on Self-Organizing Maps 2016 (WSOM 2016), held at Rice University in Houston, Texas, 6-8 January 2016. WSOM is a biennial international conference series starting with WSOM'97 in Helsinki, Finland, under the guidance and direction of Professor Teuvo Kohonen (Emeritus Professor, Academy of Finland). WSOM brings together the state-of-the-art theory and applications in Competitive Learning Neural Networks: SOMs, LVQs and related paradigms of unsupervised and supervised vector quantization. The current proceedings present the expert body of knowledge of 93 authors from 15 countries in 31 peer reviewed contributions. It includes papers and abstracts from the WSOM 2016 invited speakers representing leading researchers in the theory and real-world applications of Self-Organizing Maps and Learning Vector Quantization: Professor Marie Cottrell (Universite Paris 1 Pantheon Sorbonne, France), Professor Pablo Estevez (University of Chile and ...

  19. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
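
    The pipeline named in this abstract (DCT, thresholding, quantization, reconstruction) can be illustrated on a single 8×8 block; here quantization is folded into a hard threshold for brevity, and the 25% keep ratio is an arbitrary choice, not taken from the presentation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: coefficients = C @ x, inverse = C.T @ X."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)               # DC row rescaled for orthonormality
    return C

def compress_block(block, keep=0.25):
    """2-D DCT, zero the smallest coefficients, inverse 2-D DCT."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T                        # forward 2-D DCT-II
    cut = np.quantile(np.abs(coef), 1.0 - keep)   # keep largest `keep` fraction
    coef = np.where(np.abs(coef) >= cut, coef, 0.0)
    return C.T @ coef @ C                         # inverse transform

x = np.linspace(0.0, 1.0, 8)
block = np.add.outer(x, x)           # smooth block: energy in few coefficients
rec = compress_block(block, keep=0.25)
err = float(np.max(np.abs(rec - block)))
```

    Because the block is smooth, almost all its energy sits in a handful of low-frequency coefficients, so discarding 75% of them barely changes the reconstruction; that concentration of energy is what makes DCT-based compression work.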

  20. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    Science.gov (United States)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found to yield high FER rates. Evaluation shows that the adaptive texture features perform better than the nonadaptive features and are competitive with other state-of-the-art approaches.
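
    The CS-LBP features discussed above compare opposing neighbour pairs rather than each neighbour against the center pixel, producing a compact 4-bit code per pixel. A minimal fixed-radius sketch follows, without the paper's granulometry-driven neighbourhood sizing.

```python
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Center-symmetric LBP over the 8-neighbourhood at radius 1.

    Each interior pixel gets a 4-bit code (0..15) from its four opposing
    neighbour pairs; a histogram of the codes is the texture descriptor.
    """
    i = img.astype(float)
    # Neighbour offsets listed so that pairs (0,4), (1,5), (2,6), (3,7) oppose.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    n = [i[1 + dy: i.shape[0] - 1 + dy, 1 + dx: i.shape[1] - 1 + dx]
         for dy, dx in offs]
    code = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    for b in range(4):
        code |= ((n[b] - n[b + 4]) > threshold).astype(np.uint8) << b
    return code

flat_codes = cs_lbp(np.zeros((5, 5)))                     # uniform -> all zeros
ramp = np.tile(np.arange(5, dtype=float), (5, 1))         # horizontal gradient
ramp_codes = cs_lbp(ramp)
hist = np.bincount(ramp_codes.ravel(), minlength=16)      # 16-bin feature vector
```

    Note the 16-bin histogram, against 256 bins for plain 8-neighbour LBP: this dimensionality reduction is the usual argument for the center-symmetric variant.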

  1. Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations

    Science.gov (United States)

    Bernaschi, M.; Bisson, M.; Salvadore, F.

    2014-10-01

    We present and compare the performances of two many-core architectures: the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the Over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture: the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of an Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain a reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performance of the single system. As for the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good by using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.

  2. SUBSURFACE VISUAL ALARM SYSTEM ANALYSIS

    International Nuclear Information System (INIS)

    D.W. Markman

    2001-01-01

    of difficulty and complexity in determining requirements in adapting existing data communication highways to support the subsurface visual alarm system. These requirements would include such things as added or new communication cables, added Programmable Logic Controller (PLC), Inputs and Outputs (I/O), and communication hardware components, and human machine interfaces and their software operating system. (4) Select the best data communication highway system based on this review of adapting or integrating with existing data communication systems

  3. SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T; Kim, D; Kang, S; Cho, M; Kim, K; Shin, D; Noh, Y; Suh, T [Department of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kim, S [Virginia Commonwealth University, Richmond, VA (United States)

    2016-06-15

    Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify abdominal compression force (ACF) and maintain the proper ACF through the whole procedure. In addition, even with compression, it is necessary to do 4D CT to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual-biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon in the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate, thus placed between the plate and the patient when compression is applied, and detects compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual-biofeedback. Results: We could monitor ACF variation in real time and confirmed that the data was correlated with both respiratory flow data and external respiratory signal. Even under abdominal compression, in addition, it was possible to make the subjects successfully follow the guide patterns using the visual biofeedback system. Conclusion: The developed real time ACF monitoring system was found to be functional as intended and consistent. With the capability of both providing real time surrogating signal under compression and enabling visual-biofeedback, it is considered that the system would improve the quality of respiratory motion management in radiation

  4. SU-F-T-91: Development of Real Time Abdominal Compression Force (ACF) Monitoring System

    International Nuclear Information System (INIS)

    Kim, T; Kim, D; Kang, S; Cho, M; Kim, K; Shin, D; Noh, Y; Suh, T; Kim, S

    2016-01-01

    Purpose: Hard-plate based abdominal compression is known to be effective, but no explicit method exists to quantify abdominal compression force (ACF) and maintain the proper ACF through the whole procedure. In addition, even with compression, it is necessary to do 4D CT to manage residual motion, but 4D CT is often not possible due to reduced surrogating sensitivity. In this study, we developed and evaluated a system that both monitors ACF in real time and provides a surrogating signal even under compression. The system can also provide visual-biofeedback. Methods: The system developed consists of a compression plate, an ACF monitoring unit and a visual-biofeedback device. The ACF monitoring unit contains a thin air balloon in the size of the compression plate and a gas pressure sensor. The unit is attached to the bottom of the plate, thus placed between the plate and the patient when compression is applied, and detects compression pressure. For the reliability test, 3 volunteers were directed to take several different breathing patterns, and the ACF variation was compared with the respiratory flow and external respiratory signal to assure that the system provides corresponding behavior. In addition, guiding waveforms were generated based on free breathing and then applied to evaluate the effectiveness of visual-biofeedback. Results: We could monitor ACF variation in real time and confirmed that the data was correlated with both respiratory flow data and external respiratory signal. Even under abdominal compression, in addition, it was possible to make the subjects successfully follow the guide patterns using the visual biofeedback system. Conclusion: The developed real time ACF monitoring system was found to be functional as intended and consistent. With the capability of both providing real time surrogating signal under compression and enabling visual-biofeedback, it is considered that the system would improve the quality of respiratory motion management in radiation

  5. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    International Nuclear Information System (INIS)

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B.; Peng, Fei

    2015-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
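
    The host-side storage step described above, building the sparse DDC matrix in coordinate list (COO) format on the CPU and handing compressed sparse row (CSR) arrays to the GPUs, can be sketched as follows. The tiny triplet list is illustrative, and the four-way per-angle split onto separate GPUs is omitted.

```python
import numpy as np

def coo_to_csr(rows, cols, vals, n_rows):
    """Convert COO triplets to CSR arrays (indptr, indices, data).

    A generic sketch of the conversion only; the paper's per-beam-angle
    partitioning across four GPUs is not reproduced here.
    """
    order = np.argsort(rows, kind="stable")   # group entries by row
    r, c, v = rows[order], cols[order], vals[order]
    indptr = np.zeros(n_rows + 1, dtype=np.int64)
    np.add.at(indptr, r + 1, 1)               # per-row entry counts
    indptr = np.cumsum(indptr)                # row k spans indptr[k]:indptr[k+1]
    return indptr, c, v

# Toy DDC fragment: (row=voxel, col=beamlet, value=dose coefficient).
rows = np.array([0, 2, 1, 0])
cols = np.array([1, 0, 2, 3])
vals = np.array([10.0, 20.0, 30.0, 40.0])
indptr, indices, data = coo_to_csr(rows, cols, vals, 3)
```

    CSR is the natural device-side layout here because a row (one voxel's beamlet coefficients) becomes a contiguous slice, which is what the beamlet-price kernels stream over.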

  6. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun; Jia, Xun, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Jiang, Steve B., E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States); Peng, Fei [Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of beamlet prices, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is
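A minimal sketch of the matrix-partitioning step described above: a sparse DDC matrix held in COO format is split column-wise (beamlets grouped by beam angle) into CSR submatrices, one per GPU. scipy's sparse types stand in for the GPU-side storage; all sizes, the random matrix, and the equal-width column split are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy DDC matrix: voxels x beamlets, stored in COO as on the CPU side.
rng = np.random.default_rng(0)
n_voxels, n_beamlets = 1000, 400
rows = rng.integers(0, n_voxels, size=5000)
cols = rng.integers(0, n_beamlets, size=5000)
vals = rng.random(5000)
ddc = coo_matrix((vals, (rows, cols)), shape=(n_voxels, n_beamlets))

def split_by_beamlet_blocks(matrix, n_parts):
    """Split columns (beamlets grouped by beam angle) into n_parts CSR blocks."""
    csr = matrix.tocsr()  # duplicates in COO are summed during conversion
    bounds = np.linspace(0, matrix.shape[1], n_parts + 1, dtype=int)
    return [csr[:, bounds[i]:bounds[i + 1]] for i in range(n_parts)]

blocks = split_by_beamlet_blocks(ddc, 4)  # one block per GPU
```

Each block can then be transferred to its own device; the column split keeps every beamlet's dose column on a single GPU, so the per-beamlet price computation needs no cross-device reduction.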

  7. Vector mesons on the light front

    International Nuclear Information System (INIS)

    Naito, K.; Maedan, S.; Itakura, K.

    2004-01-01

    We apply light-front quantization to the Nambu-Jona-Lasinio model with the vector interaction, and compute the vector meson's mass and light-cone wavefunction in the large N limit. Following the same procedure as in the previous analyses for scalar and pseudo-scalar mesons, we derive the bound-state equations of a qq-bar system in the vector channel. We include the lowest order effects of the vector interaction. The resulting transverse and longitudinal components of the bound-state equation look different from each other, but eventually, after imposing an appropriate cutoff, one finds that these two are identical, giving the same mass and the same (spin-independent) light-cone wavefunction. The mass of the vector meson decreases as one increases the strength of the vector interaction.

  8. Multispectral data compression through transform coding and block quantization

    Science.gov (United States)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
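A compact sketch of block transform coding with uniform quantization, the scheme compared across encoders above. It uses the Hadamard transform (one of the three encoders mentioned) on 8x8 blocks; the step size and the synthetic ramp image are illustrative assumptions.

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix of size n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def encode_decode(img, step=8.0, bs=8):
    """Transform each bs x bs block, quantize coefficients uniformly, reconstruct."""
    H = hadamard(bs)
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = img[i:i+bs, j:j+bs].astype(float)
            coeff = H @ blk @ H.T                       # forward transform
            q = np.round(coeff / step)                  # uniform (block) quantization
            out[i:i+bs, j:j+bs] = H.T @ (q * step) @ H  # dequantize + inverse transform
    return out

img = np.tile(np.arange(64, dtype=float), (64, 1))  # smooth horizontal ramp
rec = encode_decode(img)
```

Because the transform is orthonormal, the per-coefficient quantization error (at most step/2) bounds the reconstruction error, which is why transform coding concentrates distortion gracefully on smooth imagery.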

  9. IDENTIFIKASI IRIS MATA MENGGUNAKAN TAPIS GABOR WAVELET DAN JARINGAN SYARAF TIRUAN LEARNING VECTOR QUANTIZATION (LVQ

    Directory of Open Access Journals (Sweden)

    Budi Setiyono

    2012-02-01

    Full Text Available Biometrics is the development of human identification methods based on the natural characteristics of human beings. Every iris has a detailed and unique texture, differing even between the right and left eye. The iris identification process in this research comprises data acquisition, preprocessing, feature extraction, and classification. Gabor wavelet filtering is used to extract texture features from the iris, and an artificial neural network of the learning vector quantization (LVQ) type classifies them. Recognition performance is measured as the percentage of feature vectors recognized correctly; the best recognition rate achieved is 87.5%.
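An illustrative sketch of the LVQ1 learning rule behind the classifier named above: the nearest prototype is pulled toward a sample with a matching label and pushed away otherwise. The 2-D toy data, learning rate, and prototype initialization are assumptions for demonstration, not the paper's iris features.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """One-winner LVQ1: move the closest prototype toward/away from each sample."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # winning prototype
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    return np.array([proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]
                     for x in X])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
P0 = np.array([[0.5, 0.5], [1.5, 1.5]])  # rough initial codebook
P = lvq1_train(X, y, P0, [0, 1])
acc = np.mean(lvq1_predict(X, P, [0, 1]) == y)
```

In the iris pipeline the 2-D points would be replaced by Gabor-filter feature vectors, one prototype (or several) per enrolled iris class.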

  10. Adaptation of saccadic sequences with and without remapping

    Directory of Open Access Journals (Sweden)

    Delphine Lévy-Bencheton

    2016-07-01

    Full Text Available It is relatively easy to adapt visually-guided saccades because the visual vector and the saccade vector match. The retinal error at the saccade landing position is compared to the prediction error, based on target location and efference copy. If these errors do not match, planning processes at the level(s) of visual and/or motor vector processing are assumed to be inaccurate and the saccadic response is adjusted. In the case of a sequence of two saccades, the final error can be attributed to the last saccade vector or to the entire saccadic displacement. Here, we asked whether and how adaptation can occur in the case of remapped saccades, such as during the classic double-step saccade paradigm, where the visual and motor vectors of the second saccade do not coincide and so the attribution of error is ambiguous. Participants performed saccade sequences to two targets briefly presented prior to first saccade onset. The second saccade target was either briefly re-illuminated (visually-guided paradigm) or not (remapping paradigm) upon first saccade offset. To drive adaptation, the second target was presented at a displaced location (backward or forward jump conditions, or a no-jump control) at the end of the second saccade. Pre- and post-adaptation trials were identical, without the re-appearance of the target after the second saccade. For the 1st saccade endpoints, there was no change as a function of adaptation. For the 2nd saccade, there was a similar increase in gain in the forward jump condition (52% and 61% of target jump in the two paradigms), whereas the gain decrease in the backward condition was much smaller for the remapping paradigm than for the visually-guided paradigm (41% vs. 94%). In other words, the absolute gain change was similar between backward and forward adaptation for remapped saccades. In conclusion, we show that remapped saccades can be adapted, suggesting that the error is attributed to the visuo-motor transformation of

  11. Adaptive optics without altering visual perception.

    Science.gov (United States)

    Koenig, D E; Hart, N W; Hofer, H J

    2014-04-01

    Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer wavelength (980nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Evolutionary adaptations: theoretical and practical implications for visual ergonomics.

    Science.gov (United States)

    Fostervold, Knut Inge; Watten, Reidulf G; Volden, Frode

    2014-01-01

    The literature discussing visual ergonomics often mentions that human vision is adapted to light emitted by the sun. However, the theoretical and practical implications of this viewpoint are seldom discussed or taken into account. The paper discusses some of the main theoretical implications of an evolutionary approach to visual ergonomics. Based on interactional theory and ideas from ecological psychology, an evolutionary stress model is proposed as a theoretical framework for future research in ergonomics and human factors. The model stresses the importance of developing work environments that fit with our evolutionary adaptations. In accordance with evolutionary psychology, the environment of evolutionary adaptedness (EEA) and evolutionarily novel environments (EN) are used as key concepts. Using work with visual display units (VDU) as an example, the paper discusses how this knowledge can be utilized in an ergonomic analysis of risk factors in the work environment. The paper emphasises the importance of incorporating evolutionary theory in the field of ergonomics. Further, the paper encourages scientific practices that further our understanding of any phenomenon beyond the borders of traditional proximal explanations.

  13. Edge compression techniques for visualization of dense directed graphs.

    Science.gov (United States)

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules' (groups of nodes) such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition which permits internal structure in modules and allows them to be nested; and Power Graph Analysis which further allows edges to cross module boundaries. These techniques all have the same goal--to compress the set of edges that need to be rendered to fully convey connectivity--but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by--and discuss in particular--the application to software dependency analysis.
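A sketch of the simplest of the three techniques above: grouping nodes of a directed graph whose incoming and outgoing neighbor sets are identical, so each group's edges can be drawn once. The toy graph is an illustrative assumption.

```python
from collections import defaultdict

def group_identical_neighbors(edges, nodes):
    """Group nodes sharing identical (out-neighbor, in-neighbor) sets."""
    outs, ins = defaultdict(set), defaultdict(set)
    for u, v in edges:
        outs[u].add(v)
        ins[v].add(u)
    groups = defaultdict(list)
    for n in nodes:
        key = (frozenset(outs[n]), frozenset(ins[n]))  # signature of connectivity
        groups[key].append(n)
    return [sorted(g) for g in groups.values()]

# a and b both point to {x, y}, so they collapse into one module.
edges = [("a", "x"), ("b", "x"), ("a", "y"), ("b", "y"), ("c", "x")]
modules = group_identical_neighbors(edges, "abcxy")
```

Modular Decomposition and Power Graph Analysis relax this exact-signature requirement, which is why they compress more edges at the cost of harder-to-read diagrams.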

  14. Computational identification of adaptive mutants using the VERT system

    Directory of Open Access Journals (Sweden)

    Winkler James

    2012-04-01

    Full Text Available Background Evolutionary dynamics of microbial organisms can now be visualized using the Visualizing Evolution in Real Time (VERT system, in which several isogenic strains expressing different fluorescent proteins compete during adaptive evolution and are tracked using fluorescent cell sorting to construct a population history over time. Mutations conferring enhanced growth rates can be detected by observing changes in the fluorescent population proportions. Results Using data obtained from several VERT experiments, we construct a hidden Markov-derived model to detect these adaptive events in VERT experiments without external intervention beyond initial training. Analysis of annotated data revealed that the model achieves consensus with human annotation for 85-93% of the data points when detecting adaptive events. A method to determine the optimal time point to isolate adaptive mutants is also introduced. Conclusions The developed model offers a new way to monitor adaptive evolution experiments without the need for external intervention, thereby simplifying adaptive evolution efforts relying on population tracking. Future efforts to construct a fully automated system to isolate adaptive mutants may find the algorithm a useful tool.

  15. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure

    KAUST Repository

    Labschutz, Matthias

    2015-08-12

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  16. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure

    KAUST Repository

    Labschutz, Matthias; Bruckner, Stefan; Groller, M. Eduard; Hadwiger, Markus; Rautek, Peter

    2015-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  17. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    Science.gov (United States)

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
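A toy sketch of the per-region decision a hybrid sparse volume structure makes: each brick of the volume is stored dense, as a coordinate list, or as a single constant, depending on its local sparsity. The brick size, threshold, and format names are illustrative assumptions, not JiTTree's actual node types.

```python
import numpy as np

def choose_brick_format(brick, sparse_threshold=0.1):
    """Pick a storage format for one brick based on its fraction of non-zeros."""
    nz = np.count_nonzero(brick)
    if nz == 0:
        return ("constant", 0.0)                 # empty space: store one value
    if nz / brick.size < sparse_threshold:
        idx = np.nonzero(brick)
        return ("coords", list(zip(*idx, brick[idx])))  # (i, j, k, value) tuples
    return ("dense", brick)                      # mostly full: keep the raw array

vol = np.zeros((4, 4, 4))
vol[0, 0, 0] = 1.0                               # a nearly empty brick
fmt, payload = choose_brick_format(vol)
```

The traversal overhead the paper targets comes from branching on such per-brick formats at render time; JIT-compiling a traversal function specialized to the chosen layout removes those branches.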

  18. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  19. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but it utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
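A minimal sketch of the least-squares prediction step described above: fit predictor weights over causal neighbors (west, north, north-west) for a region, then take the prediction residuals that would be entropy-coded. The three-neighbor template and the synthetic ramp image are illustrative assumptions; the paper designs such predictors per segmented region.

```python
import numpy as np

def ls_predict(img):
    """Fit LS weights over causal neighbors and return (weights, residuals)."""
    h, w = img.shape
    rows, targets = [], []
    for i in range(1, h):
        for j in range(1, w):
            # causal context: west, north, north-west pixels
            rows.append([img[i, j-1], img[i-1, j], img[i-1, j-1]])
            targets.append(img[i, j])
    A = np.array(rows, dtype=float)
    b = np.array(targets, dtype=float)
    wts, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares fit
    return wts, b - A @ wts                      # residuals to be coded

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth diagonal ramp
wts, resid = ls_predict(img)
```

On a perfectly planar region the residuals vanish, which is the intuition for fitting separate predictors per region: each region's local structure makes its residuals small and cheap to code losslessly.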

  20. Semi-classical derivation of charge-quantization through charge-field self-interaction

    International Nuclear Information System (INIS)

    Kosok, M.; Madhyastha, V.L.

    1990-01-01

    A semi-classical synthesis of classical mechanics, wave mechanics, and special relativity yields a unique nonlinear energy-wave structure of relations (the velocity triad uv = c^2) fundamental to modern physics. Through the above vehicle, using Maxwell's equations, charge quantization and the fine structure constant are derived. It is shown that the numerical value of the nonlinear charge-field self-interaction range for the electron is of the order of 10^-13 m, which is greater than the classical electron radius but less than the Compton wavelength of the electron. Finally, it is suggested that the structure of the electron-in-space is expressed by a self-extending nonlinear ''fractal geometry'' based on derived numerical values obtained from our model, thus opening this presentation of charge-field structure to experimental testing for possible verification.

  1. Quantum Computing and Second Quantization

    International Nuclear Information System (INIS)

    Makaruk, Hanna Ewa

    2017-01-01

    Quantum computers are by their nature many particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of the second quantization, and discuss shortly some of the most important formulations of second quantization.

  2. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    Science.gov (United States)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26 with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008) p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012) p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) designs and can be integrated as an intellectual property (IP) core in, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
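A hedged sketch of the core idea named above: an adaptive linear predictor whose weights are updated with the sign algorithm (sign-LMS), so each update needs only a sign test, a shift-scale, and adds. The signal, filter order, and step size are illustrative assumptions, not the FL compressor's actual design.

```python
import numpy as np

def sign_lms_residuals(x, order=3, mu=0.01):
    """Predict each sample from its `order` predecessors; adapt weights
    with the sign algorithm and return the prediction residuals."""
    w = np.zeros(order)
    resid = []
    for n in range(order, len(x)):
        window = x[n-order:n][::-1]       # most recent samples first
        e = x[n] - w @ window             # prediction residual (would be coded)
        w += mu * np.sign(e) * window     # sign-algorithm weight update
        resid.append(e)
    return np.array(resid)

x = np.sin(np.linspace(0, 20, 500))       # smooth stand-in for sensor samples
resid = sign_lms_residuals(x)
early = np.mean(resid[:100] ** 2)         # residual energy before adaptation
late = np.mean(resid[-100:] ** 2)         # residual energy after adaptation
```

As the predictor adapts, residual energy drops, and small residuals are what make the subsequent entropy coding effective; the hardware appeal of the sign update is that it avoids the multiply of full LMS.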

  3. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without the compression has been in clinical operation in the Pediatric Radiology department for more than half a year. The database in the system covers all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16-megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988

  4. The BRST formalism and the quantization of hamiltonian systems with first class constraints

    International Nuclear Information System (INIS)

    Gamboa, J.; Rivelles, V.O.

    1989-04-01

    The quantization of Hamiltonian systems with first class constraints using the BFV formalism is studied. Two examples, the quantization of the relativistic particle and the relativistic spinning particle, are worked out in detail, showing that the BFV formalism is a powerful method for quantizing theories with gauge freedom. Several points not discussed in the literature are pointed out, and the correct expression for the Feynman propagator in both cases is found. (L.C.)

  5. Computer-aided diagnosis with textural features for breast lesions in sonograms.

    Science.gov (United States)

    Chen, Dar-Ren; Huang, Yu-Len; Lin, Sheng-Hsiung

    2011-04-01

    Computer-aided diagnosis (CAD) systems provide a beneficial second reference and enhance diagnostic accuracy. This paper aimed to develop and evaluate a CAD system using texture analysis for the classification of breast tumors in ultrasound images. The ultrasound (US) dataset evaluated in this study comprised 1020 sonograms of region-of-interest (ROI) subimages from 255 patients. Two-view sonograms (longitudinal and transverse views) and four different rectangular regions were utilized to analyze each tumor. Six practical textural features from the US images were used to classify breast tumors as benign or malignant. However, the textural features form a high-dimensional vector, which is unfavorable for differentiating breast tumors in practice. Principal component analysis (PCA) was used to reduce the dimension of the textural feature vector, and an image retrieval technique was then applied to differentiate between benign and malignant tumors. In the experiments, all cases were sampled with k-fold cross-validation (k=10) to evaluate the performance with the receiver operating characteristic (ROC) curve. The area (A(Z)) under the ROC curve for the proposed CAD system with the specific textural features was 0.925±0.019. The classification ability for breast tumors with textural information is satisfactory. This system differentiates benign from malignant breast tumors with good results and is therefore clinically useful in providing a second opinion. Copyright © 2010 Elsevier Ltd. All rights reserved.
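A minimal sketch of the PCA step described above: center a matrix of high-dimensional feature vectors and project it onto its leading principal components via the SVD. The synthetic 24-dimensional "texture features" with two underlying factors are an illustrative assumption.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (samples x features) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # scores on top-k components

rng = np.random.default_rng(2)
latent = rng.normal(size=(100, 2))                   # 2 true underlying factors
mix = rng.normal(size=(2, 24))                       # lifted into 24-D "features"
X = latent @ mix + 0.01 * rng.normal(size=(100, 24))
Z = pca_reduce(X, 2)                                 # reduced feature vectors
```

The reduced scores are what a retrieval or classification stage would then compare, sidestepping the curse of dimensionality the abstract points to.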

  6. On quantization of relativistic string theory

    International Nuclear Information System (INIS)

    Isaev, A.P.

    1982-01-01

    Quantization of the relativistic string theory based on methods for quantizing constrained Hamiltonian systems is considered. Connections between this approach and Polyakov's quantization are examined. A new representation of the loop heat kernel is obtained.

  7. Stochastic quantization for the axial model

    International Nuclear Information System (INIS)

    Farina, C.; Montani, H.; Albuquerque, L.C.

    1991-01-01

    We use bosonization ideas to solve the axial model in the stochastic quantization framework. We obtain the fermion propagator of the theory decoupling directly the Langevin equation, instead of the Fokker-Planck equation. In the Appendix we calculate explicitly the anomalous divergence of the axial-vector current by using a regularization that does not break the Markovian character of the stochastic process

  8. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  9. Role of the precuneus in the detection of incongruency between tactile and visual texture information: A functional MRI study.

    Science.gov (United States)

    Kitada, Ryo; Sasaki, Akihiro T; Okamoto, Yuko; Kochiyama, Takanori; Sadato, Norihiro

    2014-11-01

    Visual clues as to the physical substance of manufactured objects can be misleading. For example, a plastic ring can appear to be made of gold. However, we can avoid misidentifying an object's substance by comparing visual and tactile information. As compared to the spatial properties of an object (e.g., orientation), however, little information regarding physical object properties (material properties) is shared between vision and touch. How can such different kinds of information be compared in the brain? One possibility is that the visuo-tactile comparison of material information is mediated by associations that are previously learned between the two modalities. Previous studies suggest that a cortical network involving the medial temporal lobe and precuneus plays a critical role in the retrieval of information from long-term memory. Here, we used functional magnetic resonance imaging (fMRI) to test whether these brain regions are involved in the visuo-tactile comparison of material properties. The stimuli consisted of surfaces in which an oriented plastic bar was placed on a background texture. Twenty-two healthy participants determined whether the orientations of visually- and tactually-presented bar stimuli were congruent in the orientation conditions, and whether visually- and tactually-presented background textures were congruent in the texture conditions. The texture conditions revealed greater activation of the fusiform gyrus, medial temporal lobe and lateral prefrontal cortex compared with the orientation conditions. In the texture conditions, the precuneus showed greater response to incongruent stimuli than to congruent stimuli. This incongruency effect was greater for the texture conditions than for the orientation conditions. These results suggest that the precuneus is involved in detecting incongruency between tactile and visual texture information in concert with the medial temporal lobe, which is tightly linked with long-term memory. Copyright

  10. Real-time Deformation of Detailed Geometry Based on Mappings to a Less Detailed Physical Simulation on the GPU

    DEFF Research Database (Denmark)

    Mosegaard, Jesper; Sørensen, Thomas Sangild

    2005-01-01

    Modern graphics processing units (GPUs) can be effectively used to solve physical systems. To use the GPU optimally, the discretization of the physical system is often restricted to a regular grid. When grid values represent spatial positions, a direct visualization can result in a jagged appearance. In this paper we propose to decouple computation and visualization of such systems. We define mappings that enable the deformation of a high-resolution surface based on a physical simulation on a lower resolution uniform grid. More specifically we investigate new approaches for the visualization of a GPU based...
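    The decoupling described above - simulate on a coarse regular grid, then deform a detailed surface through a mapping - can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the `(H, W, 2)` grid layout and the rest-pose mapping are assumptions, and a real version would run on the GPU rather than in NumPy.

```python
import numpy as np

def bilinear_displacement(disp_grid, u, v):
    """Interpolate a displacement vector from a coarse 2-D simulation grid.

    disp_grid: (H, W, 2) array of displacement vectors at grid nodes.
    u, v: continuous grid coordinates of a surface vertex,
          0 <= u <= W-1 and 0 <= v <= H-1.
    """
    H, W, _ = disp_grid.shape
    i0, j0 = int(np.floor(v)), int(np.floor(u))
    i1, j1 = min(i0 + 1, H - 1), min(j0 + 1, W - 1)
    fv, fu = v - i0, u - j0
    top = (1 - fu) * disp_grid[i0, j0] + fu * disp_grid[i0, j1]
    bot = (1 - fu) * disp_grid[i1, j0] + fu * disp_grid[i1, j1]
    return (1 - fv) * top + fv * bot

def deform_surface(vertices, rest_mapping, disp_grid):
    """Displace each high-resolution vertex by the interpolated coarse-grid deformation.

    rest_mapping[k] gives the (u, v) grid coordinates assigned to vertex k
    at rest; the mapping is fixed, only disp_grid changes per time step.
    """
    return np.array([vertices[k] + bilinear_displacement(disp_grid, *rest_mapping[k])
                     for k in range(len(vertices))])
```

    Because the mapping is evaluated independently per vertex, it parallelizes naturally on the GPU, which is the point of the approach.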

  11. A Conceptual Framework for Planning Systemic Human Adaptation to Global Warming.

    Science.gov (United States)

    Tait, Peter W; Hanna, Elizabeth G

    2015-08-31

    Human activity is having multiple, inter-related effects on ecosystems. Greenhouse gas emissions persisting along current trajectories threaten to significantly alter human society. At 0.85 °C of anthropogenic warming, deleterious human impacts are acutely evident. Additional warming of 0.5 °C-1.0 °C from already emitted CO₂ will further intensify extreme heat and damaging storm events. Failing to sufficiently address this trend will take a heavy toll on human health, both directly and indirectly. Along with mitigation efforts, societal adaptation to a warmer world is imperative. Adaptation efforts need to be significantly upscaled to prepare society to lessen the public health effects of rising temperatures. Modifying societal behaviour is inherently complex and presents a major policy challenge. We propose a social systems framework for conceptualizing adaptation that maps out three domains within the adaptation policy landscape: acclimatisation, behavioural adaptation and technological adaptation, which operate at societal and personal levels. We propose that overlaying this framework on a systems approach to societal change planning methods will enhance governments' capacity and efficacy in strategic planning for adaptation. This conceptual framework provides a policy-oriented planning assessment tool that will help planners match interventions to the behaviours being targeted for change. We provide illustrative examples to demonstrate the framework's application as a planning tool.

  12. Lsiviewer 2.0 - a Client-Oriented Online Visualization Tool for Geospatial Vector Data

    Science.gov (United States)

    Manikanta, K.; Rajan, K. S.

    2017-09-01

    Geospatial data visualization systems have predominantly been applications that are installed and run in a desktop environment. Over the last decade, with the advent of web technologies and their adoption by the geospatial community, the server-client model for data handling, rendering and visualization has been the most prevalent approach in Web-GIS. While client devices have become functionally more powerful over recent years, the above model has largely ignored this and remains a server-dominant computing paradigm. In this paper, an attempt has been made to develop and demonstrate LSIViewer - a simple, easy-to-use and robust online geospatial data visualisation system for the user's own data that harnesses the client's capabilities for data rendering and user-interactive styling, with a reduced load on the server. The developed system can support multiple geospatial vector formats and can be integrated with other web-based systems like WMS, WFS, etc. The technology stack used to build this system is Node.js on the server side and HTML5 Canvas and JavaScript on the client side. Various tests run on a range of vector datasets, up to 35 MB, showed that the time taken to render the vector data using LSIViewer is comparable to a desktop GIS application, QGIS, on an identical system.
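    At its core, client-side rendering of vector data reduces to mapping geographic coordinates into canvas pixel coordinates. The sketch below illustrates that transform in Python rather than the system's actual JavaScript; the function name and the simple linear (plate carrée) mapping are illustrative assumptions, not LSIViewer's code.

```python
def geo_to_canvas(lon, lat, bbox, width, height):
    """Map a lon/lat pair into pixel coordinates for a canvas of the given size.

    bbox: (min_lon, min_lat, max_lon, max_lat), the extent of the dataset.
    The y axis is flipped because the canvas origin is at the top-left.
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    x = (lon - min_lon) / (max_lon - min_lon) * width
    y = (max_lat - lat) / (max_lat - min_lat) * height
    return x, y
```

    A renderer applies this per vertex of every polyline or polygon ring before issuing the canvas drawing calls, which is why rendering cost scales with the number of vertices in the dataset.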

  13. CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis

    Directory of Open Access Journals (Sweden)

    Federico Raimondo

    2012-01-01

    Full Text Available In recent years, independent component analysis (ICA) has become a standard technique for identifying relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, which makes its use for online analysis, as in brain-computer interfaces, almost prohibitive. We show a speed increase of ICA of about 25-fold at almost no cost (an inexpensive video card). EEG data, which consist of many repetitions of independent signals across multiple channels, are very well suited to processing on the vector processors included in graphics units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck, taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x performance increase for the ICA calculation.
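    The matmul-dominated inner loop the authors profile comes from the Infomax (Bell-Sejnowski) natural-gradient update. Below is a minimal CPU sketch of one batch step, assuming a logistic nonlinearity; the function name and learning-rate handling are illustrative, not taken from CUDAICA.

```python
import numpy as np

def infomax_step(W, X, lr=1e-3):
    """One Infomax ICA natural-gradient update (Bell-Sejnowski, logistic g).

    W: (n, n) unmixing matrix; X: (n, T) batch of EEG samples.
    The two matrix products below dominate the runtime, which is why
    offloading them to the GPU yields the large speedups reported.
    """
    n, T = X.shape
    U = W @ X                      # matrix-matrix product 1: source estimates
    Y = 1.0 / (1.0 + np.exp(-U))   # logistic nonlinearity, elementwise
    grad = (T * np.eye(n) + (1.0 - 2.0 * Y) @ U.T) @ W  # product 2
    return W + lr / T * grad
```

    Launching many small CUBLAS calls per update incurs kernel-launch overhead, which is why the authors fused operations into custom kernels instead.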

  14. Adaptive quantization of local field potentials for wireless implants in freely moving animals: an open-source neural recording device

    Science.gov (United States)

    Martinez, Dominique; Clément, Maxime; Messaoudi, Belkacem; Gervasoni, Damien; Litaudon, Philippe; Buonviso, Nathalie

    2018-04-01

    Objective. Modern neuroscience research requires electrophysiological recording of local field potentials (LFPs) in moving animals. Wireless transmission has the advantage of removing the wires between the animal and the recording equipment but is hampered by the large amount of data to be sent at a relatively high rate. Approach. To reduce transmission bandwidth, we propose an encoder/decoder scheme based on adaptive non-uniform quantization. Our algorithm uses the current transmitted codeword to adapt the quantization intervals to changing statistics in LFP signals. It is thus backward adaptive and does not require the sending of side information. The computational complexity is low and similar at the encoder and decoder sides. These features allow for real-time signal recovery and facilitate hardware implementation with low-cost commercial microcontrollers. Main results. As proof-of-concept, we developed an open-source neural recording device called NeRD. The NeRD prototype digitally transmits eight channels encoded at 10 kHz with 2 bits per sample. It occupies a volume of 2 × 2 × 2 cm³ and weighs 8 g with a small battery allowing for 2 h 40 min of autonomy. The power dissipation is 59.4 mW for a communication range of 8 m and transmission losses below 0.1%. The small weight and low power consumption offer the possibility of mounting the entire device on the head of a rodent without resorting to a separate head-stage and battery backpack. The NeRD prototype is validated by recording LFPs in freely moving rats at 2 bits per sample while maintaining an acceptable signal-to-noise ratio (>30 dB) over a range of noisy channels. Significance. Adaptive quantization in neural implants allows for lower transmission bandwidths while retaining high signal fidelity and preserving fundamental frequencies in LFPs.
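    A classic way to realize backward adaptation of the kind described - the decoder rescales its quantization intervals from the received codeword alone, so no side information is sent - is a Jayant-style quantizer. The 2-bit sketch below is illustrative only: it uses a uniform quantizer for simplicity (the paper's is non-uniform), and the multiplier values and class name are assumptions, not the NeRD algorithm.

```python
class JayantQuantizer2bit:
    """Backward-adaptive 2-bit quantizer.

    Encoder and decoder both update the step size from the transmitted
    codeword only, so they stay in sync without side information.
    """
    # codewords 0..3 -> reconstruction levels in units of the step size
    LEVELS = (-1.5, -0.5, 0.5, 1.5)
    # illustrative step multipliers: shrink on inner codes, grow on outer
    M = (0.9, 0.9, 0.9, 1.6)

    def __init__(self, step=1.0):
        self.step = step

    def _adapt(self, code):
        # clamp to keep the step size in a sane range
        self.step = min(max(self.step * self.M[code], 1e-6), 1e6)

    def encode(self, x):
        code = 0 if x < -self.step else (1 if x < 0 else (2 if x < self.step else 3))
        self._adapt(code)
        return code

    def decode(self, code):
        y = self.LEVELS[code] * self.step  # reconstruct with current step
        self._adapt(code)                   # then apply the same adaptation
        return y
```

    Since both sides apply identical updates after every sample, the decoder's step size tracks the encoder's exactly over any error-free channel.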

  15. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.
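    As a textbook illustration of the idea (not an example taken from the paper), consider the one-dimensional harmonic oscillator, whose exact solution is parametrized by two constants of integration $A$ and $B$:

```latex
x(t) = A\cos\omega t + B\sin\omega t, \qquad
p(t) = m\dot{x} = m\omega\,\bigl(B\cos\omega t - A\sin\omega t\bigr).
```

    Imposing the canonical bracket $\{x(t),p(t)\}=1$ for all $t$ fixes the bracket between the constants of integration:

```latex
\{x,p\} = m\omega\,\{A,B\}\bigl(\cos^2\omega t + \sin^2\omega t\bigr)
        = m\omega\,\{A,B\} = 1
\quad\Longrightarrow\quad
\{A,B\} = \frac{1}{m\omega},
```

    and every bracket of dynamical variables then follows from $\{A,B\}$, with no recourse to Dirac’s constraint formalism.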

  16. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  17. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    Science.gov (United States)

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Qp) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Qp clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Qp clip scheme achieves excellent performance in quality smoothness and buffer regulation.
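    The general shape of such a scheme can be sketched as follows. This is a hypothetical rule, not the paper's derivation: the function name, the log-based widening of the clip range with the complexity ratio, and the base range of 2 are all illustrative assumptions (the paper derives the range from encoder-buffer constraints).

```python
import math

def clip_qp(qp_target, qp_prev, complexity_ratio, base_range=2):
    """Clip the rate-control Qp to limit frame-to-frame quality fluctuation.

    Allow a wider Qp excursion when scene complexity changes sharply
    (complexity_ratio far from 1.0), a narrow one when consecutive
    frames are similar.  Returns the clipped Qp and the clip range.
    """
    delta = base_range + round(abs(math.log2(max(complexity_ratio, 1e-6))))
    lo, hi = qp_prev - delta, qp_prev + delta
    return max(lo, min(qp_target, hi)), (lo, hi)
```

    The tension the paper addresses is visible even in this toy: a tight clip range smooths quality but lets the generated bits drift from the buffer target, so the range must be widened exactly when the buffer would otherwise overflow or underflow.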

  18. Adaptive behavior of children with visual impairment

    Directory of Open Access Journals (Sweden)

    Anđelković Marija

    2014-01-01

    Full Text Available Adaptive behavior includes a wide range of skills necessary for independent, safe and adequate performance of everyday activities. Practical, social and conceptual skills make up the construct of adaptive behavior. The aim of this paper is to provide an insight into the existing studies of adaptive behavior in persons with visual impairment. The paper mainly focuses on research on adaptive behavior in children with visual impairment. The results show that the acquisition of adaptive skills is mostly low or moderately low in children and youth with visual impairment. Children with visual impairment achieve the worst results in social skills and everyday life skills, while communication skills are the best acquired. Apart from the degree of visual impairment, difficulties in motor development also significantly influence the acquisition of practical and social skills in blind persons and persons with low vision.

  19. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
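    The classical high-rate result behind optimal bit allocation assigns each transform coefficient the average rate plus half the log-ratio of its variance to the geometric mean of all variances. A sketch of that baseline follows; the negative-rate fix-up loop is a common practical heuristic, not necessarily the refinement developed in the dissertation.

```python
import numpy as np

def optimal_bit_allocation(variances, avg_rate):
    """High-rate optimal bit allocation across transform coefficients.

    b_i = R + 0.5 * log2(var_i / geometric_mean(variances)).
    Negative allocations are clipped to zero and the deficit is
    redistributed among the remaining coefficients, preserving the
    total bit budget N * R.
    """
    v = np.asarray(variances, dtype=float)
    log_gm = np.mean(np.log2(v))                 # log2 of the geometric mean
    b = avg_rate + 0.5 * (np.log2(v) - log_gm)
    while np.any(b < 0):
        neg = b < 0
        deficit = -b[neg].sum()
        b[neg] = 0.0
        pos = b > 0
        b[pos] -= deficit / pos.sum()            # keep the budget constant
    return b
```

    High-variance coefficients receive more bits, which equalizes the per-coefficient quantization distortion at the given average rate.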

  20. Breast cancer risk assessment and diagnosis model using fuzzy support vector machine based expert system

    Science.gov (United States)

    Dheeba, J.; Jaya, T.; Singh, N. Albert

    2017-09-01

    Classification of cancerous masses is a challenging task in many computerised detection systems, because these masses are obscured and subtle in mammograms. This paper investigates an intelligent classifier - a fuzzy support vector machine (FSVM) - applied to classify tissues containing masses on mammograms for breast cancer diagnosis. The algorithm utilises texture features extracted using Laws texture energy measures and an FSVM to classify the suspicious masses. The new FSVM treats every sample as belonging to both the normal and abnormal classes, but with different degrees of membership. In this way, the new FSVM has greater generalisation ability for classifying masses in mammograms. The classifier analysed 219 clinical mammograms collected from a breast cancer screening laboratory. Tests made on the real clinical mammograms show that the proposed detection system has better discriminating power than the conventional support vector machine. With the best combination of FSVM and Laws texture features, the area under the receiver operating characteristic (ROC) curve reached 0.95, which corresponds to a sensitivity of 93.27% with a specificity of 87.17%. The results suggest that detecting masses using FSVM contributes to computer-aided detection of breast cancer and can serve as a decision support system for radiologists.
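    Laws texture energy measures, the feature extractor used here, convolve the image with separable 5x5 masks built from 1-D Level/Edge/Spot/Ripple kernels and then average absolute filter responses over a local window. A minimal NumPy sketch follows; the function name and the 15x15 energy window are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Laws' 1-D kernels: Level, Edge, Spot, Ripple
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)
R5 = np.array([1, -4, 6, -4, 1], dtype=float)

def laws_energy(image, row_k, col_k, win=15):
    """Laws texture energy map for the 5x5 mask outer(row_k, col_k).

    Filters the image with the 2-D mask, then averages the absolute
    responses over a win x win neighbourhood to form the energy feature.
    """
    mask = np.outer(row_k, col_k)
    h, w = image.shape
    padded = np.pad(image, 2, mode='reflect')
    resp = np.zeros((h, w), dtype=float)
    for i in range(5):                 # direct 5x5 convolution via shifts
        for j in range(5):
            resp += mask[i, j] * padded[i:i + h, j:j + w]
    a = np.abs(resp)
    ap = np.pad(a, win // 2, mode='reflect')
    energy = np.zeros((h, w), dtype=float)
    for i in range(win):               # box filter = local energy average
        for j in range(win):
            energy += ap[i:i + h, j:j + w]
    return energy / (win * win)
```

    Per-pixel energies from several mask combinations (e.g. E5L5, S5S5, R5R5) are stacked into the feature vector fed to the classifier.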