WorldWideScience

Sample records for cipher coded imager

  1. CIPHER: coded imager and polarimeter for high-energy radiation

    CERN Document Server

    Caroli, E; Dusi, W; Bertuccio, G; Sampietro, M

    2000-01-01

    The CIPHER instrument is a hard X- and soft gamma-ray spectroscopic and polarimetric coded mask imager based on an array of cadmium telluride micro-spectrometers. The position-sensitive detector (PSD) will be arranged in 4 modules of 32×32 crystals, each of 2×2 mm² cross section and 10 mm thickness, giving a total active area of about 160 cm². The micro-spectrometer characteristics allow a wide operating range from approximately 10 keV to 1 MeV, while the PSD is actively shielded by CsI crystals on the bottom in order to reduce background. The mask, based on a modified uniformly redundant array (MURA) pattern, is four times the area of the PSD and is situated at about 100 cm from the CdTe array top surface. The CIPHER instrument is proposed for a balloon experiment, both in order to assess the performance of such an instrumental concept for a small/medium-size satellite survey mission and to perform an innovative measurement of the Crab polarisation level. The CIPHER's field of view allows the instrument to...
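
    A quick sanity check of the detector geometry quoted in this abstract (a minimal sketch in Python; the figures are taken from the abstract itself, not from the instrument paper):

      # Sanity check of the CIPHER position-sensitive detector geometry quoted above.
      modules = 4
      crystals_per_module = 32 * 32          # 32x32 CdTe micro-spectrometers per module
      crystal_cross_section_mm2 = 2 * 2      # each crystal is 2x2 mm^2
      active_area_mm2 = modules * crystals_per_module * crystal_cross_section_mm2
      active_area_cm2 = active_area_mm2 / 100.0
      print(f"total active area: {active_area_cm2:.1f} cm^2")  # ~163.8 cm^2, i.e. "about 160 cm^2"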

  2. A New Image Encryption Technique Combining Hill Cipher Method, Morse Code and Least Significant Bit Algorithm

    Science.gov (United States)

    Nofriansyah, Dicky; Defit, Sarjon; Nurcahyo, Gunadi W.; Ganefri, G.; Ridwan, R.; Saleh Ahmar, Ansari; Rahim, Robbi

    2018-01-01

    Cybercrime is one of the most serious threats. One effort to reduce cybercrime is to find new techniques for securing data, such as a combination of Cryptography, Steganography and Watermarking. Cryptography and Steganography are growing areas of data security science, and combining them is one way to improve data integrity. New techniques are created by combining several algorithms, one of which is the incorporation of the Hill cipher method and Morse code. Morse code is one of the communication codes used in the Scouting field; it consists of dots and lines. This is a new concept combining modern and classical methods to maintain data integrity. The combination of these three methods is expected to yield new algorithms that improve the security of data, especially images.
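
    As a rough illustration of the Hill cipher component mentioned above, here is a minimal 2x2 Hill encryption over the 26-letter alphabet; the key matrix and message are arbitrary textbook-style examples, and the Morse code and Least Significant Bit stages of the paper are not reproduced.

      import numpy as np

      def hill_encrypt(plaintext: str, key: np.ndarray) -> str:
          """Encrypt A-Z text with a 2x2 Hill cipher: ciphertext block = K @ block mod 26."""
          nums = [ord(c) - ord('A') for c in plaintext.upper()]
          if len(nums) % 2:                      # pad to an even length
              nums.append(ord('X') - ord('A'))
          out = []
          for i in range(0, len(nums), 2):
              block = np.array(nums[i:i + 2])
              out.extend((key @ block) % 26)
          return ''.join(chr(int(n) + ord('A')) for n in out)

      # Example key; det = 3*5 - 3*2 = 9, coprime to 26, so the matrix is invertible mod 26.
      K = np.array([[3, 3], [2, 5]])
      print(hill_encrypt("HELP", K))              # -> "HIAT"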

  3. A Novel Image Stream Cipher Based On Dynamic Substitution

    OpenAIRE

    Elsharkawi, A.; El-Sagheer, R. M.; Akah, H.; Taha, H.

    2016-01-01

    Recently, many chaos-based stream cipher algorithms have been developed. A traditional chaos stream cipher XORs a secure random number sequence generated from chaotic maps (e.g. the logistic map, Bernoulli map, tent map, etc.) with the original image to obtain the encrypted image. This type of stream cipher appears to be vulnerable to chosen plaintext attacks. This paper introduces a new stream cipher algorithm based on a dynamic substitution box. The new algorithm uses one substitution b...
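
    The "traditional" chaos stream cipher that the abstract contrasts itself with can be sketched as follows: iterate a logistic map, quantize each state into a keystream byte, and XOR the keystream with the image bytes. This is only an illustration of that XOR-masking construction (and of why it is malleable), not the dynamic substitution design proposed in the paper; the parameters x0 and r are arbitrary.

      def logistic_keystream(x0: float, r: float, n: int) -> bytes:
          """Generate n keystream bytes by iterating the logistic map x -> r*x*(1-x)."""
          x, out = x0, bytearray()
          for _ in range(n):
              x = r * x * (1.0 - x)
              out.append(int(x * 256) % 256)   # quantize the chaotic state to one byte
          return bytes(out)

      def xor_cipher(data: bytes, x0: float = 0.3141592, r: float = 3.99) -> bytes:
          ks = logistic_keystream(x0, r, len(data))
          return bytes(d ^ k for d, k in zip(data, ks))

      pixels = bytes(range(16))                # stand-in for raw image bytes
      enc = xor_cipher(pixels)
      assert xor_cipher(enc) == pixels         # XOR masking is its own inverse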

  4. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  5. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.; Radwan, Ahmed G.; Salama, Khaled N.

    2011-01-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested against known cryptanalysis attacks and for different block sizes. When implemented on a Virtex-IV, the system showed high throughput and small area utilization. Having passed all tests successfully, our system proved to be secure for all block sizes. © 2011 IEEE.

  6. Hardware realization of chaos based block cipher for image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2011-12-01

    Unlike stream ciphers, block ciphers are essential for parallel processing applications. In this paper, the first hardware realization of a chaos-based block cipher is proposed for image encryption applications. The proposed system is tested against known cryptanalysis attacks and for different block sizes. When implemented on a Virtex-IV, the system showed high throughput and small area utilization. Having passed all tests successfully, our system proved to be secure for all block sizes. © 2011 IEEE.

  7. Hardware stream cipher with controllable chaos generator for colour image encryption

    KAUST Repository

    Barakat, Mohamed L.

    2014-01-01

    This study presents a hardware realisation of a chaos-based stream cipher utilised for image encryption applications. A third-order chaotic system with signum non-linearity is implemented and a new post-processing technique is proposed to eliminate the bias from the original chaotic sequence. The proposed stream cipher utilises the processed chaotic output to mask and diffuse input pixels through several stages of XORing and bit permutations. The performance of the cipher is tested with several input images and compared with previously reported systems, showing superior security and higher hardware efficiency. The system is experimentally verified on a Xilinx Virtex 4 field programmable gate array (FPGA), achieving small area utilisation and a throughput of 3.62 Gb/s. © The Institution of Engineering and Technology 2013.

  8. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption scheme based on permutation-substitution using a chaotic map and a Latin Square Image Cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated using a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key; this LSIC is used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to plain images of unequal width and height and resists statistical and differential attacks. Experiments were carried out for different images of different sizes. The proposed method possesses a large key space to resist brute force attack.
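
    A minimal sketch of the permutation-substitution structure described above: a logistic-map sequence is sorted to obtain the pixel permutation, and a 256x256 Latin square built from cyclic shifts stands in for the LSIC key image that is XORed with the permuted pixels. The actual key-driven LSIC construction and the paper's chaotic map parameters are not reproduced.

      import numpy as np

      def chaotic_permutation(n: int, x0: float = 0.6180339, r: float = 3.9999) -> np.ndarray:
          """Sort a logistic-map trajectory to obtain a permutation of n pixel indices."""
          x, seq = x0, np.empty(n)
          for i in range(n):
              x = r * x * (1.0 - x)
              seq[i] = x
          return np.argsort(seq)

      # A 256x256 Latin square: every row and column contains each byte value exactly once.
      latin = ((np.arange(256)[:, None] + np.arange(256)[None, :]) % 256).astype(np.uint8)

      img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in plain image
      perm = chaotic_permutation(img.size)
      permuted = img.flatten()[perm].reshape(img.shape)              # permutation stage
      cipher = permuted ^ latin                                      # substitution stage (XOR key image)

      # Decryption reverses both stages.
      recovered = np.empty_like(img).flatten()
      recovered[perm] = (cipher ^ latin).flatten()
      assert np.array_equal(recovered.reshape(img.shape), img)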

  9. An enhanced chaotic key-based RC5 block cipher adapted to image encryption

    Science.gov (United States)

    Faragallah, Osama S.

    2012-07-01

    RC5 is a block cipher that has several salient features such as adaptability to process different word lengths with a variable block size, a variable number of rounds and a variable-length secret key. However, RC5 can be broken with various attacks such as correlation attack, timing attack, known plaintext correlation attack and differential attacks, revealing weak security. We aimed to enhance the RC5 block cipher to be more secure and efficient for real-time applications while preserving its advantages. For this purpose, this article introduces a new approach based on strengthening both the confusion and diffusion operations by combining chaos and cryptographic primitive operations to produce round keys with better pseudo-random sequences. Comparative security analysis and performance evaluation of the enhanced RC5 block cipher (ERC5) with RC5, RC6 and chaotic block cipher algorithm (CBCA) are addressed. Several test images are used for inspecting the validity of the encryption and decryption algorithms. The experimental results show the superiority of the suggested enhanced RC5 (ERC5) block cipher to image encryption algorithms such as RC5, RC6 and CBCA from the security analysis and performance evaluation points of view.

  10. Hardware stream cipher with controllable chaos generator for colour image encryption

    KAUST Repository

    Barakat, Mohamed L.; Mansingka, Abhinav S.; Radwan, Ahmed Gomaa; Salama, Khaled N.

    2014-01-01

    This study presents hardware realisation of chaos-based stream cipher utilised for image encryption applications. A third-order chaotic system with signum non-linearity is implemented and a new post processing technique is proposed to eliminate

  11. A fast image encryption algorithm based on only blocks in cipher text

    Science.gov (United States)

    Wang, Xing-Yuan; Wang, Qian

    2014-03-01

    In this paper, a fast image encryption algorithm is proposed, in which the shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain-text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new place in the cipher image of each plain image pixel, including the row and column of the block to which the pixel belongs and the position where the pixel is placed within the block. After encrypting each pixel, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after encrypting each row of the plain image, the initial condition is also changed by the skew tent map. Finally, it is shown that this algorithm is fast and has a large key space and good properties in withstanding differential attacks, statistical analysis, known plaintext, and chosen plaintext attacks.

  12. A fast image encryption algorithm based on only blocks in cipher text

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Wang Qian

    2014-01-01

    In this paper, a fast image encryption algorithm is proposed, in which the shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain-text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new place in the cipher image of each plain image pixel, including the row and column of the block to which the pixel belongs and the position where the pixel is placed within the block. After encrypting each pixel, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after encrypting each row of the plain image, the initial condition is also changed by the skew tent map. Finally, it is shown that this algorithm is fast and has a large key space and good properties in withstanding differential attacks, statistical analysis, known plaintext, and chosen plaintext attacks.

  13. A Novel Image Encryption Scheme Based on Self-Synchronous Chaotic Stream Cipher and Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Chunlei Fan

    2018-06-01

    In this paper, a novel image encryption scheme is proposed for the secure transmission of image data. A self-synchronous chaotic stream cipher is designed with the purpose of resisting active attacks and ensuring limited error propagation of the image data. A two-dimensional discrete wavelet transform and Arnold mapping are used to scramble the pixel values of the original image. A four-dimensional hyperchaotic system with four positive Lyapunov exponents serves as the chaotic sequence generator of the self-synchronous stream cipher in order to enhance the security and complexity of the image encryption system. Finally, the simulation experiment results show that this image encryption scheme is both reliable and secure.
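
    For the Arnold-mapping scrambling step mentioned above, a small sketch of the 2D Arnold cat map applied to the pixel coordinates of a square image; the wavelet transform and the self-synchronous stream cipher stages of the scheme are not shown, and the iteration count is arbitrary.

      import numpy as np

      def arnold_cat(img: np.ndarray, iterations: int = 1) -> np.ndarray:
          """Scramble an NxN image by the Arnold cat map (x, y) -> (x + y, x + 2y) mod N."""
          n = img.shape[0]
          out = img.copy()
          for _ in range(iterations):
              scrambled = np.empty_like(out)
              for x in range(n):
                  for y in range(n):
                      scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
              out = scrambled
          return out

      img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
      scrambled = arnold_cat(img, iterations=3)
      # The map is periodic on an NxN grid, so enough further iterations restore the original image.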

  14. A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher

    Science.gov (United States)

    Kumari, Manju; Gupta, Shailender

    2018-03-01

    As modern systems enable us to transmit large chunks of data, both in the form of text and images, there is a need to explore algorithms which can provide higher security without increasing the time complexity significantly. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs a chaotic map for the confusion stage and for generation of the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme provides highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is very practical, since it has the lowest time complexity among its counterparts.
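
    The RC4 stream cipher used in the diffusion stage above is a standard, well-documented algorithm; below is a compact sketch of its key scheduling (KSA) and keystream generation (PRGA). The intertwining chaotic map that derives the key in the paper is not reproduced; a fixed example key is used instead.

      def rc4_keystream(key: bytes, n: int) -> bytes:
          """Standard RC4: key-scheduling algorithm (KSA) followed by n PRGA output bytes."""
          S = list(range(256))
          j = 0
          for i in range(256):                       # KSA: key-dependent shuffle of S
              j = (j + S[i] + key[i % len(key)]) % 256
              S[i], S[j] = S[j], S[i]
          i = j = 0
          out = bytearray()
          for _ in range(n):                         # PRGA: generate keystream bytes
              i = (i + 1) % 256
              j = (j + S[i]) % 256
              S[i], S[j] = S[j], S[i]
              out.append(S[(S[i] + S[j]) % 256])
          return bytes(out)

      data = b"example image bytes"
      ks = rc4_keystream(b"example key", len(data))
      cipher = bytes(d ^ k for d, k in zip(data, ks))   # diffusion by XOR with the keystream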

  15. Penerapan CIELab dan Chaos sebagai Cipher pada Aplikasi Kriptografi Citra Digital (Application of CIELab and Chaos as a Cipher in a Digital Image Cryptography Application)

    Directory of Open Access Journals (Sweden)

    Linna Oktaviana Sari

    2013-04-01

    The development of the Internet enables people to transmit information, such as text, images and other media, quickly. However, digital images transmitted over the Internet are very vulnerable to attacks, for example modification and duplication by unauthorized people. Therefore, cryptography, as one method of data security, has been developed. This research proposes a combination of the CIELab color structure and key randomization by a logistic map from chaos as a new cipher in digital image cryptographic applications. The cipher is applied to the encryption and decryption process. Implementation of the new cipher in a cryptographic digital image application was built with Matlab R2010a. Based on the research that has been done, it was found that the combination of CIELab and chaos can be applied as a new cipher for the encryption and decryption of digital images in cryptographic applications, with a processing time of less than 1 second. With a possible maximum key range on RGB images of 5.2×10^33, the cipher is sufficiently secure against brute-force attack. Decrypted images have good quality, with PSNR greater than 50 dB for digital images in the “tiff” and “png” formats.

  16. Implementation of RC5 and RC6 block ciphers on digital images

    International Nuclear Information System (INIS)

    Belhaj Mohamed, A.; Zaibi, G.; Kachouri, A.

    2011-01-01

    With the fast evolution of network technology, security has become an important research axis. Many types of communication require the transmission of digital images. This transmission must be safe, especially in applications that require a fairly high level of security such as military applications, spying, radars, and biometrics applications. Mechanisms for authentication, confidentiality, and integrity must be implemented within their community. For this reason, several cryptographic algorithms have been developed to ensure the safety and reliability of this transmission. In this paper, we investigate the encryption efficiency of the RC5 and RC6 block ciphers applied to digital images through statistical and differential analysis, and we also evaluate these two block ciphers against errors due to ambient noise. The security analysis shows that the RC6 algorithm is more secure than RC5. However, using RC6 to encrypt images in a rough environment (low signal to noise ratio) leads to more errors (almost double those of RC5) and may increase energy consumption by retransmitting erroneous packets. A security/energy compromise must be taken into account for a good choice of encryption algorithm.

  17. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from the hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen plaintext attack. Extensive security analysis has been performed on the proposed scheme, including the most important tests such as key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.

  18. Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm

    Science.gov (United States)

    Rachmawati, D.; Budiman, M. A.; Aulya, L.

    2018-02-01

    Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data is particularly prone to abuse such as duplication and modification. There are several data security techniques, one of them being cryptography. The security of the Vernam Cipher cryptography algorithm is very dependent on the key exchange process: if the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without a key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800 and 1000×1000 pixels. The results of the experiments showed that the Vernam Cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
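
    The message flow of the Three-Pass Protocol described above can be sketched with one-time XOR (Vernam-style) masks standing in for the commutative transformations. This only illustrates the exchange pattern, not the paper's key generation; note that plain XOR masks are a known-weak choice for this protocol, since an eavesdropper who records all three passes can XOR them together.

      import os

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      message = b"bitmap pixel data"              # stand-in for the image payload

      # Pass 1: the sender masks the message with its own one-time key A.
      key_a = os.urandom(len(message))
      pass1 = xor_bytes(message, key_a)

      # Pass 2: the receiver adds its own mask B on top (it cannot read the message yet).
      key_b = os.urandom(len(message))
      pass2 = xor_bytes(pass1, key_b)

      # Pass 3: the sender removes its mask A; only mask B remains.
      pass3 = xor_bytes(pass2, key_a)

      # The receiver removes B and recovers the message -- no key was ever transmitted.
      assert xor_bytes(pass3, key_b) == message

      # Caveat: with plain XOR masks, pass1 ^ pass2 ^ pass3 equals the message,
      # so an eavesdropper who records all three transmissions can recover it.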

  19. Cracking the Cipher Challenge

    CERN Document Server

    CERN. Geneva. Audiovisual Unit; Singh, Simon

    2002-01-01

    In the back of 'The Code Book', a history of cryptography, Simon Singh included a series of 10 encoded messages, each from a different period of history. The first person to crack all 10 messages would win a prize of £10,000. Now that the prize has been won, Simon can reveal the story behind the Cipher Challenge. Along the way he will show how mathematics can be used to crack codes, the role it played in World War Two and how it helps to guarantee security in the Information Age.

  20. Reflection ciphers

    DEFF Research Database (Denmark)

    Boura, Christina; Canteaut, Anne; Knudsen, Lars Ramkilde

    2017-01-01

    study the necessary properties for this coupling permutation. Special care has to be taken of some related-key distinguishers since, in the context of reflection ciphers, they may provide attacks in the single-key setting. We then derive some criteria for constructing secure reflection ciphers...

  1. Block Cipher Analysis

    DEFF Research Database (Denmark)

    Miolane, Charlotte Vikkelsø

    ensure that no attack violates the security bounds specified by generic attacks, namely exhaustive key search and table lookup attacks. This thesis contains a general introduction to cryptography with a focus on block ciphers and important block cipher designs, in particular the Advanced Encryption Standard (AES)...... on small scale variants of AES. In the final part of the thesis we present a new block cipher proposal, PRESENT, and examine its security against algebraic and differential cryptanalysis in particular....

  2. BLOSTREAM: A HIGH SPEED STREAM CIPHER

    Directory of Open Access Journals (Sweden)

    ALI H. KASHMAR

    2017-04-01

    Although stream ciphers are widely utilized to encrypt sensitive data at fast speeds, security concerns have led to a shift from stream to block ciphers, on the judgment that current stream cipher technology is inferior to block cipher technology. This paper presents the design of an improved, efficient and secure stream cipher called Blostream, which is more secure than conventional stream ciphers that use XOR for mixing. The proposed cipher comprises two major components: a Pseudo Random Number Generator (PRNG) using the Rabbit algorithm and a nonlinear invertible round function (combiner) for encryption and decryption. We evaluate its performance in terms of implementation and security, presenting advantages and disadvantages, a comparison of the proposed cipher with similar systems and a statistical test for randomness. The analysis shows that the proposed cipher is more efficient, faster, and more secure than current conventional stream ciphers.

  3. A New Substitution Cipher - Random-X

    Directory of Open Access Journals (Sweden)

    Falguni Patel

    2015-08-01

    Ciphers are the encryption methods used to prepare algorithms for encryption and decryption. The currently known ciphers are not strong enough to protect the data. A new substitution cipher, Random-X, introduced in this paper, can be used for password encryption and data encryption. Random-X is a unique substitution cipher which replaces the units of plaintext with triplets of letters. The beauty of this cipher is that the encrypted string of the same plain text is not always the same, which makes it strong and difficult to crack. This paper covers the principle, the implementation ideas and the testing of the Random-X cipher.

  4. Quantum-noise randomized ciphers

    International Nuclear Information System (INIS)

    Nair, Ranjith; Yuen, Horace P.; Kumar, Prem; Corndorf, Eric; Eguchi, Takami

    2006-01-01

    We review the notion of a classical random cipher and its advantages. We sharpen the usual description of random ciphers to a particular mathematical characterization suggested by the salient feature responsible for their increased security. We describe a concrete system known as αη and show that it is equivalent to a random cipher in which the required randomization is effected by coherent-state quantum noise. We describe the currently known security features of αη and similar systems, including lower bounds on the unicity distances against ciphertext-only and known-plaintext attacks. We show how αη used in conjunction with any standard stream cipher such as the Advanced Encryption Standard provides an additional, qualitatively different layer of security from physical encryption against known-plaintext attacks on the key. We refute some claims in the literature that αη is equivalent to a nonrandom stream cipher.

  5. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    Science.gov (United States)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    A new concept of GT encryption scheme is proposed in this paper. We present a novel optical image encryption method by using quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is firstly transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are GTs' rotation angles and multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. In the future, the method of applying QR codes and fingerprints in GT domains possesses much potential for information security.

  6. Cryptanalysis of Selected Block Ciphers

    DEFF Research Database (Denmark)

    Alkhzaimi, Hoda A.

    , pseudorandom number generators, and authenticated encryption designs. For this reason a multitude of initiatives over the years has been established to provide secure and sound designs for block ciphers, as in the calls for the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), lightweight cipher initiatives, and the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR). In this thesis, we first present cryptanalytic results on different ciphers. We propose an attack named the Invariant Subspace Attack. It is utilized to break the full block cipher...

  7. Cipher image damage and decisions in real time

    Science.gov (United States)

    Silva-García, Victor Manuel; Flores-Carapia, Rolando; Rentería-Márquez, Carlos; Luna-Benoso, Benjamín; Jiménez-Vázquez, Cesar Antonio; González-Ramírez, Marlon David

    2015-01-01

    This paper proposes a method for constructing permutations on m-position arrangements. Our objective is to encrypt color images using the Advanced Encryption Standard (AES) with variable permutations, that is, a different permutation for each 128-bit block in the first round after the x-or operation is applied. Furthermore, this research offers the possibility of recovering the original image when the encrypted figure has been damaged, whether by an attack or not. This is achieved by permuting the original image pixel positions before encryption with AES variable permutations, which means building a pseudorandom permutation on arrays of 250,000 positions or more. To this end, an algorithm that defines a bijective function between the set of nonnegative integers and the set of permutations is built. From this algorithm, the way to build permutations on the 0,1,…,m-1 array, knowing m-1 constants, is presented. Transcendental numbers are used to select these m-1 constants in a pseudorandom way. The quality of the proposed encryption is evaluated according to the following criteria: the correlation coefficient, the entropy, and the discrete Fourier transform. A goodness-of-fit test for each basic color image is proposed to measure the degree of randomness of the bits of the encrypted figure. On the other hand, cipher images are obtained in a loss-less encryption way, i.e., no JPEG file formats are used.
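
    One standard way to realize the bijection between nonnegative integers and permutations mentioned above is the factorial number system (Lehmer code); the sketch below maps an integer k to the k-th permutation of m positions. This is only an illustrative construction of such a bijection; the paper's own algorithm, its m-1 constants and its use of transcendental numbers are not reproduced.

      def int_to_permutation(k: int, m: int) -> list:
          """Map an integer k in [0, m!) to the k-th permutation of 0..m-1 (Lehmer code)."""
          # Decompose k in the factorial number system to get the Lehmer digits.
          digits = []
          for radix in range(1, m + 1):
              digits.append(k % radix)
              k //= radix
          digits.reverse()                      # most-significant factoradic digit first
          available = list(range(m))
          return [available.pop(d) for d in digits]

      # Example: enumerate all 3! = 6 permutations of three positions.
      for k in range(6):
          print(k, int_to_permutation(k, 3))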

  8. Cryptanalysis of Some Lightweight Symmetric Ciphers

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed Awadelkareem Mohamed Ahmed

    In recent years, the need for lightweight encryption systems has been increasing as many applications use RFID and sensor networks, which have very low computational power and are thus incapable of performing standard cryptographic operations. In response to this problem, the cryptographic community...... of the international standards in lightweight cryptography. This thesis aims at analyzing and evaluating the security of some of the recently proposed lightweight symmetric ciphers, with a focus on PRESENT-like ciphers, namely, the block cipher PRESENT and the block cipher PRINTcipher. We provide an approach to estimate...... on a variant of PRESENT with identical round keys. We propose a new attack named the Invariant Subspace Attack that was specifically mounted against the lightweight block cipher PRINTcipher. Furthermore, we mount several attacks on a recently proposed stream cipher called A2U2.

  9. Parallelizable and Authenticated Online Ciphers

    DEFF Research Database (Denmark)

    Andreeva, Elena; Bogdanov, Andrey; Luykx, Atul

    2013-01-01

    Online ciphers encrypt an arbitrary number of plaintext blocks and output ciphertext blocks which only depend on the preceding plaintext blocks. All online ciphers proposed so far are essentially serial, which significantly limits their performance on parallel architectures such as modern general-purpose CPUs or dedicated hardware. We propose the first parallelizable online cipher, COPE. It performs two calls to the underlying block cipher per plaintext block and is fully parallelizable in both encryption and decryption. COPE is proven secure against chosen-plaintext attacks assuming the underlying block cipher is a strong PRP. Our implementation with Intel AES-NI on a Sandy Bridge CPU architecture shows that both COPE and COPA are about 5 times faster than their closest competition: TC1, TC3, and McOE-G. This high factor of advantage emphasizes the paramount role of parallelizability on up....

  10. A novel chaotic stream cipher and its application to palmprint template protection

    International Nuclear Information System (INIS)

    Heng-Jian, Li; Jia-Shu, Zhang

    2010-01-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfy the security requirement of cipher design. Then, the stream cipher is employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high-confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher. (general)

  11. A novel chaotic stream cipher and its application to palmprint template protection

    Science.gov (United States)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfy the security requirement of cipher design. Then, the stream cipher is employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high-confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher.

  12. Cipher block based authentication module: A hardware design perspective

    NARCIS (Netherlands)

    Michail, H.E.; Schinianakis, D.; Goutis, C.E.; Kakarountas, A.P.; Selimis, G.

    2011-01-01

    Message Authentication Codes (MACs) are widely used in order to authenticate data packets, which are transmitted through networks. Typically MACs are implemented using modules like hash functions in conjunction with encryption algorithms (like block ciphers), which are used to encrypt the

  13. Cryptanalysis on classical cipher based on Indonesian language

    Science.gov (United States)

    Marwati, R.; Yulianti, K.

    2018-05-01

    Cryptanalysis is the process of breaking a cipher through illegal decryption. This paper discusses the encryption of some classical ciphers, the breaking of a substitution cipher and a stream cipher, and ways of increasing their security. Encryption and ciphering are based on Indonesian-language text. Microsoft Word and Microsoft Excel were chosen as the ciphering and breaking tools.

  14. Secure Block Ciphers - Cryptanalysis and Design

    DEFF Research Database (Denmark)

    Tiessen, Tyge

    is encrypted using so-called symmetric ciphers. The security of our digital infrastructure thus rests at its very base on their security. The central topic of this thesis is the security of block ciphers – the most prominent form of symmetric ciphers. This thesis is separated in two parts. The first part is an introduction to block ciphers and their cryptanalysis, the second part contains publications written and published during the PhD studies. The first publication evaluates the security of a modification of the AES in which the choice of S-box is unknown to the attacker. We find that some of the attacks that can be applied to the AES can be transferred to this block cipher, albeit with a higher attack complexity. The second publication introduces a new block cipher family which is targeted for new applications in fully homomorphic encryption and multi-party computation. We demonstrate the soundness of the design...

  15. Optical stream-cipher-like system for image encryption based on Michelson interferometer.

    Science.gov (United States)

    Yang, Bing; Liu, Zhengjun; Wang, Bo; Zhang, Yan; Liu, Shutian

    2011-01-31

    A novel optical image encryption scheme based on interference is proposed. The original image is digitally encoded into one phase-only mask by employing an improved Gerchberg-Saxton phase retrieval algorithm together with another predefined random phase mask which serves as the encryption key. The decryption process can be implemented optically with a Michelson interferometer by using the same key. The scheme can be regarded as a stream-cipher-like encryption system: the encryption and decryption keys are the same, but the operations are different. The position coordinates and light wavelength can also be used as additional keys during the decryption. Numerical simulations have demonstrated the validity and robustness of the proposed method.

  16. Extended substitution-diffusion based image cipher using chaotic standard map

    Science.gov (United States)

    Kumar, Anil; Ghose, M. K.

    2011-01-01

    This paper proposes an extended substitution-diffusion based image cipher using the chaotic standard map [1] and a linear feedback shift register to overcome the weakness of the previous technique by adding nonlinearity. The first stage consists of row and column rotation and permutation, which is controlled by pseudo-random sequences generated by the standard chaotic map and a linear feedback shift register; in the second stage, further diffusion and confusion are obtained in the horizontal and vertical pixels by mixing the properties of the horizontally and vertically adjacent pixels, respectively, with the help of the chaotic standard map. The number of rounds in both stages is controlled by a combination of the pseudo-random sequence and the original image. The performance is evaluated through various types of analysis such as entropy analysis, difference analysis, statistical analysis, key sensitivity analysis, key space analysis and speed analysis. The experimental results illustrate that the cipher is highly secure and fast.

  17. A Distinguish Attack on COSvd Cipher

    OpenAIRE

    Mohammad Ali Orumiehchiha; R. Mirghadri

    2007-01-01

    The COSvd cipher was proposed by Filiol et al. (2004). It is a strengthened version of the COS stream cipher family, denoted COSvd, that has been adopted for at least one commercial standard. We propose a distinguishing attack on this version, and prove that it is distinguishable from a random stream. The COSvd cipher uses one S-Box (10×8) in the final part of the cipher. We focus on this S-Box and use its weakness for the distinguishing attack. In addition, we found a leak on HNLL that the sub s-bo...

  18. Hill Cipher and Least Significant Bit for Image Messaging Security

    Directory of Open Access Journals (Sweden)

    Muhammad Husnul Arif

    2016-02-01

    Exchange of information through cyberspace has many benefits, for example fast estimated delivery time and no limits of physical distance and space. But these activities can also pose a security risk for confidential information, so safeguards are necessary to protect data transmitted through the Internet. Cryptography and steganography algorithms encrypt a message to be sent (plaintext) into a message that has been randomized (ciphertext). The cryptographic technique applied here is the Hill Cipher, combined with the Least Significant Bit steganography technique. The result of merging these techniques can maintain the confidentiality of messages, because people who do not know the secret key used will find it difficult to obtain the message contained in the stego-image, and the image that has been inserted cannot be used as a cover image. Messages were successfully inserted and extracted back on all samples with the image formats *.bmp, *.png, *.jpg at resolutions of 512 × 512 pixels and 256 × 256 pixels. The MSE and PSNR results are not influenced by file format or file size, but are influenced by the dimensions of the image: the larger the dimensions of the image, the smaller the MSE, which means the error of the image gets smaller.
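
    The Least Significant Bit embedding used above can be sketched directly on a NumPy pixel array: each message bit replaces the lowest bit of one cover pixel. A Hill cipher stage like the one sketched earlier would run before embedding; it is omitted here, and the cover image and payload are arbitrary stand-ins.

      import numpy as np

      def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
          """Hide the payload bits in the least significant bit of each cover pixel."""
          bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
          flat = cover.flatten().copy()
          if bits.size > flat.size:
              raise ValueError("payload too large for this cover image")
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the LSB
          return flat.reshape(cover.shape)

      def lsb_extract(stego: np.ndarray, nbytes: int) -> bytes:
          bits = stego.flatten()[:nbytes * 8] & 1
          return np.packbits(bits).tobytes()

      cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
      secret = b"ciphertext from the Hill stage"
      stego = lsb_embed(cover, secret)
      assert lsb_extract(stego, len(secret)) == secret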

  19. Stream cipher based on pseudorandom number generation using optical affine transformation

    Science.gov (United States)

    Sasaki, Toru; Togo, Hiroyuki; Tanida, Jun; Ichioka, Yoshiki

    2000-07-01

    We propose a new stream cipher technique for 2D image data which can be implemented by iterative optical transformation. The stream cipher uses a pseudo-random number generator (PRNG) to generate pseudo-random bit sequence. The proposed method for the PRNG is composed of iterative operation of 2D affine transformation achieved by optical components, and modulo-n addition of the transformed images. The method is expected to be executed efficiently by optical parallel processing. We verify performance of the proposed method in terms of security strength and clarify problems on optical implementation by the optical fractal synthesizer.

  20. A new feedback image encryption scheme based on perturbation with dynamical compound chaotic sequence cipher generator

    Science.gov (United States)

    Tong, Xiaojun; Cui, Minggen; Wang, Zhu

    2009-07-01

    The design of a new compound two-dimensional chaotic function is presented by exploiting two one-dimensional chaotic functions which switch randomly; the design is used as a chaotic sequence generator, whose chaotic behaviour is proved using Devaney's definition of chaos. The properties of the compound chaotic functions are also proved rigorously. In order to improve the robustness against differential cryptanalysis and produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos, by selecting one of the two one-dimensional chaotic functions randomly, and a new method of image pixel permutation and substitution is designed in detail with random control of array rows and columns based on the compound chaos. The results from entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis depending on key and plaintext have proven that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; in particular, it accelerates encryption speed and achieves a higher level of security. By means of the dynamical compound chaos and perturbation technology, the paper solves the problem of the low computational precision of one-dimensional chaotic functions.

  1. A chaotic stream cipher and the usage in video protection

    International Nuclear Information System (INIS)

    Lian Shiguo; Sun Jinsheng; Wang Jinwei; Wang Zhiquan

    2007-01-01

    In this paper, a chaotic stream cipher is constructed and used to encrypt video data selectively. The stream cipher, based on a discrete piecewise linear chaotic map, satisfies the security requirements of cipher design. The video encryption scheme based on the stream cipher is perceptually secure, efficient and format compliant, which makes it suitable for practical video protection. The video encryption scheme's performance proves the stream cipher's practicability.

  2. Periodic Ciphers with Small Blocks and Cryptanalysis of KeeLoq

    DEFF Research Database (Denmark)

    Courtois, Nicolas T.; Bard, Gregory V.; Bogdanov, Andrey

    2008-01-01

    In this paper we study a unique way of attacking KeeLoq, in which the periodic property of KeeLoq is used to distinguish 512 rounds of KeeLoq from a random permutation. Our attacks require the knowledge of the entire code-book and are not among the fastest attacks known on this cipher. However one of them...

  3. Towards understanding the known-key security of block ciphers

    DEFF Research Database (Denmark)

    Andreeva, Elena; Bogdanov, Andrey; Mennink, Bart

    2014-01-01

    ciphers based on ideal components such as random permutations and random functions as well as propose new generic known-key attacks on generalized Feistel ciphers. We introduce the notion of known-key indifferentiability to capture the security of such block ciphers under a known key. To show its meaningfulness, we prove that the known-key attacks on block ciphers with ideal primitives to date violate security under known-key indifferentiability. On the other hand, to demonstrate its constructiveness, we prove the balanced Feistel cipher with random functions and the multiple Even-Mansour cipher with random permutations known-key indifferentiable for a sufficient number of rounds. We note that known-key indifferentiability is more quickly and tightly attained by multiple Even-Mansour, which puts it forward as a construction provably secure against known-key attacks....

  4. Round Gating for Low Energy Block Ciphers

    DEFF Research Database (Denmark)

    Banik, Subhadeep; Bogdanov, Andrey; Regazzoni, Francesco

    2016-01-01

    design techniques for implementing block ciphers in a low energy fashion. We concentrate on round based implementation and we discuss how gating, applied at round level, can affect and improve the energy consumption of the most common lightweight block cipher currently used in the internet of things..... Additionally, we discuss how the needed gating wave can be generated. Experimental results show that our technique is able to reduce the energy consumption in most block ciphers by over 60% while incurring only a minimal overhead in hardware.

  5. Image Encryption Using Stream Cipher Based on Nonlinear Combination Generator with Enhanced Security

    Directory of Open Access Journals (Sweden)

    Belmeguenaï Aîssa

    2013-03-01

    Images are very widely used in our daily life, so the security of their transfer has become necessary. In this work a novel image encryption scheme using a stream cipher algorithm based on a nonlinear combination generator is developed. The main contribution of this work is to enhance the security of the encrypted image. The proposed scheme is based on the use of several linear feedback shift registers whose feedback polynomials are primitive and whose degrees are pairwise coprime, combined by a resilient function whose resiliency order, algebraic degree and nonlinearity attain Siegenthaler's and Sarkar et al.'s bounds. The proposed scheme is simple and highly efficient. In order to evaluate performance, the proposed algorithm was measured through a series of tests. These tests included visual test and histogram analysis, key space analysis, correlation coefficient analysis, image entropy, key sensitivity analysis, noise analysis, a Berlekamp-Massey attack, a correlation attack and an algebraic attack. Experimental results demonstrate that the proposed system is highly key sensitive, highly resistant to noise, shows good resistance against brute-force, statistical attacks, the Berlekamp-Massey attack, the correlation attack and the algebraic attack, and is a robust system, which makes it a potential candidate for encryption of images.
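
    The nonlinear combination generator described above (several LFSRs feeding one combining Boolean function) can be sketched as follows. The register lengths, feedback taps and the combining function are toy illustrative choices: the combiner shown is balanced, nonlinear and first-order correlation-immune, but it does not reach the resiliency, degree and nonlinearity bounds required by the paper.

      def lfsr(state: int, taps: tuple, nbits: int):
          """Infinite Fibonacci LFSR bit generator; feedback is the XOR of the tapped bits."""
          mask = (1 << nbits) - 1
          while True:
              out = state & 1
              fb = 0
              for t in taps:
                  fb ^= (state >> t) & 1
              state = ((state >> 1) | (fb << (nbits - 1))) & mask
              yield out

      def combiner(a: int, b: int, c: int, d: int) -> int:
          # Toy combining function f = a XOR b XOR (c AND d): balanced, nonlinear and
          # first-order correlation-immune, but far weaker than a resilient function
          # meeting Siegenthaler's and Sarkar et al.'s bounds as required in the paper.
          return a ^ b ^ (c & d)

      # Four small LFSRs with pairwise coprime lengths (toy parameters, not the paper's).
      regs = [lfsr(0x5, (0, 1), 3), lfsr(0x11, (0, 2), 5),
              lfsr(0x45, (0, 1), 7), lfsr(0x1D5, (0, 2, 3, 4), 11)]
      keystream_bits = [combiner(*(next(r) for r in regs)) for _ in range(64)]
      print(''.join(map(str, keystream_bits)))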

  6. A stream cipher based on a spatiotemporal chaotic system

    International Nuclear Information System (INIS)

    Li Ping; Li Zhong; Halang, Wolfgang A.; Chen Guanrong

    2007-01-01

    A stream cipher based on a spatiotemporal chaotic system is proposed. A one-way coupled map lattice consisting of logistic maps serves as the spatiotemporal chaotic system. Multiple keystreams are generated from the coupled map lattice by using simple algebraic computations, and are then used to encrypt the plaintext via bitwise XOR. These features make the cipher rather simple and efficient. Numerical investigation shows that the cryptographic properties of the generated keystream are satisfactory. The cipher seems to have higher security, higher efficiency and lower computational expense than the stream cipher based on a spatiotemporal chaotic system proposed recently.
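
    The one-way coupled map lattice of logistic maps described above can be written compactly; the lattice size, coupling strength and byte quantization below are arbitrary illustrative choices rather than the paper's parameters.

      import numpy as np

      def cml_step(x: np.ndarray, eps: float = 0.95, r: float = 3.99) -> np.ndarray:
          """One step of a one-way coupled map lattice of logistic maps:
          x_i(t+1) = (1-eps)*f(x_i(t)) + eps*f(x_{i-1}(t)),  with f(x) = r*x*(1-x)."""
          f = r * x * (1.0 - x)
          return (1.0 - eps) * f + eps * np.roll(f, 1)      # one-way coupling to the neighbour

      lattice = np.random.default_rng(1).random(8)          # 8 lattice sites, arbitrary seed
      keystream = bytearray()
      for _ in range(32):
          lattice = cml_step(lattice)
          keystream.extend(int(v * 256) % 256 for v in lattice)   # quantize each site to a byte

      plaintext = bytes(range(len(keystream)))
      ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))   # bitwise XOR encryption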

  7. Benchmarking Block Ciphers for Wireless Sensor Networks

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    2004-01-01

    Choosing the most storage- and energy-efficient block cipher specifically for wireless sensor networks (WSNs) is not as straightforward as it seems. To our knowledge so far, there is no systematic evaluation framework for the purpose. We have identified the candidates of block ciphers suitable for

  8. Stream ciphers and number theory

    CERN Document Server

    Cusick, Thomas W; Renvall, Ari R

    2004-01-01

    This is the unique book on cross-fertilisations between stream ciphers and number theory. It systematically and comprehensively covers known connections between the two areas that are available only in research papers. Some parts of this book consist of new research results that are not available elsewhere. In addition to exercises, over thirty research problems are presented in this book. In this revised edition almost every chapter was updated, and some chapters were completely rewritten. It is useful as a textbook for a graduate course on the subject, as well as a reference book for researchers in related fields. · Unique book on interactions of stream ciphers and number theory. · Research monograph with many results not available elsewhere. · A revised edition with the most recent advances in this subject. · Over thirty research problems for stimulating interactions between the two areas. · Written by leading researchers in stream ciphers and number theory.

  9. Experiments of 10 Gbit/sec quantum stream cipher applicable to optical Ethernet and optical satellite link

    Science.gov (United States)

    Hirota, Osamu; Ohhata, Kenichi; Honda, Makoto; Akutsu, Shigeto; Doi, Yoshifumi; Harasawa, Katsuyoshi; Yamashita, Kiichi

    2009-08-01

    The security issue for the next generation optical network, which realizes a cloud computing system service with data centers, is an urgent problem. In such a network, encryption at the physical layer, which provides super security and small delay, should be employed. It must provide, however, very high speed encryption, because the basic link is operated at 2.5 Gbit/sec or 10 Gbit/sec. The quantum stream cipher by the Yuen-2000 protocol (Y-00) is a completely new type of random cipher, the so-called Gauss-Yuen random cipher, which can break the Shannon limit for the symmetric key cipher. We develop such a cipher which has a good balance of security, speed and cost performance. In the SPIE conference on quantum communication and quantum imaging V, we reported a demonstration of a 2.5 Gbit/sec system for the commercial link and proposed how to improve it to 10 Gbit/sec. This paper reports a demonstration of the Y-00 cipher system which works at 10 Gbit/sec. A transmission test in a laboratory is carried out to obtain basic data on which parameters are important for operation in real commercial networks. In addition, we give some theoretical results on the security. It is clarified that the necessary condition to break the Shannon limit indeed requires the quantum phenomenon, and that a fully information-theoretically secure system is available for the satellite link application.

  10. Cryptanalysis of the full Spritz stream cipher

    DEFF Research Database (Denmark)

    Banik, Subhadeep; Isobe, Takanori

    2016-01-01

    Spritz is a stream cipher proposed by Rivest and Schuldt at the rump session of CRYPTO 2014. It is intended to be a replacement of the popular RC4 stream cipher. In this paper we propose distinguishing attacks on the full Spritz, based on a short-term bias in the first two bytes of a keystream an...

  11. On-line Ciphers and the Hash-CBC Constructions

    DEFF Research Database (Denmark)

    Bellare, M.; Boldyreva, A.; Knudsen, Lars Ramkilde

    2012-01-01

    We initiate a study of on-line ciphers. These are ciphers that can take input plaintexts of large and varying lengths and will output the i-th block of the ciphertext after having processed only the first i blocks of the plaintext. Such ciphers permit length-preserving encryption of a data stream with only a single pass through the data. We provide security definitions for this primitive and study its basic properties. We then provide attacks on some possible candidates, including CBC with fixed IV. We then provide two constructions, HCBC1 and HCBC2, based on a given block cipher E and a family of computationally AXU functions. HCBC1 is proven secure against chosen-plaintext attacks assuming that E is a PRP secure against chosen-plaintext attacks, while HCBC2 is proven secure against chosen-ciphertext attacks assuming that E is a PRP secure against chosen-ciphertext attacks.

  12. Block cipher based on modular arithmetic and methods of information compression

    Science.gov (United States)

    Krendelev, S.; Zbitnev, N.; Shishlyannikov, D.; Gridin, D.

    2017-10-01

    The article focuses on the description of a new block cipher. Due to the heightened interest in Big Data, the described cipher is used to encrypt big volumes of data in cloud storage services. The main advantages of the given cipher are the ease of implementation and the possibility of probabilistic encryption. This means that the encryption of a text will be different even when the key and the data are the same, so the strength of the encryption is improved. Additionally, the size of the ciphered message can hardly be predicted.

  13. Cryptanalysis of PRESENT-like ciphers with secret S-boxes

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Leander, Gregor

    2011-01-01

    At Eurocrypt 2001, Biryukov and Shamir investigated the security of AES-like ciphers where the substitutions and affine transformations are all key-dependent and successfully cryptanalysed two and a half rounds. This paper considers PRESENT-like ciphers in a similar manner. We focus on the settings where the S-boxes are key dependent, and repeated for every round. We break one particular variant which was proposed in 2009 with practical complexity in a chosen plaintext/chosen ciphertext scenario. Extrapolating these results suggests that up to 28 rounds of such ciphers can be broken. Furthermore...

  14. Benchmarking Block Ciphers for Wireless Sensor Networks (Extended Abstract)

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    2004-01-01

    Choosing the most storage- and energy-efficient block cipher specifically for wireless sensor networks (WSNs) is not as straightforward as it seems. To our knowledge so far, there is no systematic evaluation framework for the purpose. We have identified the candidates of block ciphers suitable for

  15. Ganzua: A Cryptanalysis Tool for Monoalphabetic and Polyalphabetic Ciphers

    Science.gov (United States)

    Garcia-Pasquel, Jesus Adolfo; Galaviz, Jose

    2006-01-01

    Many introductory courses to cryptology and computer security start with or include a discussion of classical ciphers that usually contemplates some cryptanalysis techniques used to break them. Ganzua (picklock in Spanish) is an application designed to assist the cryptanalysis of ciphertext obtained with monoalphabetic or polyalphabetic ciphers.…

  16. Performance evaluation of Grain family and Espresso ciphers for applications on resource constrained devices

    Directory of Open Access Journals (Sweden)

    Subhrajyoti Deb

    2018-03-01

    A secure stream cipher is an effective security solution for applications running on resource-constrained devices. The Grain family of stream ciphers (Grain v1, Grain-128, and Grain-128a) is designed for low-end devices. Similarly, Espresso is a lightweight stream cipher that was developed recently for 5G wireless mobile communication. The randomness of the keystream produced by a stream cipher is a good indicator of its security strength. In this study, we have analyzed the randomness properties of the keystreams produced by both the Grain family and the Espresso cipher using the statistical packages DieHarder and NIST STS. We also analyzed their performance on two constrained devices (ATmega328P and ESP8266) based on three attainable parameters, namely computation time, memory, and power consumption. Keywords: Stream cipher, Randomness, Dieharder, NIST STS
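
    DieHarder and NIST STS, referenced above, are full statistical test batteries; as a minimal illustration of what such a test does, here is the NIST SP 800-22 frequency (monobit) test, the first and simplest test in that suite, applied to a toy bit sequence taken from the OS random number generator.

      import math, os

      def monobit_pvalue(bits: list) -> float:
          """NIST SP 800-22 frequency (monobit) test: p-value for the 0/1 balance of a sequence."""
          n = len(bits)
          s = sum(1 if b else -1 for b in bits)          # map {0, 1} -> {-1, +1} and sum
          s_obs = abs(s) / math.sqrt(n)
          return math.erfc(s_obs / math.sqrt(2))

      # Toy keystream: bits from the OS RNG stand in for a cipher's output.
      raw = os.urandom(1024)
      bits = [(byte >> i) & 1 for byte in raw for i in range(8)]
      p = monobit_pvalue(bits)
      print(f"monobit p-value = {p:.4f}", "(pass)" if p >= 0.01 else "(fail)")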

  17. Efficient configurations for block ciphers with unified ENC/DEC paths

    DEFF Research Database (Denmark)

    Banik, Subhadeep; Bogdanov, Andrey; Regazzoni, Francesco

    2017-01-01

    Block Ciphers providing the combined functionalities of encryption and decryption are required to operate in modes of operation like CBC and ELmD. Hence such architectures form critical building blocks for secure cryptographic implementations. Depending on the algebraic structure of a given cipher, there may be multiple ways of constructing the combined encryption/decryption circuit, each targeted at optimizing lightweight design metrics like area or power etc. In this paper we look at how the choice of circuit configuration affects the energy required to perform one encryption/decryption. We begin by analyzing 12 circuit configurations for the Advanced Encryption Standard (AES-128) cipher and establish some design rules for energy efficiency. We then extend our analysis to several lightweight block ciphers. In the second part of the paper we also investigate area optimized circuits for combined...

  18. A MAC Mode for Lightweight Block Ciphers

    DEFF Research Database (Denmark)

    Luykx, Atul; Preneel, Bart; Tischhauser, Elmar Wolfgang

    2016-01-01

    Lightweight cryptography strives to protect communication in constrained environments without sacrificing security. However, security often conflicts with efficiency, shown by the fact that many new lightweight block cipher designs have block sizes as low as 64 or 32 bits. Such low block sizes lead...... no effect on the security bound, allowing an order of magnitude more data to be processed per key. Furthermore, LightMAC is incredibly simple, has almost no overhead over the block cipher, and is parallelizable. As a result, LightMAC not only offers compact authentication for resource-constrained platforms...

  19. Survey and Benchmark of Block Ciphers for Wireless Sensor Networks

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    Choosing the most storage- and energy-efficient block cipher specifically for wireless sensor networks (WSNs) is not as straightforward as it seems. To our knowledge so far, there is no systematic evaluation framework for the purpose. In this paper, we have identified the candidates of block ciphers

  20. Fruit-80: A Secure Ultra-Lightweight Stream Cipher for Constrained Environments

    Directory of Open Access Journals (Sweden)

    Vahid Amin Ghafari

    2018-03-01

    Full Text Available In Fast Software Encryption (FSE 2015), while presenting a new idea (i.e., the design of stream ciphers with a small internal state by using the secret key not only in the initialization but also in the keystream generation), Sprout was proposed. Sprout was insecure, and an improved version of Sprout was presented in FSE 2017. We introduced the Fruit stream cipher informally in 2016 on the web page of IACR (eprint), and a few cryptanalyses were published on it. Fortunately, the main structure of Fruit was resistant. Now, Fruit-80 is presented as a final version which is easier to implement and is secure. The size of the LFSR and NFSR in Fruit-80 is only 80 bits (for an 80-bit security level), while for resistance to the classical time-memory-data tradeoff (TMDTO) attacks, the internal state size should be at least twice the security level. To satisfy this rule and to design a concrete cipher, we used some new design ideas. It seems that the bottleneck in designing an ultra-lightweight stream cipher is TMDTO distinguishing attacks. A countermeasure was suggested, and another countermeasure is proposed here. Fruit-80 is better than other small-state stream ciphers in terms of initialization speed and area size in hardware. It is possible to redesign many stream ciphers and achieve a significantly smaller area size by using the new idea.
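
    A hedged sketch of the tradeoff rule cited above, using the standard Babbage-Golic argument rather than equations taken from the Fruit-80 paper: for a stream cipher with an s-bit internal state, an attacker who precomputes M states and observes D keystream windows expects a state collision once

      M \cdot D \approx 2^{s}, \qquad T \approx D ,

    so keeping every attack parameter above the k-bit security level, \max(T, M, D) \ge 2^{k}, forces s \ge 2k. This is the rule that small-state designs such as Fruit-80 must work around with key-dependent state update and dedicated countermeasures.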

  1. Stealthy Hardware Trojan Based Algebraic Fault Analysis of HIGHT Block Cipher

    Directory of Open Access Journals (Sweden)

    Hao Chen

    2017-01-01

    Full Text Available HIGHT is a lightweight block cipher which has been adopted as a standard block cipher. In this paper, we present a bit-level algebraic fault analysis (AFA) of HIGHT, where the faults are injected by a stealthy hardware Trojan (HT). The fault model in our attack assumes that the adversary is able to insert an HT that flips a specific bit of a certain intermediate word of the cipher once the HT is activated. The HT is realized with merely 4 registers and has an extremely low activation rate of about 0.000025. We show that the optimal location for inserting the designed HT can be efficiently determined by AFA in advance. Finally, a method is proposed to represent the cipher and the injected faults with a merged set of algebraic equations, and the master key can be recovered by solving the merged equation system with a SAT solver. Our attack, which fully recovers the secret master key of the cipher in 12572.26 seconds, requires activating the designed HT three times. To the best of our knowledge, this is the first Trojan attack on HIGHT.

  2. Cryptanalysis of a chaotic block cipher with external key and its improved version

    International Nuclear Information System (INIS)

    Li Chengqing; Li Shujun; Alvarez, Gonzalo; Chen Guanrong; Lo, K.-T.

    2008-01-01

    Recently, Pareek et al. proposed a symmetric key block cipher using multiple one-dimensional chaotic maps. This paper reports some new findings on the security problems of this kind of chaotic cipher: (1) a number of weak keys exist; (2) some important intermediate data of the cipher are not sufficiently random; (3) the whole secret key can be broken by a known-plaintext attack with only 120 consecutive known plain-bytes in one known plaintext. In addition, it is pointed out that an improved version of the chaotic cipher proposed by Wei et al. still suffers from all the same security defects

  3. Counting equations in algebraic attacks on block ciphers

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Miolane, Charlotte Vikkelsø

    2010-01-01

    This paper is about counting linearly independent equations for so-called algebraic attacks on block ciphers. The basic idea behind many of these approaches, e.g., XL, is to generate a large set of equations from an initial set of equations by multiplication of existing equations by the variables...... in the system. One of the most difficult tasks is to determine the exact number of linearly independent equations one obtains in the attacks. In this paper, it is shown that by splitting the equations defined over a block cipher (an SP-network) into two sets, one can determine the exact number of linearly...... independent equations which can be generated in algebraic attacks within each of these sets of a certain degree. While this does not give us a direct formula for the success of algebraic attacks on block ciphers, it gives some interesting bounds on the number of equations one can obtain from a given block...

  4. Why IV Setup for Stream Ciphers is Difficult

    DEFF Research Database (Denmark)

    Zenner, Erik

    2007-01-01

    In recent years, the initialization vector (IV) setup has proven to be the most vulnerable point when designing secure stream ciphers. In this paper, we take a look at possible reasons why this is the case, identifying numerous open research problems in cryptography.

  5. The Rabbit Stream Cipher

    DEFF Research Database (Denmark)

    Boesgaard, Martin; Vesterager, Mette; Zenner, Erik

    2008-01-01

    The stream cipher Rabbit was first presented at FSE 2003, and no attacks against it have been published until now. With a measured encryption/decryption speed of 3.7 clock cycles per byte on a Pentium III processor, Rabbit also provides very high performance. This paper gives a concise...... description of the Rabbit design and some of the cryptanalytic results available....

  6. Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows

    DEFF Research Database (Denmark)

    Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl

    2015-01-01

    Designing block ciphers and hash functions in a manner that resemble the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted...... by the operation resembling ShiftRows has never been studied in depth. Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential- and linear...... normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry...

  7. HYBRID CRYPTOGRAPHY STREAM CIPHER AND RSA ALGORITHM WITH DIGITAL SIGNATURE AS A KEY

    Directory of Open Access Journals (Sweden)

    Grace Lamudur Arta Sihombing

    2017-03-01

    Full Text Available Confidentiality of data is very important in communication. Many cyber crimes exploit security holes for entry and manipulation. To ensure the security and confidentiality of data, a certain technique for encrypting data or information, called cryptography, is required. It is one of the components that cannot be ignored in building security. This research aimed to analyze hybrid cryptography with a symmetric key using a stream cipher algorithm and an asymmetric key using the RSA (Rivest Shamir Adleman) algorithm. The advantages of hybrid cryptography are the speed of processing data using a symmetric algorithm and the easy transfer of keys using an asymmetric algorithm. This can increase the speed of transaction data processing. The stream cipher algorithm uses an image digital signature as a key, which is then secured by the RSA algorithm, so the keys for encryption and decryption are different. The Blum Blum Shub method is used to generate the keys for the values p, q of the RSA algorithm. It will be very difficult for a cryptanalyst to break the key. Analysis of the hybrid cryptography of the stream cipher and RSA algorithms with digital signatures as a key indicates that the size of the encrypted file is equal to the size of the plaintext, neither larger nor smaller, so that the time required for the encryption and decryption process is relatively fast.
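
    As a hedged illustration of the Blum Blum Shub generator mentioned above, the Python sketch below shows the basic recurrence x_{i+1} = x_i^2 mod M with M = p*q and p, q ≡ 3 (mod 4); the toy primes and seed are illustrative only and far too small for real use.

      def bbs_bits(p, q, seed, count):
          # Blum Blum Shub: square the state modulo M = p*q and emit the least
          # significant bit of each state as one pseudo-random bit.
          assert p % 4 == 3 and q % 4 == 3
          m = p * q
          x = seed % m
          bits = []
          for _ in range(count):
              x = (x * x) % m
              bits.append(x & 1)
          return bits

      # Demonstration with toy parameters (far too small for cryptographic use).
      print(bbs_bits(p=499, q=547, seed=159201, count=16))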

  8. A MULTICORE COMPUTER SYSTEM FOR DESIGN OF STREAM CIPHERS BASED ON RANDOM FEEDBACK

    Directory of Open Access Journals (Sweden)

    Borislav BEDZHEV

    2013-01-01

    Full Text Available Stream ciphers are an important tool for providing information security in present-day communication and computer networks. For this reason, our paper describes a multicore computer system for the design of stream ciphers based on so-called random feedback shift registers (RFSRs). The interest in this theme is inspired by the following facts. First, RFSRs are a relatively new type of stream cipher which demonstrates a significant enhancement of crypto-resistance in comparison with classical stream ciphers. Second, the study of the features of RFSRs is at a very early stage. Third, the theory of RFSRs seems to be very hard, which means that RFSRs must be explored mainly by means of computer models. The paper is organized as follows. First, the basics of RFSRs are recalled. After that, our multicore computer system for the design of stream ciphers based on RFSRs is presented. Finally, the advantages and possible areas of application of the computer system are discussed.

  9. A new block cipher based on chaotic map and group theory

    International Nuclear Information System (INIS)

    Yang Huaqian; Liao Xiaofeng; Wong Kwokwo; Zhang Wei; Wei Pengcheng

    2009-01-01

    Based on the study of some existing chaotic encryption algorithms, a new block cipher is proposed. In the proposed cipher, two sequences of decimal numbers individually generated by two chaotic piecewise linear maps are used to determine the noise vectors by comparing the elements of the two sequences. Then a sequence of decimal numbers is used to define a bijection map. The modular multiplication operation in the group Z*_{2^8+1} and permutations are alternately applied on plaintext with a block length of multiples of 64 bits to produce ciphertext blocks of the same length. Analyses show that the proposed block cipher does not suffer from the flaws of pure chaotic cryptosystems.
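
    For reference, multiplication in the group Z*_{2^8+1} mentioned above is ordinary multiplication modulo the prime 257; the Python fragment below is a minimal illustration of that single operation, not code from the paper.

      MOD = 2**8 + 1   # 257 is prime, so the nonzero residues form a multiplicative group

      def mul_257(a, b):
          # Multiply two group elements; operands must lie in 1..256.
          assert 1 <= a <= 256 and 1 <= b <= 256
          return (a * b) % MOD

      print(mul_257(200, 3))   # 600 mod 257 = 86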

  10. Coherent pulse position modulation quantum cipher

    Energy Technology Data Exchange (ETDEWEB)

    Sohma, Masaki; Hirota, Osamu [Quantum ICT Research Institute, Tamagawa University, 6-1-1 Tamagawa-gakuen, Machida, Tokyo 194-8610 (Japan)

    2014-12-04

    On the basis of fundamental idea of Yuen, we present a new type of quantum random cipher, where pulse position modulated signals are encrypted in the picture of quantum Gaussian wave form. We discuss the security of our proposed system with a phase mask encryption.

  11. Hardware Implementation of Artificial Neural Network for Data Ciphering

    Directory of Open Access Journals (Sweden)

    Sahar L. Kadoory

    2016-10-01

    Full Text Available This paper introduces the design and realization of multiple block ciphering techniques on an FPGA (Field Programmable Gate Array). Back-propagation neural networks have been built for substitution, permutation and XOR block ciphering using the Neural Network Toolbox in MATLAB. They are trained to encrypt the data after obtaining the suitable weights, biases, activation function and layout. Afterward, they are described using VHDL and implemented on a Xilinx Spartan-3E FPGA using two approaches: serial and parallel versions. The simulation results were obtained with Xilinx ISE 9.2i software. The numerical precision is chosen carefully when implementing the neural network on the FPGA. Results obtained from the hardware designs show accurate numeric values for ciphering the data. As expected, the synthesis results indicate that the serial version requires fewer area resources than the parallel version, while the data throughput of the parallel version is higher than that of the serial version by a factor of about 1.13-1.5. Also, a slight difference can be observed in the maximum frequency.

  12. An Enhanced Vigenere Cipher For Data Security

    Directory of Open Access Journals (Sweden)

    Aized Amin Soofi

    2015-08-01

    Full Text Available In today's world the amount of data that is exchanged has increased in the last few years, so securing the information has become a crucial task. Cryptography is the art of converting a plain text message into an unreadable message. Encryption algorithms play an important role in information security systems, and encryption is considered one of the most powerful tools for the secure transmission of data over a communication network. The Vigenere technique is an example of a polyalphabetic stream cipher; it has various limitations, such as the Kasiski and Friedman attacks that find the length of the encryption key. In this paper an enhanced version of the traditional Vigenere cipher is proposed that eliminates the chances of the Kasiski and Friedman attacks. The proposed technique also provides better security against cryptanalysis and pattern prediction.
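
    For readers unfamiliar with the classical scheme being enhanced, the Python sketch below implements the traditional Vigenere cipher only (an illustration, not the enhanced variant proposed in the paper).

      from string import ascii_uppercase as ALPHA

      def vigenere(text, key, decrypt=False):
          # Classical Vigenere cipher over A-Z: shift each letter by the
          # corresponding letter of the repeating key.
          out, j = [], 0
          for ch in text.upper():
              if ch in ALPHA:
                  shift = ALPHA.index(key[j % len(key)].upper())
                  if decrypt:
                      shift = -shift
                  out.append(ALPHA[(ALPHA.index(ch) + shift) % 26])
                  j += 1
              else:
                  out.append(ch)   # leave non-letters unchanged
          return "".join(out)

      cipher = vigenere("ATTACKATDAWN", "LEMON")
      print(cipher, vigenere(cipher, "LEMON", decrypt=True))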

  13. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    Science.gov (United States)

    Shahid, Zafar; Puech, William

    2013-02-01

    This paper presents a novel method for the real-time protection of the new emerging High Efficiency Video Coding (HEVC) standard. Structure preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from CABAC entropy coding of H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) up to a specific value for binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context aware manner. The encrypted bitstream has exactly the same bit-rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture and objects.
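
    The CABAC-domain details above are specific to the paper, but the underlying primitive, AES in cipher feedback (CFB) mode applied to a byte string, can be sketched with the widely used Python cryptography package; the key, IV and binstring payload below are placeholders, not values from the HEVC experiments.

      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      key = os.urandom(16)                    # placeholder 128-bit AES key
      iv = os.urandom(16)                     # placeholder initialisation vector
      binstrings = b"\x01\x7f\x3a\x00\x42"    # stand-in for selected CABAC binstrings

      # CFB mode acts as a self-synchronising stream cipher, so the ciphertext
      # has exactly the same length as the plaintext (no padding, no size change).
      enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
      ciphertext = enc.update(binstrings) + enc.finalize()

      dec = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
      assert dec.update(ciphertext) + dec.finalize() == binstrings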

  14. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    Science.gov (United States)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptographic algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that works by XORing the plaintext with a key. To improve the security of the symmetric stream cipher algorithm, an improvement is made by using a Triple Transposition Key, which is developed from the transposition cipher, and the Base64 algorithm is also used as the final step of encryption. Experiments show that the resulting ciphertext is good and very random.

  15. Codes, Ciphers, and Cryptography--An Honors Colloquium

    Science.gov (United States)

    Karls, Michael A.

    2010-01-01

    At the suggestion of a colleague, I read "The Code Book", [32], by Simon Singh to get a basic introduction to the RSA encryption scheme. Inspired by Singh's book, I designed a Ball State University Honors Colloquium in Mathematics for both majors and non-majors, with material coming from "The Code Book" and many other sources. This course became…

  16. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  17. Observations on the SIMON Block Cipher Family

    DEFF Research Database (Denmark)

    Kölbl, Stefan; Leander, Gregor; Tiessen, Tyge

    2015-01-01

    In this paper we analyse the general class of functions underlying the Simon block cipher. In particular, we derive efficiently computable and easily implementable expressions for the exact differential and linear behaviour of Simon-like round functions. Following up on this, we use those...

  18. From Greeks to Today: Cipher Trees and Computer Cryptography.

    Science.gov (United States)

    Grady, M. Tim; Brumbaugh, Doug

    1988-01-01

    Explores the use of computers for teaching mathematical models of transposition ciphers. Illustrates the ideas, includes activities and extensions, provides a mathematical model and includes computer programs to implement these topics. (MVL)
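
    As a concrete instance of the transposition ciphers discussed in the article (a minimal Python sketch added here for illustration, not the programs from the article), a columnar transposition writes the plaintext into rows and reads it out column by column in key order.

      def columnar_encrypt(plaintext, key):
          # Columnar transposition: fill rows of width len(key), then read the
          # columns in the alphabetical order of the key letters.
          cols = len(key)
          rows = [plaintext[i:i + cols] for i in range(0, len(plaintext), cols)]
          order = sorted(range(cols), key=lambda c: key[c])
          return "".join(row[c] for c in order for row in rows if c < len(row))

      print(columnar_encrypt("WEAREDISCOVEREDFLEEATONCE", "ZEBRA"))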

  19. Exploring Energy Efficiency of Lightweight Block Ciphers

    DEFF Research Database (Denmark)

    Banik, Subhadeep; Bogdanov, Andrey; Regazzoni, Francesco

    2016-01-01

    is the encryption of one plaintext. By studying the energy consumption model of a CMOS gate, we arrive at the conclusion that the energy consumed per cycle during the encryption operation of an r-round unrolled architecture of any block cipher is a quadratic function in r. We then apply our model to 9 well known...
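
    A hedged reading of this model, in notation introduced here rather than quoted from the paper: if the per-cycle energy of an r-round unrolled datapath is

      E_{cyc}(r) \approx a r^{2} + b r + c ,

    and an R-round cipher then needs roughly \lceil R/r \rceil + 1 clock cycles per block, the energy per encryption is E_{enc}(r) \approx (\lceil R/r \rceil + 1)\,E_{cyc}(r), and the most energy-efficient unrolling degree is the value of r that minimises E_{enc}.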

  20. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information that might be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The digital image encryption algorithm using a logistic function and DNA encoding uses DNA encoding to scramble the pixel values into DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The results of the tests show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly and the encryption algorithm has good resistance to entropy attack and statistical attack.
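
    A minimal Python sketch of the role played by the logistic map as a random number generator; the parameters and the byte-quantisation rule are illustrative assumptions, not the exact construction of the paper.

      def logistic_bytes(x0, r, count):
          # Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) and quantise
          # each state into one byte for use in complement/XOR operations.
          x, out = x0, []
          for _ in range(count):
              x = r * x * (1 - x)
              out.append(int(x * 256) % 256)   # illustrative quantisation rule
          return bytes(out)

      key_stream = logistic_bytes(x0=0.7312, r=3.999, count=8)
      pixels = bytes([12, 200, 45, 99, 180, 7, 66, 233])
      cipher = bytes(p ^ k for p, k in zip(pixels, key_stream))
      print(cipher.hex())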

  1. Cryptanalysis of an ergodic chaotic cipher

    International Nuclear Information System (INIS)

    Alvarez, G.; Montoya, F.; Romera, M.; Pastor, G.

    2003-01-01

    In recent years, a growing number of cryptosystems based on chaos have been proposed, many of them fundamentally flawed by a lack of robustness and security. In this Letter, we offer our results after having studied the security and possible attacks on a very interesting cipher algorithm based on the logistic map's ergodicity property. This algorithm has become very popular recently, as it has been taken as the development basis of new chaotic cryptosystems

  2. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Science.gov (United States)

    Aryanti, Aryanti; Mekongga, Ikhthison

    2018-02-01

    Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is by using cryptography. In this study, a data security system was developed by implementing the Rivest Shamir Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher cryptographic algorithms on document files, whether Word, Excel, or PDF. This application includes the process of encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, and then proceeds with the Vigenere Cipher algorithm, which also uses a public key. On the receiving side, decryption uses the Vigenere Cipher algorithm, still with the public key, and then the RSA cryptographic algorithm with a private key. Test results show that the system can encrypt, decrypt and transmit files. Tests were performed on the encryption and decryption of files with different sizes; the file size affects the encryption and decryption process, and the larger the file, the longer the encryption and decryption take.

  3. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate...... methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. On the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how...... coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...

  4. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on coded aperture technique. This scheme can help us obtain HDR images which have extended depth of field. We adopt Sparse coding algorithm to design coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. With the guide of the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use some existing algorithms to fuse and display a HDR image by those LDR images. We build an optical simulation model and get some simulation images to verify the novel system.

  5. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology to meet various needs, adverse actions that are difficult to avoid are emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome these problems. This study uses the modified Vigenere Cipher, Least Significant Bit and Dictionary Based Compression methods. To determine the performance of the study, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparing, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with a range of MSE values (0.0191622-0.05275) and PSNR (60.909 to 65.306) for a hidden file size of 18 kb, and a MOS value range (4.214 to 4.722), i.e., image quality approaching very good.
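
    For reference, the objective metric quoted above can be computed as in the generic Python/NumPy sketch below; the 8-bit peak value and the synthetic images are assumptions for illustration, not the authors' evaluation code.

      import numpy as np

      def psnr(cover, stego, peak=255.0):
          # Peak signal-to-noise ratio between a cover image and its stego version,
          # both given as arrays of 8-bit pixel values.
          cover = np.asarray(cover, dtype=np.float64)
          stego = np.asarray(stego, dtype=np.float64)
          mse = np.mean((cover - stego) ** 2)
          if mse == 0:
              return float("inf")              # identical images
          return 10.0 * np.log10(peak ** 2 / mse)

      cover = np.random.randint(0, 256, size=(64, 64))
      stego = cover.copy()
      stego[0, 0] ^= 1                         # flip one least significant bit
      print("PSNR = %.2f dB" % psnr(cover, stego))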

  6. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Directory of Open Access Journals (Sweden)

    Aryanti Aryanti

    2018-01-01

    Full Text Available Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is by using cryptography. In this study, a data security system was developed by implementing the Rivest Shamir Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher cryptographic algorithms on document files, whether Word, Excel, or PDF. This application includes the process of encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, and then proceeds with the Vigenere Cipher algorithm, which also uses a public key. On the receiving side, decryption uses the Vigenere Cipher algorithm, still with the public key, and then the RSA cryptographic algorithm with a private key. Test results show that the system can encrypt, decrypt and transmit files. Tests were performed on the encryption and decryption of files with different sizes; the file size affects the encryption and decryption process, and the larger the file, the longer the encryption and decryption take.

  7. McBits: fast constant-time code-based cryptography

    NARCIS (Netherlands)

    Bernstein, D.J.; Chou, T.; Schwabe, P.

    2015-01-01

    This paper presents extremely fast algorithms for code-based public-key cryptography, including full protection against timing attacks. For example, at a 2^128 security level, this paper achieves a reciprocal decryption throughput of just 60493 cycles (plus cipher cost etc.) on a single Ivy Bridge

  8. Novel Quantum Encryption Algorithm Based on Multiqubit Quantum Shift Register and Hill Cipher

    International Nuclear Information System (INIS)

    Khalaf, Rifaat Zaidan; Abdullah, Alharith Abdulkareem

    2014-01-01

    Based on a quantum shift register, a novel quantum block cryptographic algorithm that can be used to encrypt classical messages is proposed. The message is encoded and decoded by using a code generated by the quantum shift register. The security of this algorithm is analysed in detail. It is shown that, in the quantum block cryptographic algorithm, two keys can be used. One of them is the classical key that is used in the Hill cipher algorithm where Alice and Bob use the authenticated Diffie Hellman key exchange algorithm using the concept of digital signature for the authentication of the two communicating parties and so eliminate the man-in-the-middle attack. The other key is generated by the quantum shift register and used for the coding of the encryption message, where Alice and Bob share the key by using the BB84 protocol. The novel algorithm can prevent a quantum attack strategy as well as a classical attack strategy. The problem of key management is discussed and circuits for the encryption and the decryption are suggested
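
    For context, the classical Hill cipher referenced above multiplies plaintext vectors by a key matrix modulo the alphabet size; the Python sketch below shows encryption with an illustrative 2x2 key (the key and padding rule are assumptions, and the quantum and key-exchange layers of the paper are not modelled).

      def hill_encrypt(text, key):
          # Classical Hill cipher over A-Z: split the message into vectors of
          # length 2 and multiply by the 2x2 key matrix modulo 26.
          nums = [ord(c) - 65 for c in text.upper() if c.isalpha()]
          if len(nums) % 2:
              nums.append(23)                  # pad with 'X'
          out = []
          for i in range(0, len(nums), 2):
              x, y = nums[i], nums[i + 1]
              out.append((key[0][0] * x + key[0][1] * y) % 26)
              out.append((key[1][0] * x + key[1][1] * y) % 26)
          return "".join(chr(n + 65) for n in out)

      # The key matrix must be invertible modulo 26 for decryption to work.
      print(hill_encrypt("HELP", [[3, 3], [2, 5]]))   # -> HIAT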

  9. On the (In)Equivalence of Impossible Differential and Zero-Correlation Distinguishers for Feistel- and Skipjack-Type Ciphers

    DEFF Research Database (Denmark)

    Blondeau, Celine; Bogdanov, Andrey; Wang, Meiqin

    2014-01-01

    or inequivalence has not been formally addressed so far in a constructive practical way.In this paper, we aim to bridge this gap in the understanding of the links between the ID and ZC properties. We tackle this problem at the example of two wide classes of ciphers, namely, Feistel- and Skipjack-type ciphers...

  10. Tree Coding of Bilevel Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional...... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...... images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding...

  11. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.

  12. Survey and Benchmark of Block Ciphers for Wireless Sensor Networks

    NARCIS (Netherlands)

    Law, Y.W.; Doumen, J.M.; Hartel, Pieter H.

    Cryptographic algorithms play an important role in the security architecture of wireless sensor networks (WSNs). Choosing the most storage- and energy-efficient block cipher is essential, due to the facts that these networks are meant to operate without human intervention for a long period of time

  13. An implementation of super-encryption using RC4A and MDTM cipher algorithms for securing PDF Files on android

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.

    2018-03-01

    MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM Cipher algorithm is easy to implement but it is less secure compared to modern symmetric algorithms. In order to make it more secure, a stream cipher, RC4A, is added and thus the cryptosystem becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM Cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and the running time is directly proportional to the length of the plaintext and the keys entered.
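
    As context for the stream-cipher half of the super encryption, the Python sketch below shows RC4A keystream generation as it is usually described (two RC4 states that index into each other); the keys are placeholders, and the MDTM half of the scheme is not modelled.

      def rc4_ksa(key):
          # Standard RC4 key scheduling: derive a key-dependent permutation of 0..255.
          s, j = list(range(256)), 0
          for i in range(256):
              j = (j + s[i] + key[i % len(key)]) % 256
              s[i], s[j] = s[j], s[i]
          return s

      def rc4a_keystream(key1, key2, nbytes):
          # RC4A as usually described: two RC4 states emit two bytes per step,
          # each output byte read from the other state's array.
          s1, s2 = rc4_ksa(key1), rc4_ksa(key2)
          i = j1 = j2 = 0
          out = []
          while len(out) < nbytes:
              i = (i + 1) % 256
              j1 = (j1 + s1[i]) % 256
              s1[i], s1[j1] = s1[j1], s1[i]
              out.append(s2[(s1[i] + s1[j1]) % 256])
              j2 = (j2 + s2[i]) % 256
              s2[i], s2[j2] = s2[j2], s2[i]
              out.append(s1[(s2[i] + s2[j2]) % 256])
          return bytes(out[:nbytes])

      print(rc4a_keystream(b"key-one", b"key-two", 16).hex())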

  14. Hybrid Message-Embedded Cipher Using Logistic Map

    OpenAIRE

    Mishra, Mina; Mankar, V. H.

    2012-01-01

    The proposed hybrid message-embedded scheme consists of a Hill cipher combined with a message-embedded chaotic scheme. The message-embedded scheme, using a non-linear feedback shift register as the non-linear function and the 1-D logistic map as the chaotic map, is modified, analyzed and tested for the avalanche property and strength against known-plaintext and brute-force attacks. The parameter of the logistic map acts as a secret key. As we know, the minimum key space to resist a brute-force attack is 2^100, and it is ...

  15. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are scrambled by Arnold map firstly. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)

  16. Approximation of a chaotic orbit as a cryptanalytical method on Baptista's cipher

    International Nuclear Information System (INIS)

    Skrobek, Adrian

    2008-01-01

    Many cryptographic schemes based on the M.S. Baptista algorithm have been created. The original algorithm and some of the versions based upon it were put to the test with various cryptanalytic techniques. This Letter shows a new approach to the cryptanalysis of Baptista's cipher. The presumption is that the attacker knows the mapping between the characters of the plaintext and the numbers of the ε-interval. Then, depending on the amount of knowledge about the key possessed, the estimation of all components of the key requires a different computational complexity; however, it is possible. This Letter also takes into consideration, independently, all the components of the key from M.S. Baptista's original algorithm. The main aim is the use of the approximation of the blurred chaotic orbit's real value in Baptista-type cipher cryptanalysis.

  17. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in ultrasound imaging systems. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse in simulations. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR.
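
    The -22 dB figure quoted above can be reproduced directly from the length-13 Barker sequence; the short NumPy sketch below computes its aperiodic autocorrelation (an illustration added here, not the authors' simulation code).

      import numpy as np

      # Length-13 Barker code: autocorrelation peak 13, sidelobe magnitude <= 1.
      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

      acf = np.correlate(barker13, barker13, mode="full")
      peak = acf.max()                                        # 13 at zero lag
      sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # 1 elsewhere
      print("peak-to-sidelobe = %.1f dB" % (20 * np.log10(sidelobe / peak)))  # about -22.3 dB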

  18. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination

  19. Balanced distributed coding of omnidirectional images

    Science.gov (United States)

    Thirumalai, Vijayaraghavan; Tosic, Ivana; Frossard, Pascal

    2008-01-01

    This paper presents a distributed coding scheme for the representation of 3D scenes captured by stereo omni-directional cameras. We consider a scenario where images captured from two different viewpoints are encoded independently, with a balanced rate distribution among the different cameras. The distributed coding is built on multiresolution representation and partitioning of the visual information in each camera. The encoder transmits one partition after entropy coding, as well as the syndrome bits resulting from the channel encoding of the other partition. The decoder exploits the intra-view correlation and attempts to reconstruct the source image by combination of the entropy-coded partition and the syndrome information. At the same time, it exploits the inter-view correlation using motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution performs better than a scheme where images are handled independently, and that the coding rate stays balanced between encoders.

  20. Key Recovery Attacks on Recent Authenticated Ciphers

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Dobraunig, Christoph; Eichlseder, Maria

    2014-01-01

    In this paper, we cryptanalyze three authenticated ciphers: AVALANCHE, Calico, and RBS. While the former two are contestants in the ongoing international CAESAR competition for authenticated encryption schemes, the latter has recently been proposed for lightweight applications such as RFID systems...... and wireless networks. All these schemes use well-established and secure components such as the AES, Grain-like NFSRs, ChaCha and SipHash as their building blocks. However, we discover key recovery attacks for all three designs, featuring square-root complexities. Using a key collision technique, we can...

  1. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image combined with the adaptive probability model and predictive coding. The use of adaptive models for each encoded image block dynamically estimates the probability of the relevant image block. The decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.

  2. Fides: Lightweight Authenticated Cipher with Side-Channel Resistance for Constrained Hardware

    DEFF Research Database (Denmark)

    Bilgin, Begul; Bogdanov, Andrey; Knezevic, Miroslav

    2013-01-01

    In this paper, we present a novel lightweight authenticated cipher optimized for hardware implementations called Fides. It is an online nonce-based authenticated encryption scheme with authenticated data whose area requirements are as low as 793 GE and 1001 GE for 80-bit and 96-bit security...

  3. Advanced Imaging Optics Utilizing Wavefront Coding.

    Energy Technology Data Exchange (ETDEWEB)

    Scrymgeour, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boye, Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adelsberger, Kathleen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  4. Fast encryption of image data using chaotic Kolmogorov flows

    Science.gov (United States)

    Scharinger, Josef

    1998-04-01

    To guarantee security and privacy in image and video archival applications, efficient bulk encryption techniques are necessary which are easily implementable in soft- and hardware and are able to cope with the vast amounts of data involved. Experience has shown that block-oriented symmetric product ciphers constitute an adequate design paradigm for resolving this task, since they can offer a very high level of security as well as very high encryption rates. In this contribution we introduce a new product cipher which encrypts large blocks of plain text by repeated intertwined application of substitution and permutation operations. While almost all of the current product ciphers use fixed permutation operations on small data blocks, our approach involves parametrizable permutations on large data blocks induced by specific chaotic systems. By combining these highly unstable dynamics with an adaption of a very fast shift register based pseudo-random number generator, we obtain a new class of computationally secure product ciphers which offer many features that make them superior to contemporary bulk encryption systems when aiming at efficient image and video data encryption.

  5. Survey of coded aperture imaging

    International Nuclear Information System (INIS)

    Barrett, H.H.

    1975-01-01

    The basic principle and limitations of coded aperture imaging for x-ray and gamma cameras are discussed. Current trends include (1) use of time varying apertures, (2) use of ''dilute'' apertures with transmission much less than 50%, and (3) attempts to derive transverse tomographic sections, unblurred by other planes, from coded images

  6. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova

    2000-12-01

    Full Text Available In this paper are presented some results of implementation of digital watermarking methods into image coding based on fractal principles. The paper focuses on two possible approaches of embedding digital watermarks into fractal code of images - embedding digital watermarks into parameters for position of similar blocks and coefficients of block similarity. Both algorithms were analyzed and verified on grayscale static images.

  7. Comparing the Cost of Protecting Selected Lightweight Block Ciphers against Differential Power Analysis in Low-Cost FPGAs

    Directory of Open Access Journals (Sweden)

    William Diehl

    2018-04-01

    Full Text Available Lightweight block ciphers are an important topic in the Internet of Things (IoT) since they provide moderate security while requiring fewer resources than the Advanced Encryption Standard (AES). Ongoing cryptographic contests and standardization efforts evaluate lightweight block ciphers on their resistance to power analysis side channel attack (SCA), and the ability to apply countermeasures. While some ciphers have been individually evaluated, a large-scale comparison of resistance to side channel attack and the formulation of absolute and relative costs of implementing countermeasures is difficult, since researchers typically use varied architectures, optimization strategies, technologies, and evaluation techniques. In this research, we leverage the Test Vector Leakage Assessment (TVLA) methodology and the FOBOS SCA framework to compare FPGA implementations of AES, SIMON, SPECK, PRESENT, LED, and TWINE, using a choice of architecture targeted to optimize throughput-to-area (TP/A) ratio and suitable for introducing countermeasures to Differential Power Analysis (DPA). We then apply an equivalent level of protection to the above ciphers using 3-share threshold implementations (TI) and verify the improved resistance to DPA. We find that SIMON has the highest absolute TP/A ratio of protected versions, as well as the lowest relative cost of protection in terms of TP/A ratio. Additionally, PRESENT uses the least energy per bit (E/bit) of all protected implementations, while AES has the lowest relative cost of protection in terms of increased E/bit.
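
    At the heart of the TVLA methodology used above is Welch's t-test between a fixed-plaintext and a random-plaintext set of power traces, with |t| > 4.5 commonly taken as evidence of leakage; the NumPy sketch below illustrates that computation on synthetic traces (the threshold convention and the trace data are illustrative, not results from the paper).

      import numpy as np

      def tvla_t_statistic(fixed, rand):
          # Welch's t-statistic per sample point between two sets of traces
          # (rows = traces, columns = time samples).
          m1, m2 = fixed.mean(axis=0), rand.mean(axis=0)
          v1, v2 = fixed.var(axis=0, ddof=1), rand.var(axis=0, ddof=1)
          n1, n2 = fixed.shape[0], rand.shape[0]
          return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

      rng = np.random.default_rng(0)
      fixed_traces = rng.normal(0.0, 1.0, size=(1000, 500))   # synthetic, leak-free
      random_traces = rng.normal(0.0, 1.0, size=(1000, 500))
      t = tvla_t_statistic(fixed_traces, random_traces)
      print("points over threshold:", np.count_nonzero(np.abs(t) > 4.5))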

  8. Coding and transmission of subband coded images on the Internet

    Science.gov (United States)

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.

  9. Performance of Cellular Automata-based Stream Ciphers in GPU Implementation

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2016-01-01

    Full Text Available Earlier, the author developed methods to build high-performance symmetric ciphers based on generalized cellular automata, which yield encryption algorithms with extremely high performance in hardware implementations. However, their implementation on conventional microprocessors lacks high performance; this fact is quite common and indicates the scope of applications for these ciphers. Nevertheless, the use of graphics processors makes it possible to achieve appropriate performance in a software implementation. The article extends a series of articles that study various aspects of constructing and implementing cryptographic algorithms based on generalized cellular automata, and it is aimed at studying the capabilities of GPU-based implementations of the cryptographic algorithms under consideration. Representing a key generator, the implemented encryption algorithm comprises 2k generalized cellular automata whose graphs are Ramanujan graphs. The cells of the k produced gamma streams alternate, thereby allowing the GPU capabilities to be better used. OpenCL was used for the implementation, as the most universal and platform-independent API. The software, written in C++, was designed so that the user can set various parameters, including the encryption key, the graph structure, the local communication function, various constants, etc. A variety of graphics processors were used for testing (NVIDIA GTX 650; NVIDIA GTX 770; AMD R9 280X). Depending on the operating conditions and the GPU used, the performance ranges from 0.47 to 6.61 Gb/s, which is comparable to the performance of counterparts. Thus, the article demonstrates that using a GPU makes it possible to provide an efficient software implementation of stream ciphers based on generalized cellular automata. This work was supported by the RFBR, project №16-07-00542.

  10. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.

  11. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  12. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  13. Progress in Y-00 physical cipher for Giga bit/sec optical data communications (intensity modulation method)

    Science.gov (United States)

    Hirota, Osamu; Futami, Fumio

    2014-10-01

    Guaranteeing the security of cloud computing systems is an urgent problem. Although there are several security threats, the most serious is a cyber attack against the optical fiber transmission among data centers. In such a network, an encryption scheme on Layer 1 (the physical layer) with ultimately strong security, a small delay, and a very high speed should be employed, because a basic optical link is operated at 10 Gbit/sec/wavelength. Over the past decade we have developed a quantum noise randomized stream cipher, the so-called Yuen-2000 encryption scheme (Y-00). This type of cipher is a completely new kind of random cipher in which the ciphertexts for a legitimate receiver and an eavesdropper are different; this is a condition for breaking the Shannon limit in the theory of cryptography. In addition, this scheme has a good balance of security, speed and cost performance. To realize such an encryption, several modulation methods are candidates, such as phase modulation, intensity modulation, quadrature amplitude modulation, and so on. The Northwestern University group demonstrated a phase modulation system (α=η) in 2003. In 2005, we reported a demonstration of a 1 Gbit/sec system based on an intensity modulation scheme (ISK-Y00), and gave a design method for quadrature amplitude modulation (QAM-Y00) in 2005 and 2010. An intensity modulation scheme promises a real application to secure fiber communication among current data centers. This paper presents progress in the quantum noise randomized stream cipher based on ISK-Y00, integrating our theoretical and experimental achievements of the past and a recent 100 Gbit/sec (10 Gbit/sec × 10 wavelengths) experiment.

  14. Cryptanalysis of Lin et al.'s Efficient Block-Cipher-Based Hash Function

    NARCIS (Netherlands)

    Liu, Bozhong; Gong, Zheng; Chen, Xiaohong; Qiu, Weidong; Zheng, Dong

    2010-01-01

    Hash functions are widely used in authentication. In this paper, the security of Lin et al.'s efficient block-cipher-based hash function is reviewed. By using Joux's multicollisions and Kelsey et al.'s expandable message techniques, we find the scheme is vulnerable to collision, preimage and second

  15. Toward privacy-preserving JPEG image retrieval

    Science.gov (United States)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.

  16. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when there are wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for the coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The Matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT and details of the design of the EXIST/HET.

  17. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  18. Future trends in image coding

    Science.gov (United States)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  19. Coding aperture applied to X-ray imaging

    International Nuclear Information System (INIS)

    Brunol, J.; Sauneuf, R.; Gex, J.P.

    1980-05-01

    We present some X-ray images of grids and plasmas. These images were obtained by using a single circular slit (annular code) as the coding aperture and a computer decoding process. The experimental resolution is better than 10 μm, and it is expected to be on the order of 2 or 3 μm with the same code and an improved decoding process

  20. A chaos-based digital image encryption scheme with an improved diffusion strategy.

    Science.gov (United States)

    Fu, Chong; Chen, Jun-jie; Zou, Hao; Meng, Wei-hong; Zhan, Yong-feng; Yu, Ya-wen

    2012-01-30

    Chaos-based image ciphers have been widely investigated over the last decade or so to meet the increasing demand for real-time secure image transmission over public networks. In this paper, an improved diffusion strategy is proposed to improve the efficiency of the most widely investigated permutation-diffusion type image cipher. By using the novel bidirectional diffusion strategy, the spreading process is significantly accelerated and hence the same level of security can be achieved with fewer overall encryption rounds. Moreover, to further enhance the security of the cryptosystem, a plaintext-related chaotic orbit turbulence mechanism is introduced in the diffusion procedure by perturbing the control parameter of the employed chaotic system according to the cipher-pixel. Extensive cryptanalysis has been performed on the proposed scheme using differential analysis, key space analysis, various statistical analyses and key sensitivity analysis. Results of our analyses indicate that the new scheme has a satisfactory security level with a low computational complexity, which renders it a good candidate for real-time secure image transmission applications.
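
    To make the permutation-diffusion structure concrete, here is a toy round using a logistic-map keystream with a forward and a backward diffusion pass (illustrative only; the map, parameters and key handling are placeholders, not the scheme proposed in the paper):

      import numpy as np

      def logistic_stream(x0, mu, n):
          # Keystream from the logistic map x -> mu*x*(1-x), quantised to bytes.
          xs = np.empty(n)
          x = x0
          for i in range(n):
              x = mu * x * (1.0 - x)
              xs[i] = x
          return (xs * 255).astype(np.uint8), xs

      def toy_encrypt(img, x0=0.3456, mu=3.99):
          flat = img.flatten().astype(np.int64)
          n = flat.size
          ks, xs = logistic_stream(x0, mu, n)
          perm = np.argsort(xs)                      # chaotic permutation stage
          c = flat[perm]
          for i in range(1, n):                      # forward diffusion pass
              c[i] = (c[i] + c[i - 1] + int(ks[i])) % 256
          for i in range(n - 2, -1, -1):             # backward diffusion pass
              c[i] = (c[i] + c[i + 1] + int(ks[i])) % 256
          return c.reshape(img.shape).astype(np.uint8), perm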

  1. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded by convolutional codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communications channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolution encoded with the encoder. After parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.
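
    A hedged sketch of the two ingredients named above, a rate-1/2 convolutional encoder for the high bits and LSB embedding of the resulting parity; the generator polynomials and the embedding layout are illustrative assumptions, not the paper's exact design:

      import numpy as np

      def conv_encode(bits, g1=0b111, g2=0b101):
          # Rate-1/2 convolutional encoder, constraint length 3.
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | int(b)) & 0b111
              out.append(bin(state & g1).count("1") % 2)   # parity stream 1
              out.append(bin(state & g2).count("1") % 2)   # parity stream 2
          return np.array(out, dtype=np.uint8)

      def embed_lsb(img, payload_bits):
          # Hide the authentication bits in the least significant bits.
          flat = img.flatten().copy()
          flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
          return flat.reshape(img.shape)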

  2. Estimating the Probabilities of Low-Weight Differential and Linear Approximations on PRESENT-like Ciphers

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed

    2012-01-01

    We use large but sparse correlation and transition-difference-probability submatrices to find the best linear and differential approximations respectively on PRESENT-like ciphers. This outperforms the branch and bound algorithm when the number of low-weight differential and linear characteristics...

  3. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at a bit rate of about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
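
    The basic encode/decode cycle that the thesis builds on fits in a few lines; codebook design (K-means or the generalized Lloyd algorithm) and the address-prediction stages are omitted in this sketch:

      import numpy as np

      def vq_encode(vectors, codebook):
          # Index of the nearest codeword (squared-error criterion) for each
          # input vector; only these indices are sent to the channel.
          d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
          return d.argmin(axis=1)

      def vq_decode(indices, codebook):
          # Table-lookup reconstruction: the index addresses the codebook.
          return codebook[indices]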

  4. Subband coding for image data archiving

    Science.gov (United States)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  5. Coding of Depth Images for 3DTV

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper a brief overview of the topic of coding and compression of depth images for multi-view image and video coding is provided. Depth images represent a convenient way to describe distances in the 3D scene, useful for 3D video processing purposes. Standard approaches...... for the compression of depth images are described and compared against some recent specialized algorithms able to achieve higher compression performances. Future research directions close the paper....

  6. X-ray image coding

    International Nuclear Information System (INIS)

    1974-01-01

    The invention aims at decreasing the effect of stray radiation in X-ray images. This is achieved by putting a plate between the source and the object with parallel zones of alternating high and low absorption coefficients for X-radiation. The image is scanned with the help of electronic circuits which decode the signal spatially coded by the plate, thus removing the stray radiation

  7. Field performance of timber bridges. 17, Ciphers stress-laminated deck bridge

    Science.gov (United States)

    James P. Wacker; James A. Kainz; Michael A. Ritter

    In September 1989, the Ciphers bridge was constructed within the Beltrami Island State Forest in Roseau County, Minnesota. The bridge superstructure is a two-span continuous stress-laminated deck that is approximately 12.19 m long, 5.49 m wide, and 305 mm deep (40 ft long, 18 ft wide, and 12 in. deep). The bridge is one of the first to utilize red pine sawn lumber for...

  8. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  9. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study aims to combine the Trithemius algorithm and the double transposition cipher for file security, implemented as an Android-based application. The parameters examined are the real running time and the complexity value. The type of file used is the PDF format. The overall result shows that the complexity of the two algorithms under the super-encryption method is Θ(n²). However, the encryption process using the Trithemius algorithm is much faster than that using the Double Transposition Cipher, and the processing time is linearly proportional to the length of the plaintext and password.

  10. Chaotic Image Encryption Algorithm Based on Circulant Operation

    Directory of Open Access Journals (Sweden)

    Xiaoling Huang

    2013-01-01

    Full Text Available A novel chaotic image encryption scheme based on the time-delay Lorenz system is presented in this paper with the description of Circulant matrix. Making use of the chaotic sequence generated by the time-delay Lorenz system, the pixel permutation is carried out in diagonal and antidiagonal directions according to the first and second components. Then, a pseudorandom chaotic sequence is generated again from time-delay Lorenz system using all components. Modular operation is further employed for diffusion by blocks, in which the control parameter is generated depending on the plain-image. Numerical experiments show that the proposed scheme possesses the properties of a large key space to resist brute-force attack, sensitive dependence on secret keys, uniform distribution of gray values in the cipher-image, and zero correlation between two adjacent cipher-image pixels. Therefore, it can be adopted as an effective and fast image encryption algorithm.

  11. Fast-neutron, coded-aperture imager

    International Nuclear Information System (INIS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-01-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  12. Fast-neutron, coded-aperture imager

    Energy Technology Data Exchange (ETDEWEB)

    Woolf, Richard S., E-mail: richard.woolf@nrl.navy.mil; Phlips, Bernard F., E-mail: bernard.phlips@nrl.navy.mil; Hutcheson, Anthony L., E-mail: anthony.hutcheson@nrl.navy.mil; Wulf, Eric A., E-mail: eric.wulf@nrl.navy.mil

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  13. Pseudo color ghost coding imaging with pseudo thermal light

    Science.gov (United States)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Compared with conventional pseudo color imaging, where there are no nondegenerate-wavelength spatial correlations that would yield extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.

  14. PDF file encryption on mobile phone using super-encryption of Variably Modified Permutation Composition (VMPC) and two square cipher algorithm

    Science.gov (United States)

    Rachmawati, D.; Budiman, M. A.; Atika, F.

    2018-03-01

    Data security is becoming one of the most significant challenges in the digital world. Retrieval of data by unauthorized parties will result in harm to the owner of the data. PDF data are also susceptible to such security breaches, and these things affect the security of the information. To solve the security problem, a method is needed to maintain the protection of the data, such as cryptography. In cryptography, several algorithms can encode data; one of them is the Two Square Cipher algorithm, which is a symmetric algorithm. In this research, the Two Square Cipher algorithm has been extended to a 16 x 16 key in order to accommodate a wider range of plaintexts. For further security enhancement, it is combined with the VMPC algorithm, which is also a symmetric algorithm. The combination of the two algorithms is called super-encryption. At this point, the data can be stored on a mobile phone, allowing users to secure data flexibly and access it anywhere. The PDF document security application in this research is built on the Android platform. This study also calculates the complexity of the algorithms and the processing time. Based on the test results, the complexity is θ(n) for the Two Square Cipher and θ(n) for the VMPC algorithm, so the complexity of the super-encryption is also θ(n). The VMPC algorithm's processing time is quicker than that of the Two Square Cipher, and the processing time is directly proportional to the length of the plaintext and passwords.

  15. The (related-key) impossible boomerang attack and its application to the AES block cipher

    NARCIS (Netherlands)

    Lu, J.

    2011-01-01

    The Advanced Encryption Standard (AES) is a 128-bit block cipher with a user key of 128, 192 or 256 bits, released by NIST in 2001 as the next-generation data encryption standard for use in the USA. It was adopted as an ISO international standard in 2005. Impossible differential cryptanalysis and

  16. Results from the coded aperture neutron imaging system

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging - a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded masks patterns and detectors including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  17. 2-Step scalar deadzone quantization for bitplane image coding.

    Science.gov (United States)

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
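
    For reference, the USDQ baseline that 2SDQ modifies (by switching between two step sizes according to coefficient density) looks like the following sketch; it is not the paper's 2SDQ implementation:

      import numpy as np

      def usdq_quantize(coeffs, step):
          # Uniform scalar deadzone quantizer: the zero bin is twice as wide
          # as the others, and bitplane coding refines the indices progressively.
          return (np.sign(coeffs) * np.floor(np.abs(coeffs) / step)).astype(int)

      def usdq_dequantize(indices, step, delta=0.5):
          # Mid-point reconstruction; index 0 maps back to 0.
          return np.sign(indices) * (np.abs(indices) + delta) * step * (indices != 0)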

  18. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  19. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  20. Pseudo real-time coded aperture imaging system with intensified vidicon cameras

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.

    1977-01-01

    A coded image displayed on a TV monitor was used to directly reconstruct a decoded image. Both the coded and the decoded images were viewed with intensified vidicon cameras. The coded aperture was a 15-element nonredundant pinhole array. The coding and decoding were accomplished simultaneously during the scanning of a single 16-msec TV frame

  1. Fast-neutron, coded-aperture imager

    Science.gov (United States)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  2. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.

  3. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  4. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    Science.gov (United States)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is firstly transformed into the corresponding QR code, and then the corresponding QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone collected results show that the method is feasible and has strong tolerance to noise, phase difference and ratio between intensities of the two decryption light beams.

  5. An efficient fractal image coding algorithm using unified feature and DCT

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2009-01-01

    Fractal image compression is a promising technique to improve the efficiency of image storage and image transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical applications. In order to improve fractal image coding, efficient fractal image coding algorithms using a special unified feature and a DCT coder are proposed in this paper. Firstly, based on a necessary condition for the best-matching search rule during fractal image coding, a fast algorithm using a special unified feature (UFC) is addressed; it can reduce the search space significantly and exclude most inappropriate matching subblocks before the best-matching search. Secondly, on the basis of the UFC algorithm, in order to improve the quality of the reconstructed image, a DCT coder is combined to construct a hybrid fractal image algorithm (DUFC). Experimental results show that the proposed algorithms can obtain good quality of the reconstructed images and need much less time than the baseline fractal coding algorithm.
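
    The search-space pruning idea can be illustrated with a cheap per-block feature used as an exclusion test before the full match; the feature chosen here (block standard deviation) is a placeholder, not the paper's unified feature:

      import numpy as np

      def best_match(range_block, domain_blocks, tol=5.0):
          # Skip domain blocks whose feature differs too much from the range
          # block before doing the expensive full comparison.
          f_r = range_block.std()
          best_i, best_err = None, np.inf
          for i, d in enumerate(domain_blocks):
              if abs(d.std() - f_r) > tol:           # cheap exclusion test
                  continue
              err = ((d - range_block) ** 2).sum()   # full matching error
              if err < best_err:
                  best_i, best_err = i, err
          return best_i, best_err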

  6. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made for improvement of secret image invisibility in circulation images with dyadic wavelet based data hiding with run-length coded secret images of which location of codes are determined by random number. Through experiments, it is confirmed that secret images are almost invisible in circulation images. Also robustness of the proposed data hiding method against data compression of circulation images is discussed. Data hiding performance in terms of invisibility of secret images...

  7. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims

  8. Collaborative Image Coding and Transmission over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Min Wu

    2007-01-01

    Full Text Available Imaging sensors are able to provide intuitive visual information for quick recognition and decision making. However, imaging sensors usually generate vast amounts of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy-efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape matching method to coarsely register images and find the maximal overlap, exploiting the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit the background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the target regions and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission.
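
    A minimal sketch of the transmit-only-the-target idea (the threshold, helper names and single-bounding-box simplification are assumptions, not the paper's method):

      import numpy as np

      def extract_target(frame, background, thresh=20):
          # Lightweight background subtraction: report the bounding box of the
          # changed region and the pixels inside it; only these are transmitted.
          mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
          if not mask.any():
              return None
          ys, xs = np.where(mask)
          y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
          return (y0, x0), frame[y0:y1, x0:x1]

      def reconstruct(background, location, patch):
          # Receiver side: fuse the stored background with the target patch.
          out = background.copy()
          y0, x0 = location
          out[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]] = patch
          return out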

  9. Coded aperture imaging system for nuclear fuel motion detection

    International Nuclear Information System (INIS)

    Stalker, K.T.; Kelly, J.G.

    1980-01-01

    A Coded Aperture Imaging System (CAIS) has been developed at Sandia National Laboratories to image the motion of nuclear fuel rods undergoing tests simulating accident conditions within a liquid metal fast breeder reactor. The tests require that the motion of the test fuel be monitored while it is immersed in a liquid sodium coolant, precluding the use of normal optical means of imaging. However, using the fission gamma rays emitted by the fuel itself and coded aperture techniques, images with 1.5 mm radial and 5 mm axial resolution have been attained. Using an electro-optical detection system coupled to a high speed motion picture camera, a time resolution of one millisecond can be achieved. This paper will discuss the application of coded aperture imaging to the problem, including the design of the one-dimensional Fresnel zone plate apertures used and the special problems arising from the reactor environment and use of high energy gamma ray photons to form the coded image. Also to be discussed will be the reconstruction techniques employed and the effect of various noise sources on system performance. Finally, some experimental results obtained using the system will be presented

  10. Attacking 44 Rounds of the SHACAL-2 Block Cipher Using Related-Key Rectangle Cryptanalysis

    Science.gov (United States)

    Lu, Jiqiang; Kim, Jongsung

    SHACAL-2 is a 64-round block cipher with a 256-bit block size and a variable length key of up to 512 bits. It is a NESSIE selected block cipher algorithm. In this paper, we observe that, when checking whether a candidate quartet is useful in a (related-key) rectangle attack, we can check the two pairs from the quartet one after the other, instead of checking them simultaneously; if the first pair does not meet the expected conditions, we can discard the quartet immediately. We next exploit a 35-round related-key rectangle distinguisher with probability 2^-460 for the first 35 rounds of SHACAL-2, which is built on an existing 24-round related-key differential and a new 10-round differential. Finally, taking advantage of the above observation, we use the distinguisher to mount a related-key rectangle attack on the first 44 rounds of SHACAL-2. The attack requires 2^233 related-key chosen plaintexts, and has a time complexity of 2^497.2 computations. This is better than any previously published cryptanalytic results on SHACAL-2 in terms of the numbers of attacked rounds.
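
    The filtering observation translates directly into code: test the two pairs of each candidate quartet sequentially and stop at the first failure. The predicate pair_ok below is a hypothetical stand-in for the differential conditions checked in the attack:

      def useful_quartets(quartets, pair_ok):
          # Yield only quartets whose two pairs both satisfy the expected
          # conditions; the second pair is never examined if the first fails.
          for p1, p2, p3, p4 in quartets:
              if not pair_ok(p1, p3):
                  continue                 # early discard saves work
              if pair_ok(p2, p4):
                  yield (p1, p2, p3, p4)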

  11. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographers Expert Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  12. Performance of JPEG Image Transmission Using Proposed Asymmetric Turbo Code

    Directory of Open Access Journals (Sweden)

    Siddiqi Mohammad Umar

    2007-01-01

    Full Text Available This paper gives the results of a simulation study on the performance of JPEG image transmission over AWGN and Rayleigh fading channels using typical and proposed asymmetric turbo codes for error control coding. The baseline JPEG algorithm is used to compress a QCIF ("Suzie") image. The recursive systematic convolutional (RSC) encoder with generator polynomials, that is, (13/11) in decimal, and 3G interleaver are used for the typical WCDMA and CDMA2000 turbo codes. The proposed asymmetric turbo code uses generator polynomials, that is, (13/11; 13/9) in decimal, and a code-matched interleaver. The effect of the interleaver in the proposed asymmetric turbo code is studied using weight distribution and simulation. The simulation results and performance bound for the proposed asymmetric turbo code for the given frame length and code rate with the Log-MAP decoder over the AWGN channel are compared with the typical system. From the simulation results, it is observed that the image transmission using the proposed asymmetric turbo code performs better than that with the typical system.

  13. Results from the Coded Aperture Neutron Imaging System (CANIS)

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Hilton, Nathan R.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well understood natural background. We are investigating a new approach to fast-neutron imaging- a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded masks patterns and detectors including an 'active' mask that is composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  14. Improved cryptanalysis of the block cipher KASUMI

    DEFF Research Database (Denmark)

    Jia, Keting; Li, Leibo; Rechberger, Christian

    2013-01-01

    KASUMI is a block cipher which consists of eight Feistel rounds with a 128-bit key. Proposed more than 10 years ago, the confidentiality and integrity of 3G mobile communications systems depend on the security of KASUMI. In the practically interesting single key setting, only up to 6 rounds have...... been attacked so far. In this paper we use some observations on the FL and FO functions. Combining these observations with a key schedule weakness, we select some special input and output values to refine the general 5-round impossible differentials and propose the first 7-round attack on KASUMI...... with time and data complexities similar to the previously best 6-round attacks. This leaves now only a single round of security margin. The new impossible differential attack on the last 7 rounds needs 2^114.3 encryptions with 2^52.5 chosen plaintexts. For the attack on the first 7 rounds, the data complexity

  15. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper presents a lossless image compression method based on the multiple-tables arithmetic coding (MTAC) method to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
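
    The MED predictor used in the first stage is the same simple rule known from JPEG-LS; a sketch (border handling simplified) follows, while the base-switching transform and the multiple-table arithmetic coder are not reproduced here:

      import numpy as np

      def med_predict(a, b, c):
          # a = left, b = above, c = upper-left neighbour of the current pixel.
          if c >= max(a, b):
              return min(a, b)
          if c <= min(a, b):
              return max(a, b)
          return a + b - c

      def med_residuals(img):
          # Prediction residuals whose reduced entropy the coder then exploits.
          h, w = img.shape
          res = np.zeros((h, w), dtype=int)
          for y in range(1, h):
              for x in range(1, w):
                  pred = med_predict(int(img[y, x - 1]),
                                     int(img[y - 1, x]),
                                     int(img[y - 1, x - 1]))
                  res[y, x] = int(img[y, x]) - pred
          return res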

  16. Correlated statistical uncertainties in coded-aperture imaging

    International Nuclear Information System (INIS)

    Fleenor, Matthew C.; Blackston, Matthew A.; Ziock, Klaus P.

    2015-01-01

    In nuclear security applications, coded-aperture imagers can provide a wealth of information regarding the attributes of both the radioactive and nonradioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address a deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explored how statistics-based alarming is impacted in a radiological search scenario

  17. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template...... as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...

  18. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion is briefly described. The authors take the ring aperture microscope as an example and use the back-projection (BP) algorithm to decode the coded image. The decoding program has been implemented for numerical simulation. Simulations of two models were made, and the results show that the accuracy of the BP algorithm is high and the reconstruction quality is good. This indicates that the BP algorithm is applicable to decoding coded images in ICF experiments

  19. Information retrieval based on single-pixel optical imaging with quick-response code

    Science.gov (United States)

    Xiao, Yin; Chen, Wen

    2018-04-01

    Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (in QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is actually a post-processing step. Taking advantage of the high error correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images with different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger-size images, the time required to recover the original information with high quality will be dramatically reduced. Our method makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
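
    The traditional correlation reconstruction referred to above is compact enough to state directly; the Gerchberg-Saxton-like refinement and the QR decoding stages are not reproduced in this sketch:

      import numpy as np

      def ghost_image(patterns, bucket_values):
          # G = <B*S> - <B><S>, with S the illumination patterns (N, H, W)
          # and B the single-pixel "bucket" measurements (N,).
          S = np.asarray(patterns, dtype=float)
          B = np.asarray(bucket_values, dtype=float)
          return (B[:, None, None] * S).mean(axis=0) - B.mean() * S.mean(axis=0)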

  20. The SKINNY Family of Block Ciphers and Its Low-Latency Variant MANTIS

    DEFF Research Database (Denmark)

    Beierle, Christof; Jean, Jérémy; Kölbl, Stefan

    2016-01-01

    We present a new tweakable block cipher family SKINNY, whose goal is to compete with the NSA's recent design SIMON in terms of hardware/software performance, while in addition proving much stronger security guarantees with regard to differential/linear attacks. In particular, unlike SIMON, we are able...... to provide strong bounds for all versions, and not only in the single-key model, but also in the related-key or related-tweak model. SKINNY has flexible block/key/tweak sizes and can also benefit from very efficient threshold implementations for side-channel protection. Regarding performance, it outperforms

  1. Zone-plate coded imaging of thermonuclear burn

    International Nuclear Information System (INIS)

    Ceglio, N.M.

    1978-01-01

    The first high-resolution, direct images of the region of thermonuclear burn in laser fusion experiments have been produced using a novel, two-step imaging technique called zone-plate coded imaging. This technique is extremely versatile and well suited for the microscopy of laser fusion targets. It has a tomographic capability, which provides three-dimensional images of the source distribution. It is equally useful for imaging x-ray and particle emissions. Since this technique is much more sensitive than competing imaging techniques, it permits us to investigate low-intensity sources

  2. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
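
    A sketch of the encoder-side measurement step under stated assumptions (kernel size, seed handling and normalization are placeholders, not the authors' codec):

      import numpy as np

      def encode(img, factor=2, ksize=4, seed=0):
          # Replace the usual anti-alias low-pass filter with a local random
          # +/-1 binary convolution, then polyphase down-sample; the result is
          # still an ordinary (smaller) image that any standard codec can code.
          rng = np.random.default_rng(seed)
          k = (rng.integers(0, 2, size=(ksize, ksize)) * 2 - 1) / ksize
          h, w = img.shape
          filtered = np.zeros((h, w))
          for y in range(h - ksize + 1):
              for x in range(w - ksize + 1):
                  filtered[y, x] = (img[y:y + ksize, x:x + ksize] * k).sum()
          return filtered[::factor, ::factor]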

  3. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-06-01

    This thesis presents novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo random number generators. Digital implementation of chaotic systems results in serious degradations in the dynamics of the system. Such defects are eliminated through a new technique of generalized post-processing with very low hardware cost. The thesis further discusses two encryption algorithms designed and implemented as a block cipher and a stream cipher. The security of both systems is thoroughly analyzed and the performance is compared with other reported systems, showing superior results. Both systems are realized on a Xilinx Virtex-4 FPGA with hardware and throughput performance surpassing known encryption systems.

  4. Scintillator Based Coded-Aperture Imaging for Neutron Detection

    International Nuclear Information System (INIS)

    Hayes, Sean-C.; Gamage, Kelum-A-A.

    2013-06-01

    In this paper we are going to assess the variations of neutron images using a series of Monte Carlo simulations. We are going to study neutron images of the same neutron source with different source locations, using a scintillator based coded-aperture system. The Monte Carlo simulations have been conducted making use of the EJ-426 neutron scintillator detector. This type of detector has a low sensitivity to gamma rays and is therefore of particular use in a system with a source that emits a mixed radiation field. From the use of different source locations, several neutron images have been produced, compared both qualitatively and quantitatively for each case. This allows conclusions to be drawn on how suited the scintillator based coded-aperture neutron imaging system is to detecting various neutron source locations. This type of neutron imaging system can be easily used to identify and locate nuclear materials precisely. (authors)

  5. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general...... to the specialized soft pattern matching techniques which work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless......, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images....

  6. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

    Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this can result in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method yields superior performance over the other three methods compared. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which provides higher PSNR than JPEG while maintaining the same bpp.
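
    As a concrete illustration of the fitting-surface idea, the sketch below fits a low-order surface to an image block by least squares; the abstract does not give the exact surface model, so a bilinear surface z = a + b*x + c*y + d*x*y is assumed here.

      # Sketch: least-squares fit of an assumed bilinear surface to a block.
      import numpy as np

      def fit_surface(block):
          h, w = block.shape
          y, x = np.mgrid[0:h, 0:w]
          A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel(), (x * y).ravel()])
          coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
          surface = (A @ coeffs).reshape(h, w)
          return coeffs, surface

      # In a no-search fractal coder the same kind of surface would be removed from
      # both the range block and the contracted domain block before matching, so a
      # single set of surface parameters can serve both blocks.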

  7. Hybrid coded aperture and Compton imaging using an active mask

    International Nuclear Information System (INIS)

    Schultz, L.J.; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M.; Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E.; Wakeford, D.; Lanza, R.C.; Horn, B.K.P.; Wehe, D.K.

    2009-01-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  8. Practicing the Code of Ethics, finding the image of God.

    Science.gov (United States)

    Hoglund, Barbara A

    2013-01-01

    The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.

  9. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226 Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226 Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the de-coded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
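
    A generic balanced-correlation decoder is sketched below as an illustration of how a coded shadow recorded behind a cyclic difference-set mask can be decoded; this is a common textbook decoder, not necessarily the exact algorithm used in the study, and it assumes the shadow and the mask share the same cyclic period.

      # Sketch: balanced cross-correlation decoding of a coded-aperture shadow.
      import numpy as np

      def decode(shadow, mask):
          """shadow: 2D histogram of detected track positions;
             mask:   binary aperture pattern (1 = open), 0 < open fraction < 1."""
          rho = mask.mean()                                  # open fraction
          g = np.where(mask == 1, 1.0, -rho / (1.0 - rho))   # balanced decoding array
          # Periodic cross-correlation via the FFT gives the source estimate.
          return np.real(np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(g))))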

  10. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is presented with respect to noise tolerance.

  11. Key-Alternating Ciphers in a Provable Setting: Encryption Using a Small Number of Public Permutations (Extended Abstract)

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knudsen, L.R.; Leander, Gregor

    2012-01-01

    show that the distribution of Fourier coefficients for the cipher over all keys is close to ideal. Lastly, we define a practical instance of the construction with t = 2 using AES referred to as AES2. Any attack on AES2 with complexity below 2^85 will have to make use of AES with a fixed known key...

  12. A Method for Improving the Progressive Image Coding Algorithms

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2014-12-01

    Full Text Available This article presents a method for increasing the performance of the progressive coding algorithms for the subbands of images, by representing the coefficients with a code that reduces the truncation error.

  13. Progressive Coding of Palette Images and Digital Maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salinas, J. Martin

    2002-01-01

    image layer. The resolution is increased by a factor of 2 in each step. The 2D PPM coding is applied to palette images and street maps. The sequential results are comparable to PWC. The PPM results are a little better for the palette images with few colors (up to 4-5 bpp) and a little worse...

  14. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  15. JJ1017 committee report: image examination order codes--standardized codes for imaging modality, region, and direction with local expansion: an extension of DICOM.

    Science.gov (United States)

    Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Tani, Shigeki; Sakusabe, Takaya; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun

    2002-06-01

    The digital imaging and communications in medicine (DICOM) standard includes parts regarding nonimage data information, such as image study ordering data and performed procedure data, and is used for sharing information between HIS/RIS and modality systems, which is essential for IHE. To bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS established the JJ1017 management guideline, specifying, for example, which items are legally required in Japan, while remaining optional in the DICOM standard. In Japan, the contents of orders from referring physicians for radiographic examinations include details of the examination. Such details are not used typically by referring physicians requesting radiographic examinations in the United States, because radiologists in the United States often determine the examination protocol. The DICOM standard has code tables for examination type, region, and direction for image examination orders. However, this investigation found that it does not include items that are detailed sufficiently for use in Japan, because of the above-mentioned reason. To overcome these drawbacks, we have generated the JJ1017 code for these 3 codes for use based on the JJ1017 guidelines. This report introduces the JJ1017 code. These codes (the study type codes in particular) must be expandable to keep up with technical advances in equipment. Expansion has 2 directions: width for covering more categories and depth for specifying the information in more detail (finer categories). The JJ1017 code takes these requirements into consideration and clearly distinguishes between the stem part as the common term and the expansion. The stem part of the JJ1017 code partially utilizes the DICOM codes to remain in line with the DICOM standard. This work is an example of how local requirements can be met by using the DICOM standard and extending it.

  16. Design of an image encryption scheme based on a multiple chaotic map

    Science.gov (United States)

    Tong, Xiao-Jun

    2013-07-01

    To address the degeneration of chaos under limited computer precision and the small key space of the Cat map, this paper presents a chaotic map based on topological conjugacy whose chaotic characteristics are proved by the Devaney definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multi-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the low-precision problem of one-dimensional chaotic functions and has higher speed and higher security.

  17. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, F. N.; Hu, H. S., E-mail: huasi-hu@mail.xjtu.edu.cn; Wang, D. M.; Jia, J. [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Zhang, T. K. [Laser Fusion Research Center, CAEP, Mianyang, 621900 Sichuan (China); Jia, Q. G. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China)

    2015-12-15

    Aperture alignment is crucial for neutron imaging diagnostics because it has a significant impact on the coded imaging and on the understanding of the neutron source. In our previous studies on a coded-aperture neutron imaging system with a large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered if peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements of the reconstruction algorithm for its emergence. A gamma coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source has been set up with twenty training samples. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.
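
    The kind of multiple linear regression described above can be illustrated as follows; the variable names and the exact model form are assumptions rather than details taken from the paper.

      # Sketch: predict the coded-aperture axis position from the residual-watermark
      # centre and the source grey barycentre using least squares.
      import numpy as np

      def fit_axis_model(watermark_xy, barycenter_xy, axis_xy):
          """Each argument is an (N, 2) array over N training samples."""
          X = np.column_stack([np.ones(len(axis_xy)), watermark_xy, barycenter_xy])
          beta, *_ = np.linalg.lstsq(X, axis_xy, rcond=None)   # (5, 2) coefficients
          return beta

      def predict_axis(beta, watermark_xy, barycenter_xy):
          X = np.column_stack([np.ones(len(watermark_xy)), watermark_xy, barycenter_xy])
          return X @ beta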

  18. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by simple pairwise graphs, which ignore higher-order relationships. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher-order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  19. Application and Misapplication of the Czechoslovak STP Cipher During WWII – Report on an Unpublished Manuscript

    Czech Academy of Sciences Publication Activity Database

    Porubský, Štefan

    2017-01-01

    Roč. 70, č. 1 (2017), s. 41-91 ISSN 1210-3195 Institutional support: RVO:67985807 Keywords : STP cipher * Josef Růžek * Karol Cigáň * František Moravec * Czechoslovak military cryptography * World War II Subject RIV: BA - General Mathematics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) https://tatra.mat.savba.sk/paper.php?id_paper=1412

  20. A new coding concept for fast ultrasound imaging using pulse trains

    DEFF Research Database (Denmark)

    Misaridis, T.; Jensen, Jørgen Arendt

    2002-01-01

    Frame rate in ultrasound imaging can be increased by simultaneous transmission of multiple beams using coded waveforms. However, the achievable degree of orthogonality among coded waveforms is limited in ultrasound, and the image quality degrades unacceptably due to interbeam interference....... In this paper, an alternative combined time-space coding approach is undertaken. In the new method all transducer elements are excited with short pulses and the high time-bandwidth (TB) product waveforms are generated acoustically. Each element transmits a short pulse spherical wave with a constant transmit...... delay from element to element, long enough to assure no pulse overlapping for all depths in the image. Frequency shift keying is used for "per element" coding. The received signals from a point scatterer are staggered pulse trains which are beamformed for all beam directions and further processed...

  1. Joint Schemes for Physical Layer Security and Error Correction

    Science.gov (United States)

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…

  2. 110 °C range athermalization of wavefront coding infrared imaging systems

    Science.gov (United States)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    Athermalization over a 110 °C temperature range is important but difficult to achieve when designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with fewer manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments prove that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth up to 15.2 times; (3) it achieves a decoded image that approximates its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.

  3. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  4. Radionuclide imaging with coded apertures and three-dimensional image reconstruction from focal-plane tomography

    International Nuclear Information System (INIS)

    Chang, L.T.

    1976-05-01

    Two techniques for radionuclide imaging and reconstruction have been studied; both are used for improvement of depth resolution. The first technique is called coded aperture imaging, which is a technique of tomographic imaging. The second technique is a special 3-D image reconstruction method which is introduced as an improvement to so-called focal-plane tomography

  5. Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM

    Energy Technology Data Exchange (ETDEWEB)

    Zacek, Petr; Jasek, Roman; Malanik, David [Faculty of applied Informatics, Tomas Bata University in Zlin, Zlin, Czech Republic zacek@fai.utb.cz, jasek@fai.utb.cz, dmalanik@fai.utb.cz (Czech Republic)

    2016-06-08

    This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM, one of the possible variants of the general PM mode. The design of PM-DC-LM itself is not discussed, only the CPRNG that forms part of it, because the design is described in other papers. Possible ways to change or improve the CPRNG are mentioned. The final part is devoted to brief testing of the CPRNG, and some test data are shown.

  6. Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM

    Science.gov (United States)

    Zacek, Petr; Jasek, Roman; Malanik, David

    2016-06-01

    This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM, one of the possible variants of the general PM mode. The design of PM-DC-LM itself is not discussed, only the CPRNG that forms part of it, because the design is described in other papers. Possible ways to change or improve the CPRNG are mentioned. The final part is devoted to brief testing of the CPRNG, and some test data are shown.

  7. Possibilities and testing of CPRNG in block cipher mode of operation PM-DC-LM

    International Nuclear Information System (INIS)

    Zacek, Petr; Jasek, Roman; Malanik, David

    2016-01-01

    This paper discusses the chaotic pseudo-random number generator (CPRNG) used in the block cipher mode of operation called PM-DC-LM, one of the possible variants of the general PM mode. The design of PM-DC-LM itself is not discussed, only the CPRNG that forms part of it, because the design is described in other papers. Possible ways to change or improve the CPRNG are mentioned. The final part is devoted to brief testing of the CPRNG, and some test data are shown.

  8. Highly parallel line-based image coding for many cores.

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Zhou, You; Wu, Feng

    2012-01-01

    Computers are evolving from dual-core and quad-core processors to ones with tens or even hundreds of cores. Multimedia, as one of the most important applications on computers, has an urgent need for parallel coding algorithms for compression. Taking intraframe/image coding as a starting point, this paper proposes a pure line-by-line coding scheme (LBLC) to meet this need. In LBLC, an input image is processed line by line sequentially, and each line is divided into small fixed-length segments. The compression of all segments, from prediction to entropy coding, is completely independent and concurrent across many cores. Results on a general-purpose computer show that our scheme achieves a 13.9-times speedup with 15 cores at the encoder and a 10.3-times speedup at the decoder. Ideally, such a near-linear speedup with the number of cores can be maintained for more than 100 cores. In addition to the high parallelism, the proposed scheme performs comparably to or even better than the H.264 high profile above middle bit rates. At near-lossless coding, it outperforms H.264 by more than 10 dB. At lossless coding, up to 14% bit-rate reduction is observed compared with H.264 lossless coding at the High 4:4:4 profile.

  9. Comparisons of coded aperture imaging using various apertures and decoding methods

    International Nuclear Information System (INIS)

    Chang, L.T.; Macdonald, B.; Perez-Mendez, V.

    1976-07-01

    The utility of coded aperture γ camera imaging of radioisotope distributions in Nuclear Medicine is in its ability to give depth information about a three dimensional source. We have calculated imaging with Fresnel zone plate and multiple pinhole apertures to produce coded shadows and reconstruction of these shadows using correlation, Fresnel diffraction, and Fourier transform deconvolution. Comparisons of the coded apertures and decoding methods are made by evaluating their point response functions both for in-focus and out-of-focus image planes. Background averages and standard deviations were calculated. In some cases, background subtraction was made using combinations of two complementary apertures. Results using deconvolution reconstruction for finite numbers of events are also given

  10. Dragon Stream Cipher for Secure Blackbox Cockpit Voice Recorder

    Science.gov (United States)

    Akmal, Fadira; Michrandi Nasution, Surya; Azmi, Fairuz

    2017-11-01

    An aircraft blackbox is a device used to record all aircraft information; it consists of the Flight Data Recorder (FDR) and the Cockpit Voice Recorder (CVR). The Cockpit Voice Recorder contains the conversations in the aircraft during the flight. Investigations of aircraft crashes usually take a long time because it is difficult to find the blackbox, so the blackbox should be able to send its information elsewhere. This requires a data security system, since data security is essential during the information exchange process. The system in this research performs encryption and decryption of Cockpit Voice Recorder data, restricted to authorized people, using the Dragon stream cipher algorithm. The tests performed measure data encryption and decryption time and the avalanche effect. The results show encryption and decryption times of 0.85 and 1.84 seconds for 30 seconds of Cockpit Voice Recorder data, with an avalanche effect of 48.67%.
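
    For context, an avalanche-effect figure like the one quoted above can be measured as sketched below: flip one plaintext bit, re-encrypt, and count the fraction of ciphertext bits that change. The encrypt argument is a placeholder for any cipher under test; the Dragon cipher itself is not implemented here.

      # Sketch: avalanche effect of a generic byte-oriented cipher.
      def avalanche_effect(encrypt, key, plaintext: bytes, bit_index: int = 0) -> float:
          flipped = bytearray(plaintext)
          flipped[bit_index // 8] ^= 1 << (bit_index % 8)     # flip one plaintext bit
          c1 = encrypt(key, plaintext)
          c2 = encrypt(key, bytes(flipped))
          changed = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
          return changed / (8 * len(c1))                      # fraction of bits changed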

  11. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While WDCT aims to improve the performance of the conventional DCT by frequency warping, the WDCT has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding followed by upsampling after the decoding process has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that superior performance can be achieved if WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.

  12. A multiplex coding imaging spectrometer for X-ray astronomy

    International Nuclear Information System (INIS)

    Rocchia, R.; Deschamps, J.Y.; Koch-Miramond, L.; Tarrius, A.

    1985-06-01

    The paper describes a multiplex coding system associated with a solid state spectrometer Si(Li) designed to be placed at the focus of a grazing incidence telescope. In this instrument the spectrometric and imaging functions are separated. The coding system consists of a movable mask with pseudo-randomly distributed holes, located in the focal plane of the telescope. The pixel size lies in the range 100-200 microns. The close association of the coding system with a Si(Li) detector gives an imaging spectrometer combining the good efficiency (50% between 0.5 and 10 keV) and energy resolution (ΔE approximately 90 to 160 eV) of solid state spectrometers with the spatial resolution of the mask. Simulations and results obtained with a laboratory model are presented

  13. Modern Cryptanalysis Techniques for Advanced Code Breaking

    CERN Document Server

    Swenson, Christopher

    2008-01-01

    As an instructor at the University of Tulsa, Christopher Swenson could find no relevant text for teaching modern cryptanalysis, so he wrote his own. This is the first book that brings the study of cryptanalysis into the 21st century. Swenson provides a foundation in traditional cryptanalysis, examines ciphers based on number theory, explores block ciphers, and teaches the basis of all modern cryptanalysis: linear and differential cryptanalysis. This time-honored weapon of warfare has become a key piece of artillery in the battle for information security.

  14. Chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Guan Zhihong; Huang Fangjun; Guan Wenjie

    2005-01-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. Firstly, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial-domain. Then the discrete output signal of the Chen's chaotic system is preprocessed to be suitable for the grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist the brute-force attack and the distribution of grey values of the encrypted image has a random-like behavior
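
    The two stages described above can be sketched as follows; a logistic map stands in for Chen's system, the byte quantisation of the chaotic output is an assumption, and an 8-bit square greyscale image is assumed.

      # Sketch: Arnold cat map position shuffling followed by pixel-wise masking
      # with a chaotic key stream.
      import numpy as np

      def arnold_cat(image, iterations=1):
          n = image.shape[0]                      # square N x N image assumed
          out = image.copy()
          for _ in range(iterations):
              x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
              out = out[(x + y) % n, (x + 2 * y) % n]
          return out

      def chaotic_stream(length, x0=0.3456, mu=3.99):
          x, stream = x0, np.empty(length, dtype=np.uint8)
          for i in range(length):
              x = mu * x * (1 - x)                # logistic map iteration
              stream[i] = int(x * 256) % 256      # quantise to one byte
          return stream

      def encrypt(image, x0=0.3456):
          shuffled = arnold_cat(image.astype(np.uint8), iterations=3)
          key = chaotic_stream(shuffled.size, x0).reshape(shuffled.shape)
          return shuffled ^ key                   # substitution, pixel by pixel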

  15. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  16. Alignment effects on a neutron imaging system using coded apertures

    International Nuclear Information System (INIS)

    Thfoin, Isabelle; Landoas, Olivier; Caillaud, Tony; Vincent, Maxime; Bourgade, Jean-Luc; Rosse, Bertrand; Disdier, Laurent; Sangster, Thomas C.; Glebov, Vladimir Yu.; Pien, Greg; Armstrong, William

    2010-01-01

    A high resolution neutron imaging system is being developed and tested on the OMEGA laser facility for inertial confinement fusion experiments. This diagnostic uses a coded imaging technique with a penumbral or an annular aperture. The sensitivity of these techniques to misalignment was demonstrated with both experiments and simulations. Results obtained during OMEGA shots are in good agreement with calculations performed with the Monte Carlo code GEANT4. Both techniques are sensitive to the relative position of the source in the field of view. The penumbral imaging technique proves to be less sensitive to misalignment than the annular (ring) aperture. These results show the necessity of developing a neutron imaging diagnostic for megajoule-class lasers that takes into account the alignment capabilities of such facilities.

  17. Study on 3D gamma-ray imaging for medical diagnosis with coded aperture

    International Nuclear Information System (INIS)

    Horiki, Kazunari; Shimazoe, Kenji; Ohno, Masashi; Takahashi, Hiroyuki; Kobashi, Keiji; Moro, Eiji

    2014-01-01

    Conventional methods for medical imaging have several disadvantages, such as restrictions on the energy and on detection efficiency. Coded aperture imaging can be used for medical imaging without energy restrictions, which makes it possible to use multiple tracers in diagnosis. The detection efficiency of coded aperture imaging is ten times better than that of a pinhole collimator. First, simulations of coded aperture imaging were performed to confirm the effectiveness of the M-array. Second, two experiments were carried out, one with low-energy gamma rays (122 keV, 57Co) and one with high-energy gamma rays (662 keV, 137Cs). In both cases the reconstructed image was successfully acquired. The spatial resolution measured in the 57Co experiment is 4.3 mm (FWHM). (author)

  18. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission. First, the edge direction in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, so it is well suited to real-time image compression applications.
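
    For reference, the AMBTC building block mentioned above can be sketched as follows: each block is represented by a bitmap plus two reconstruction levels, and the MSE between a block and its AMBTC reconstruction is the quantity that drives the adaptive bit-plane decision. The quadtree split and the Sobel-based direction detection of the proposed method are not reproduced here.

      # Sketch: absolute moment block truncation coding (AMBTC) of one block.
      import numpy as np

      def ambtc_encode(block):
          m = block.mean()
          bitmap = block >= m
          hi = block[bitmap].mean() if bitmap.any() else m
          lo = block[~bitmap].mean() if (~bitmap).any() else m
          return bitmap, lo, hi

      def ambtc_decode(bitmap, lo, hi):
          return np.where(bitmap, hi, lo)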

  19. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology

    Directory of Open Access Journals (Sweden)

    Shuo Chen

    2018-01-01

    Full Text Available As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then synthesized together to obtain the complete high-resolution image. For each imaging cell, the multi-resolution imaging method helps to reduce the computational burden of a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.

  20. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only intra-frame information is used to reconstruct an image with isolated damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors using neural network models. Computer simulation results show that the visual quality and the MSE of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  1. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless codingof bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template.For better compression, the more general....... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG......-2, an emerging international standard for lossless/lossy compression of bi-level images....

  2. Interactive QR code beautification with full background image embedding

    Science.gov (United States)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes have been widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that comply with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to present visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and can thus achieve a more pleasing result while keeping high machine readability.

  3. Implementational Aspects of the Contourlet Filter Bank and Application in Image Coding

    Directory of Open Access Journals (Sweden)

    Truong T. Nguyen

    2009-02-01

    Full Text Available This paper analyzed the implementational aspects of the contourlet filter bank, or pyramidal directional filter bank (PDFB), and considered its application in image coding. First, details of the binary tree-structured directional filter bank (DFB) are presented, including a modification to minimize the phase delay factor and necessary steps for handling rectangular images. The PDFB is viewed as an overcomplete filter bank, and the directional filters are expressed in terms of polyphase components of the pyramidal filter bank and the conventional DFB. The aliasing effect of the conventional DFB and the Laplacian pyramid on the directional filters is then considered, and the conditions for reducing this effect are presented. The new filters obtained by redesigning the PDFBs to satisfy these requirements have much better frequency responses. A hybrid multiscale filter bank consisting of the PDFB at higher scales and the traditional maximally decimated wavelet filter bank at lower scales is constructed to provide a sparse image representation. A novel embedded image coding system based on the image decomposition and a morphological dilation algorithm is then presented. The coding algorithm efficiently clusters the significant coefficients using progressive morphological operations. Context models for arithmetic coding are designed to exploit the intraband dependency and the correlation existing among the neighboring directional subbands. Experimental results show that the proposed coding algorithm outperforms the current state-of-the-art wavelet-based coders, such as JPEG2000, for images with directional features.

  4. Cryptanalysis of a chaos block cipher for wireless sensor network

    Science.gov (United States)

    Yang, Jiyun; Xiao, Di; Xiang, Tao

    2011-02-01

    Based on the analysis of a chaos block cipher for wireless sensor networks (WSN), it is found that there is a fatal flaw in its security because the number of rounds is too small and the calculation precision of the round function is too low. The scheme can be cryptanalyzed by utilizing differential cryptanalysis theory. First, the third round key is recovered by a chosen plaintext attack according to the characteristics of the round function. Then, the second round key can be deduced from the relationship between the sub-keys of the second and third rounds. Based on the above successful attacks, the first round key can also be broken by brute-force attack. Finally, by employing the characteristics of the Feistel structure, the fourth round key can also be obtained. Since all round keys have been cryptanalyzed, the plaintext can then be decrypted. The encryption scheme is consequently proven to be insecure.

  5. Fractal image coding by an approximation of the collage error

    Science.gov (United States)

    Salih, Ismail; Smith, Stanley H.

    1998-12-01

    In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
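
    The relationship the collage-error approximation builds on, approximating a range block by a linear combination of domain blocks, can be sketched via least squares as below; this is illustrative only and assumes the domain blocks have already been contracted to the range-block size.

      # Sketch: least-squares fit of a range block by a combination of domain blocks.
      import numpy as np

      def collage_fit(range_block, domain_blocks):
          """range_block: (h, w) array; domain_blocks: list of (h, w) arrays."""
          D = np.column_stack([d.ravel().astype(float) for d in domain_blocks])
          r = range_block.ravel().astype(float)
          coeffs, *_ = np.linalg.lstsq(D, r, rcond=None)
          residual = r - D @ coeffs
          return coeffs, float(residual @ residual)   # squared collage error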

  6. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    Science.gov (United States)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summaryProgram title: DynWFC (Dynamic WaveFront Coding) Catalogue identifier: AEKC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 483 No. of bytes in distributed program, including test data, etc.: 2 437 713 Distribution format: tar.gz Programming language: Labview 8.5 and NI Vision and MinGW C Compiler Computer: Tested on PC Intel ® Pentium ® Operating system: Tested on Windows XP Classification: 18 Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator

  7. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  8. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B.; Heidrich, Wolfgang

    2014-01-01

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity.

  9. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix

    2014-10-17

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity.

  10. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code to identify fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is considered for generating the unique identification code for the digital fundus images. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of the patient identity from it for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Digital filtering and reconstruction of coded aperture images

    International Nuclear Information System (INIS)

    Tobin, K.W. Jr.

    1987-01-01

    The real-time neutron radiography facility at the University of Virginia has been used for both transmission radiography and computed tomography. Recently, a coded aperture system has been developed to permit the extraction of three-dimensional information from a low intensity field of radiation scattered by an extended object. Short-wavelength radiations (e.g. neutrons) are not easily imaged because of the difficulties in achieving diffraction and refraction with a conventional lens imaging system. By using a coded aperture approach, an imaging system has been developed that records and reconstructs an object from an intensity distribution. This system has a signal-to-noise ratio that is proportional to the total open area of the aperture, making it ideal for imaging with a limited-intensity radiation field. The main goal of this research was to develop and implement the digital methods and theory necessary for the reconstruction process. Several real-time video systems, attached to an Intellect-100 image processor, a DEC PDP-11 micro-computer, and a Convex-1 parallel processing mainframe, were employed. This system, coupled with theoretical extensions and improvements, allowed for retrieval of information previously unobtainable by earlier optical methods. The effects of thermal noise, shot noise, and aperture-related artifacts were examined so that new digital filtering techniques could be constructed and implemented. Results of image data filtering prior to and following the reconstruction process are reported. Improvements related to the different signal processing methods are emphasized. The application and advantages of this imaging technique in the field of non-destructive testing are also discussed

  12. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  13. The location and recognition of anti-counterfeiting code image with complex background

    Science.gov (United States)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods, effectively maintain the normal order of the market, and protect consumers' rights and interests. The anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, the paper proposes a locating method based on the SUSAN operator, combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For easily confused characters, recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.

  14. Designing an efficient LT-code with unequal error protection for image transmission

    Science.gov (United States)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites spans different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with a high bit rate, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and it was shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
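
    The following sketch illustrates plain LT encoding, with the degree drawn from the ideal soliton distribution and the selected source packets XOR-ed together; the unequal-error-protection variants discussed above additionally bias which packets are selected, and that bias is not shown. The function names and the use of the ideal rather than the robust soliton distribution are illustrative assumptions.

      # Sketch: generating one LT-code output symbol from k equal-length packets.
      import random
      from functools import reduce

      def ideal_soliton(k):
          # probs[d] is the probability of degree d; probs[0] is unused (zero).
          return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

      def lt_encode_symbol(source_packets, rng=random):
          k = len(source_packets)
          d = rng.choices(range(k + 1), weights=ideal_soliton(k), k=1)[0]
          chosen = rng.sample(range(k), d)
          payload = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                           (source_packets[i] for i in chosen))
          return chosen, payload   # the receiver needs the neighbour set (or its seed)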

  15. Clinical use and evaluation of coded excitation in B-mode images

    DEFF Research Database (Denmark)

    Misaridis, Athanasios; Pedersen, M. H.; Jensen, Jørgen Arendt

    2000-01-01

    Use of long encoded waveforms can be advantageous in ultrasound imaging, as long as the pulse compression mechanism ensures low range sidelobes and preserves both axial resolution and contrast. A coded excitation/compression scheme was previously presented by our group, which is based on a predistorted FM excitation and a mismatched compression filter designed for medical ultrasonic applications. The attenuation effect, analyzed in this paper using the ambiguity function and simulations, dictated the choice of the coded waveform. In this study clinical images, images of wire phantoms...... was programmed to allow alternating excitation on every second frame. That offers the possibility of direct comparison of the same set of image pairs; one with pulsed and one with encoded excitation. Abdominal clinical images from healthy volunteers were acquired and statistically analyzed by means of the auto

  16. Chaos-based image encryption algorithm [rapid communication

    Science.gov (United States)

    Guan, Zhi-Hong; Huang, Fangjun; Guan, Wenjie

    2005-10-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. Firstly, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial domain. Then the discrete output signal of Chen's chaotic system is preprocessed to be suitable for grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist the brute-force attack and the distribution of grey values of the encrypted image has a random-like behavior.
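
    A minimal sketch of the position-shuffling stage is shown below, applying the Arnold cat map to a square N x N array; the toy image and iteration count are illustrative, and the Chen-system diffusion stage of the scheme is not modelled.

    ```python
    # Arnold cat map position shuffling for an N x N grayscale image
    # (a sketch of the permutation stage only).
    import numpy as np

    def arnold_cat(img, iterations=1):
        """Shuffle pixel positions with (x, y) -> (x + y, x + 2y) mod N."""
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            shuffled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    shuffled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = shuffled
        return out

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8 x 8 "image"
    scrambled = arnold_cat(img, iterations=3)
    ```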

  17. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    Science.gov (United States)

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
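
    The scan-and-reconstruct principle can be illustrated with a one-dimensional toy analogue: a quadratic-residue URA of prime length p, a simulated bucket (photodiode) value for each lateral shift, and reconstruction by cyclic correlation with a balanced decoding array. This is only a hedged sketch of the general URA idea, not the authors' 20-nm two-dimensional implementation.

    ```python
    # 1-D toy analogue of the scanned coded-aperture measurement and its
    # correlation-based reconstruction, using a quadratic-residue URA.
    import numpy as np

    p = 31                                                 # prime, p % 4 == 3
    qr = {(i * i) % p for i in range(1, p)}
    A = np.array([1 if i in qr else 0 for i in range(p)])  # open (1) / closed (0)
    G = 2 * A - 1                                          # balanced decoding array
    k = A.sum()                                            # number of open elements

    rng = np.random.default_rng(0)
    obj = rng.integers(0, 256, p).astype(float)            # unknown 1-D intensity

    # Scanned-aperture measurement: for each lateral shift s, the photodiode
    # sums the intensity transmitted through the open elements.
    meas = np.array([np.sum(obj * np.roll(A, -s)) for s in range(p)])

    # Reconstruction by cyclic correlation with the decoding array, plus
    # removal of the flat (-1) sidelobe bias of the URA autocorrelation.
    recon = np.array([np.sum(meas * np.roll(G, -t)) for t in range(p)])
    total = meas.sum() / k                                 # sum of the object
    obj_rec = (recon + total) / (k + 1)

    assert np.allclose(obj_rec, obj)                       # exact recovery
    ```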

  18. A Colour Image Encryption Scheme Using Permutation-Substitution Based on Chaos

    Directory of Open Access Journals (Sweden)

    Xing-Yuan Wang

    2015-06-01

    Full Text Available An encryption scheme for colour images using a spatiotemporal chaotic system is proposed. Initially, we use the R, G and B components of a colour plain-image to form a matrix. Then the matrix is permuted by using zigzag path scrambling. The resultant matrix is then passed through a substitution process. Finally, the ciphered colour image is obtained from the confused matrix. Theoretical analysis and experimental results indicate that the proposed scheme is both secure and practical, which makes it suitable for encrypting colour images of any size.
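
    The zigzag-path scrambling step can be sketched as follows; the traversal order and the toy matrix are illustrative assumptions, and the substitution stage driven by the spatiotemporal chaotic system is not modelled here.

    ```python
    # Zigzag-path permutation sketch, assuming the R, G and B planes have
    # already been stacked into a single 2-D matrix.
    import numpy as np

    def zigzag_indices(rows, cols):
        """(row, col) pairs in zigzag order over the anti-diagonals."""
        order = []
        for s in range(rows + cols - 1):
            diag = [(i, s - i) for i in range(rows) if 0 <= s - i < cols]
            if s % 2 == 1:
                diag.reverse()                 # alternate traversal direction
            order.extend(diag)
        return order

    def zigzag_permute(mat):
        """Read the matrix along the zigzag path and refill it row by row."""
        idx = zigzag_indices(*mat.shape)
        flat = np.array([mat[i, j] for i, j in idx])
        return flat.reshape(mat.shape)

    m = np.arange(16).reshape(4, 4)
    scrambled = zigzag_permute(m)
    ```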

  19. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  20. The consequences of multiplexing and limited view angle in coded-aperture imaging

    International Nuclear Information System (INIS)

    Smith, W.E.; Barrett, H.H.; Paxman, R.G.

    1984-01-01

    Coded-aperture imaging (CAI) is a method for reconstructing distributions of radionuclide tracers that offers advantages over ECT and PET; namely, many views can be taken simultaneously without detector motion, and large numbers of photons are utilized since collimators are not required. However, because of this type of data acquisition, the coded image suffers from multiplexing; i.e., more than one object point may be mapped to each detector in the coded image. To investigate the dependence of the reconstruction on multiplexing, the authors reconstruct a simulated two-dimensional circular object from multiplexed one-dimensional coded-image data, then perform the reconstruction from un-multiplexed data. Each of these reconstructions is produced both from noise-free and noisy simulated data. To investigate the dependence on view angle, the authors reconstruct two simulated three-dimensional objects: a spherical phantom, and a series of point-like objects arranged nearly in a plane. Each of these reconstructions is from multiplexed two-dimensional coded-image data, first using two orthogonal views, and then a single viewing direction. The two-dimensional reconstructions demonstrate that, in the noise-free case, the multiplexing of the data does not seriously affect the reconstruction quality and that in the noisy-data case, the multiplexing helps, due to the fact that more photons are collected. Also, for point-like objects confined to a near-planar region of space, the authors show that restricted views can give satisfactory results, but that, for a large, three-dimensional object, a more complete viewing geometry is required

  1. Science and Society: The Secret History of Secret Codes

    CERN Multimedia

    2002-01-01

    With the arrival of the Web, encryption has become a major problem for computer security engineers, as well as an international sport for many cyber hackers. But humans have been communicating in code for as long as they have been writing, as Simon Singh points out in his book, 'The Code Book', published in 1999. At the end of the book, there is a series of ten encoded messages, each from a different phase in the history of cryptography. There was a prize of £10,000 for the first person to crack all ten messages. It took a team of five Swedish researchers a year and a month to solve the challenge. Simon Singh can now reveal the story behind the Cipher Challenge and this is what he will do in his lecture at CERN, explaining how mathematics can be used to crack codes, the role of encryption during World War II and how they both help to guarantee security in today's Information Age. Simon Singh, who has a PhD in physics, completed his thesis on the UA2 experiment at CERN. In 1991, he joined the BBC Sc...

  2. Test and Verification of AES Used for Image Encryption

    Science.gov (United States)

    Zhang, Yong

    2018-03-01

    In this paper, an image encryption program based on AES in cipher block chaining mode was designed in the C language. The encryption/decryption speed and security performance of the AES-based image cryptosystem were tested and used to compare the proposed cryptosystem with some existing image cryptosystems based on chaos. Simulation results show that AES can be applied to image encryption, which refutes the widely accepted view that AES is not suitable for image encryption. This paper also suggests taking the speed of AES-based image encryption as the speed benchmark for image encryption algorithms; image encryption algorithms slower than this benchmark should be discarded in practical communications.
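
    As a minimal sketch of AES encryption of raw image bytes in cipher block chaining mode (the paper's C implementation is not reproduced here), the following assumes the PyCryptodome package; the key handling and the input file name are illustrative only.

    ```python
    # AES-CBC encryption of an image file's raw bytes (PyCryptodome).
    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    key = get_random_bytes(16)                  # AES-128 key
    iv = get_random_bytes(16)                   # CBC initialisation vector

    with open("plain_image.bmp", "rb") as f:    # hypothetical input file
        data = f.read()

    cipher = AES.new(key, AES.MODE_CBC, iv)
    ciphertext = cipher.encrypt(pad(data, AES.block_size))

    # Decryption with the same key/IV restores the original bytes.
    plain = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext), AES.block_size)
    assert plain == data
    ```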

  3. MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Vimala

    2012-05-01

    Full Text Available Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. The computational complexity involved is low and the quality of the reconstructed images is appreciable. The normal AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas have been incorporated as part of the AMBTC method to improve the coding efficiency. Generally, quality degrades as the bit-rate is reduced, but in the proposed method the quality of the reconstructed image increases with the decrease in bit-rate. The proposed method has been tested with standard images like Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of bit-rate and the quality of the reconstructed images.
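
    For reference, a minimal sketch of plain AMBTC coding of a single 4 x 4 block (the baseline that the paper improves on) is given below; the bit plane plus the two means amount to the 2 bpp mentioned above, and the example block values are arbitrary.

    ```python
    # Plain AMBTC for one 4 x 4 block: bit plane + high mean + low mean.
    import numpy as np

    def ambtc_encode(block):
        mean = block.mean()
        bitmap = block >= mean                     # 1 bit per pixel
        high = block[bitmap].mean() if bitmap.any() else mean
        low = block[~bitmap].mean() if (~bitmap).any() else mean
        return bitmap, high, low

    def ambtc_decode(bitmap, high, low):
        return np.where(bitmap, high, low)

    block = np.array([[121, 114,  56,  47],
                      [ 37, 200, 247, 255],
                      [ 16,   0,  12, 169],
                      [ 43,   5,   7, 251]], dtype=float)
    bitmap, high, low = ambtc_encode(block)
    reconstructed = ambtc_decode(bitmap, high, low)
    ```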

  4. Third order harmonic imaging for biological tissues using three phase-coded pulses.

    Science.gov (United States)

    Ma, Qingyu; Gong, Xiufen; Zhang, Dong

    2006-12-22

    Compared to the fundamental and the second harmonic imaging, the third harmonic imaging shows significant improvements in image quality due to the better resolution, but it is degraded by the lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB whereas the fundamental and the second harmonic components are efficiently suppressed and SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, the third harmonic images using this technique were performed for several biological tissues and compared with the images obtained by the fundamental and the second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.

  5. Piecewise spectrally band-pass for compressive coded aperture spectral imaging

    International Nuclear Information System (INIS)

    Qian Lu-Lu; Lü Qun-Bo; Huang Min; Xiang Li-Bin

    2015-01-01

    Coded aperture snapshot spectral imaging (CASSI) has been discussed in recent years. It has the remarkable advantages of high optical throughput, snapshot imaging, etc. The entire spatial-spectral data-cube can be reconstructed with just a single two-dimensional (2D) compressive sensing measurement. On the other hand, for less spectrally sparse scenes, the insufficiency of sparse sampling and aliasing in spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper extends the improved CASSI. A band-pass filter array is mounted on the coded mask, and then the first image plane is divided into some continuous spectral sub-band areas. The entire 3D spectral cube can be captured by the relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparing the peak signal-to-noise ratio (PSNR) and the information entropy of the reconstructed images for different numbers of spectral sub-band areas, the reconstructed 3D spectral cube reveals an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels per sub-band simultaneously decreases. (paper)

  6. Image transmission in multicore-fiber code-division multiple access network

    Science.gov (United States)

    Yang, Guu-Chang; Kwong, Wing C.

    1997-01-01

    Recently, two-dimensional (2-D) signature patterns were proposed to encode binary digitized image pixels in optical code-division multiple-access (CDMA) networks with 'multicore' fiber. The new technology enables parallel transmission and simultaneous access of 2-D images in multiple-access environment, where these signature patterns are defined as optical orthogonal signature pattern codes (OOSPCs). However, previous work on OOSPCs assumed that the weight of each signature pattern was the same. In this paper, we construct a new family of OOSPCs with the removal of this assumption. Since varying the weight of a user's signature pattern affects that user's performance, this approach is useful for CDMA optical systems with multiple performance requirements.

  7. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction

  8. A new image cipher in time and frequency domains

    Science.gov (United States)

    Abd El-Latif, Ahmed A.; Niu, Xiamu; Amin, Mohamed

    2012-10-01

    Recently, various encryption techniques based on chaos have been proposed. However, most existing chaotic encryption schemes still suffer from fundamental problems such as small key space, weak security function and slow performance speed. This paper introduces an efficient encryption scheme for still visual data that overcomes these disadvantages. The proposed scheme is based on hybrid Linear Feedback Shift Register (LFSR) and chaotic systems in hybrid domains. The core idea is to scramble the pixel positions based on 2D chaotic systems in the frequency domain. Then, the diffusion is done on the scrambled image based on cryptographic primitive operations and the incorporation of LFSR and chaotic systems as round keys. The hybrid compound of LFSR, chaotic system and cryptographic primitive operations strengthens the encryption performance and enlarges the key space required to resist brute-force attacks. Results of statistical and differential analysis show that the proposed algorithm has high security for secure digital images. Furthermore, it has key sensitivity together with a large key space and is very fast compared to other competitive algorithms.
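
    A minimal sketch of an LFSR keystream generator of the kind that can supply round keys is given below; the 16-bit Fibonacci taps (16, 14, 13, 11), the seed and the XOR usage are illustrative and do not reproduce the paper's hybrid LFSR/chaos construction.

    ```python
    # Fibonacci LFSR keystream sketch (maximal-length 16-bit example taps).
    def lfsr_keystream(seed, n_bytes, taps=(16, 14, 13, 11), width=16):
        state = seed & ((1 << width) - 1)
        assert state != 0, "the all-zero state is forbidden"
        out = []
        for _ in range(n_bytes):
            byte = 0
            for _ in range(8):
                bit = 0
                for t in taps:                       # XOR of the tapped bits
                    bit ^= (state >> (width - t)) & 1
                state = ((state >> 1) | (bit << (width - 1))) & ((1 << width) - 1)
                byte = (byte << 1) | (state & 1)
            out.append(byte)
        return bytes(out)

    stream = lfsr_keystream(seed=0xACE1, n_bytes=8)
    cipher_byte = 0x5A ^ stream[0]        # keystream bytes XORed with pixel bytes
    ```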

  9. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    Science.gov (United States)

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  10. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens generated by Alice serve as the secret key and are shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that the authorized users can recover the original image completely, whereas the eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
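
    A toy sketch of the underlying ghost-imaging correlation reconstruction is given below: random patterns play the role of the shared key, one bucket value is recorded per pattern, and the image is recovered from the second-order correlation. The QR coding and compressive-sensing stages of the scheme are not modelled, and all sizes and pattern counts are illustrative.

    ```python
    # Computational ghost imaging correlation reconstruction (toy example).
    import numpy as np

    rng = np.random.default_rng(1)
    size = 16
    obj = np.zeros((size, size))
    obj[4:12, 6:10] = 1.0                        # simple binary test object

    n_patterns = 4000
    patterns = rng.random((n_patterns, size, size))          # shared patterns
    bucket = np.array([np.sum(p * obj) for p in patterns])   # single-pixel detector

    # Second-order correlation: <B * P> - <B><P>
    recon = np.tensordot(bucket, patterns, axes=1) / n_patterns \
            - bucket.mean() * patterns.mean(axis=0)
    ```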

  11. Information hiding technology and application analysis based on decimal expansion of irrational numbers

    Science.gov (United States)

    Liu, Xiaoyong; Lu, Pei; Shao, Jianxin; Cao, Haibin; Zhu, Zhenmin

    2017-10-01

    In this paper, an information hiding method that uses the decimal expansion of irrational numbers to generate random phase masks is proposed. Firstly, the decimal expansion parts of irrational numbers are used to generate pseudo-random sequences with a new coding scheme; the irrational number and the start and end bit positions are used as keys in image information hiding. Secondly, we apply the coding scheme to the double phase encoding system, where the pseudo-random sequences are used to generate the random phase masks. The mean square error is used to evaluate the quality of the recovered image information. Finally, two tests were carried out to verify the security of our method; the experimental results demonstrate that the cipher image has strong robustness, key sensitivity, and resistance to brute-force attack.

  12. Secure transmission of images based on chaotic systems and cipher block chaining

    Science.gov (United States)

    Lakhani, Mahdieh Karimi; Behnam, Hamid; Karimi, Arash

    2013-01-01

    The ever-growing penetration of communication networks, digital technologies and the Internet into our everyday lives has made the transmission of text data, as well as multimedia data such as images and videos, possible. Digital images have a vast usage in a number of applications, including medicine and security authentication, for example. This applicability becomes evident when images, such as gait or people's facial features, are utilized for their identification. Considering the required security level and the properties of images, different algorithms may be used. After key generation using logistic chaos signals, a scrambling function is utilized for image agitation along both the horizontal and vertical axes, and then a block-chaining mode of operation may be applied to encrypt the resultant image. The results demonstrate that using the proposed method drastically degrades the correlation between the image components and also increases the entropy to an acceptable level. Therefore, the image becomes highly resistant to differential attacks. Moreover, increasing the number of scrambling rounds and decreasing the number of bits per block result in increased entropy and decreased correlation.
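
    A minimal sketch of logistic-map key generation combined with byte-wise XOR encryption is given below; the parameters (x0, r), the transient-discard length and the toy pixel vector are illustrative, and the scrambling and block-chaining stages of the scheme are not shown.

    ```python
    # Logistic-map keystream generation and XOR encryption (sketch only).
    import numpy as np

    def logistic_keystream(x0, r, n, discard=1000):
        x = x0
        for _ in range(discard):            # discard the transient
            x = r * x * (1.0 - x)
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256     # map chaotic state to a byte
        return out

    pixels = np.random.default_rng(2).integers(0, 256, 32, dtype=np.uint8)
    ks = logistic_keystream(x0=0.3456, r=3.99, n=pixels.size)
    cipher = pixels ^ ks                    # decryption repeats the same XOR
    assert np.array_equal(cipher ^ ks, pixels)
    ```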

  13. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    Science.gov (United States)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using the reflected Gray code for colored quantum images, and the embedding capacity of this scheme is up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
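
    For reference, a minimal sketch of the reflected Gray code conversions that drive the embedding rule is given below; the 3-bit example is illustrative, and the quantum-circuit realisation is of course not modelled.

    ```python
    # Reflected (binary-reflected) Gray code conversions.
    def binary_to_gray(b: int) -> int:
        return b ^ (b >> 1)

    def gray_to_binary(g: int) -> int:
        b = 0
        while g:
            b ^= g
            g >>= 1
        return b

    # 3-bit example: consecutive Gray codes differ in exactly one bit, which is
    # why the stego LSBs differ from the secret bits about half of the time.
    codes = [binary_to_gray(i) for i in range(8)]   # [0, 1, 3, 2, 6, 7, 5, 4]
    assert all(gray_to_binary(c) == i for i, c in enumerate(codes))
    ```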

  14. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  15. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients to be used in encoding depends on the compression rate, and these bit planes are used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect the information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, the Optimal Pixel Adjustment Process (OPAP) is applied on stego images to enhance their visual quality. Experimental results show that the proposed technique provides large embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.

  16. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available In photography, face recognition and face retrieval play an important role in many applications such as security, criminology and image forensics. Advancements in face recognition make identity matching of an individual with attributes easier. The latest developments in computer vision technologies enable us to extract facial attributes from the input image and provide similar image results. In this paper, we propose a novel LOP and sparse codewords method to provide similar matching results with respect to an input query image. To improve the accuracy of image results with respect to the input image and dynamic facial attributes, the Local Octal Pattern (LOP) algorithm and sparse codewords are applied both offline and online. The offline and online procedures of the face image binning technique are applied with sparse codewords. Experimental results on the Pubfig dataset show that the proposed LOP along with sparse codewords is able to provide matching results with an increased accuracy of 90%.

  17. Random mask optimization for fast neutron coded aperture imaging

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, Kyle [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Univ. of California, Los Angeles, CA (United States); Marleau, Peter [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brubaker, Erik [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-05-01

    In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma rays are being imaged, the optimum is seldom achieved. In this case more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.

  18. Subjective assessment of impairment in scale-space-coded images

    NARCIS (Netherlands)

    Ridder, de H.; Majoor, G.M.M.

    1988-01-01

    Direct category scaling and a scaling procedure in accordance with Functional Measurement Theory (Anderson, 1982) have been used to assess impairment in scale-space-coded images, displayed on a black-and-white TV monitor. The image of a complex scene was passed through a Gaussian filter of limited

  19. An Improved Image Encryption Algorithm Based on Cyclic Rotations and Multiple Chaotic Sequences: Application to Satellite Images

    Directory of Open Access Journals (Sweden)

    MADANI Mohammed

    2017-10-01

    Full Text Available In this paper, a new satellite image encryption algorithm based on the combination of multiple chaotic systems and a random cyclic rotation technique is proposed. Our contribution consists in implementing three different chaotic maps (logistic, sine, and standard) combined to improve the security of satellite images. Besides enhancing the encryption, the proposed algorithm also focuses on advanced efficiency of the ciphered images. Compared with classical encryption schemes based on multiple chaotic maps and the Rubik's cube rotation, our approach has not only the same merits of chaos systems like high sensitivity to initial values, unpredictability, and pseudo-randomness, but also other advantages like a higher number of permutations, better performances in Peak Signal to Noise Ratio (PSNR) and Maximum Deviation (MD).

  20. Improved Asymmetric Cipher Based on Matrix Power Function with Provable Security

    Directory of Open Access Journals (Sweden)

    Eligijus Sakalauskas

    2017-01-01

    Full Text Available The improved version of the author's previously declared asymmetric cipher protocol based on a matrix power function (MPF) is presented. The proposed modification avoids the discrete logarithm attack (DLA) which could be applied to the previously declared protocol. This attack allows us to transform the initial system of MPF equations into a so-called matrix multivariate quadratic (MMQ) system of equations, which is a system representing a subclass of multivariate quadratic (MQ) systems of equations. We conjecture that avoidance of DLA in the protocol presented here should increase its security, since an attempt to solve the initial system of MPF equations would appear to be no less complex than solving the system of MMQ equations. No algorithms are known to solve such a system of equations. Security parameters and their secure values are defined. A security analysis against chosen plaintext attack (CPA) and chosen ciphertext attack (CCA) is presented. The measures taken to prevent the DLA attack increase the security of this protocol with respect to the previously declared protocol.

  1. Coded aperture imaging and the introduction of the modulated zone plate in nuclear medicine

    International Nuclear Information System (INIS)

    Berg, C.J.M. van den

    1976-01-01

    Imaging radioactive distributions is an elementary problem in nuclear medicine. There are no media with refracting properties large enough to obtain a gamma lens. At this moment the images in nuclear medicine are produced with the help of collimators. The disadvantages of the use of collimators are: limited resolution; low efficiency; only a small fraction of the total emitted radiation is detected; without special techniques a collimator cannot produce tomographic images. Recent developments in coded aperture imaging attempt to overcome these disadvantages. One of the coded apertures is the Fresnel Zone Plate. In order to understand its use, some of its optical properties are briefly discussed

  2. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  3. ℓ2-Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also it achieves higher PSNR for relatively high bit rates.
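
    The near-lossless building block that the paper starts from can be sketched as follows: a uniform scalar quantizer of prediction residuals with step 2*delta + 1, which guarantees an ℓ∞ error of at most delta per pixel. The paper replaces this with context-based ℓ2-optimized quantizers; the sketch below only illustrates the baseline bound, and the values are illustrative.

    ```python
    # Uniform residual quantizer with a guaranteed per-pixel l_inf bound.
    import numpy as np

    def quantize_residual(e, delta):
        return np.floor((e + delta) / (2 * delta + 1)).astype(int)

    def dequantize(idx, delta):
        return idx * (2 * delta + 1)

    delta = 2
    residuals = np.arange(-20, 21)                     # example prediction errors
    indices = quantize_residual(residuals, delta)
    recon = dequantize(indices, delta)
    assert np.all(np.abs(residuals - recon) <= delta)  # the l_inf bound holds
    ```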

  4. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for large image dimensions is increasing considerably. To transmit or to store such images (more than 6000 by 6000 pixels), we need to reduce their data volume and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme, based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures among all of the 3x3x2 combinations available. Because, for technological reasons, real-time performance is not always reached (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded and might be used for other applications on board. (author) [fr

  5. Quantum image pseudocolor coding based on the density-stratified method

    Science.gov (United States)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make the images more attractive or to highlight some parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density values from gray to color in parallel according to the colormap. Firstly, two data structures, the quantum image GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on them, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help us to describe the scheme further. Finally, future work is discussed.

  6. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The image edge gradient direction not only contains important shape information, but also has a simple, low-complexity characteristic. Considering that edge gradient direction histograms and the edge direction autocorrelogram do not have rotation invariance, we put forward an image retrieval algorithm based on the edge gradient orientation statistical code (hereinafter referred to as EGOSC), which transfers the statistical method used for eight-neighborhood chain-code edge directions to the statistics of the edge gradient direction. Firstly, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure that the algorithm is effectively rotation-invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that this method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.

  7. Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.

    Science.gov (United States)

    Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven

    2014-11-01

    Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that are not possible to filter in the time and frequency domains. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the B-mode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed by using linear frequency modulated chirp excitation with varying bandwidths of 10% to 50%. Superharmonic imaging was performed on a wire phantom using a wideband chirp excitation. Results were presented with and without applying the FChT filtering technique by comparing the spatial resolution and side lobe levels. Wideband excitation signals achieved a better resolution as expected; however, range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth. The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without
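
    The underlying chirp-excitation and pulse-compression step can be sketched as follows; the sampling rate, bandwidth, duration and echo delay are illustrative, and the harmonic matched filters and fan chirp transform of the actual method are not modelled.

    ```python
    # Linear-FM (chirp) excitation and matched-filter pulse compression (toy example).
    import numpy as np

    fs = 100e6                      # sampling rate [Hz]
    T = 10e-6                       # chirp duration [s]
    f0, f1 = 2e6, 4e6               # 3 MHz centre, ~50% fractional bandwidth
    t = np.arange(0, T, 1 / fs)
    k = (f1 - f0) / T               # chirp rate
    chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

    received = np.concatenate([np.zeros(500), chirp, np.zeros(500)])  # delayed echo
    matched = chirp[::-1]                               # matched (compression) filter
    compressed = np.convolve(received, matched, mode="same")
    peak = np.argmax(np.abs(compressed))                # echo location after compression
    ```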

  8. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    Science.gov (United States)

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally distinct colors, enabling the imaging of color-coded cancer cells growing in vivo and therefore the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to highly metastatic ones, can be imaged with color coding in vivo. Cancer stem-like and non-stem cells can be distinguished in vivo by color-coded imaging. These properties also demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  9. Displacement measurement with nanoscale resolution using a coded micro-mark and digital image correlation

    Science.gov (United States)

    Huang, Wei; Ma, Chengfu; Chen, Yuhang

    2014-12-01

    A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining common optical microscopy imaging of a specially coded nonperiodic microstructure, namely a two-dimensional zero-reference mark (2-D ZRM), with subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.

  10. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    ... or segmentation maps are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding. Tested on individual images of standard TV resolution binary shapes and the binary layers of a digital map, the proposed algorithm outperforms PWC, JBIG, JBIG2, and MPEG-4 CAE. On the binary shapes, the code lengths are reduced by 21%, 27%, 28%, and 41...

  11. Coded aperture imaging: the modulation transfer function for uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1980-01-01

    Coded aperture imaging uses many pinholes to increase the SNR for intrinsically weak sources when the radiation can be neither reflected nor refracted. Effectively, the signal is multiplexed onto an image and then decoded, often by a computer, to form a reconstructed image. We derive the modulation transfer function (MTF) of such a system employing uniformly redundant arrays (URA). We show that the MTF of a URA system is virtually the same as the MTF of an individual pinhole regardless of the shape or size of the pinhole. Thus, only the location of the pinholes is important for optimum multiplexing and decoding. The shape and size of the pinholes can then be selected based on other criteria. For example, one can generate self-supporting patterns, useful for energies typically encountered in the imaging of laser-driven compressions or in soft x-ray astronomy. Such patterns contain holes that are all the same size, easing the etching or plating fabrication efforts for the apertures. A new reconstruction method is introduced called delta decoding. It improves the resolution capabilities of a coded aperture system by mitigating a blur often introduced during the reconstruction step

  12. HD Photo: a new image coding technology for digital photography

    Science.gov (United States)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  13. A Fast Enhanced Secure Image Chaotic Cryptosystem Based on Hybrid Chaotic Magic Transform

    Directory of Open Access Journals (Sweden)

    Srinivas Koppu

    2017-01-01

    Full Text Available An enhanced secure image chaotic cryptosystem is proposed based on a hybrid CMT-Lanczos algorithm. We have achieved fast encryption and decryption along with privacy of images. A pseudorandom generator has been used along with the Lanczos algorithm to generate root characteristics and eigenvectors. Using the hybrid CMT, image pixels are shuffled to accomplish excellent randomness. Compared with existing methods, the proposed method has more robustness to various attacks: brute-force attack, known-plaintext and chosen-plaintext attacks, security key space, key sensitivity, correlation analysis and information entropy, and differential attacks. Simulation results show that the proposed method gives better results in protecting images with low time complexity.

  14. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    Science.gov (United States)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.

  15. An imaging method of wavefront coding system based on phase plate rotation

    Science.gov (United States)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Based on the theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper analyzes and utilizes the feature that the phase function expression is invariant in the new coordinate system when the phase plate rotates by different angles around the z-axis, and we propose a method based on phase plate rotation and image fusion. First, when the phase plate is rotated by a certain angle around the z-axis, the shape and distribution of the PSF obtained on the image plane remain unchanged; its rotation angle and direction are consistent with those of the phase plate. Then, the intermediate blurred image is filtered with the correspondingly rotated point spread function. Finally, the reconstructed images are fused by the Laplacian pyramid image fusion method and the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. In this paper, we used Matlab to simulate the images. By using the Laplacian pyramid image fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity is increased by 11%-15%, and the average gradient is increased by 4%-9%. By using the Fourier transform spectrum fusion method, the signal-to-noise ratio of the image is increased by 14%-23%, the clarity is increased by 6%-11%, and the average gradient is improved by 2%-6%. The experimental results show that image processing by the above method can improve the quality and clarity of the restored image and effectively preserve the image information.

  16. Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm

    Science.gov (United States)

    Hardi, S. M.; Tarigan, J. T.; Safrina, N.

    2018-03-01

    In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm to perform the asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows Operating System. Our test shows that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.

  17. Breaking teleprinter ciphers at Bletchley Park an edition of I.J. Good, D. Michie and G. Timms : general report on tunny with emphasis on statistical methods (1945)

    CERN Document Server

    Diffie, W; Field, J

    2015-01-01

    This detailed technical account of breaking Tunny is an edition of a report written in 1945, with extensive modern commentary. Breaking Teleprinter Ciphers at Bletchley Park gives the full text of the General Report on Tunny (GRT) of 1945, making clear how the ideas, notation and the specially designed machines that were used differ from what was generally accepted in 1945, and, where a modern reader might be misled, from what is understood now. The editors of this book clarify the sometimes slightly strange language of the GRT and explain the text within a variety of contexts in several separate historical story lines, some only implicit in the GRT itself. The first story, told by the authors of the GRT, describes how, using specially designed machines, including from 1944 the "Colossus", the British broke the enciphered teleprinter messages sent by the highest command levels of the German Army. The cipher machines the Germans used were the Lorenz SZ 40 series, called "Tunny" by the British. The second stor...

  18. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  19. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  20. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
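
    A software sketch of the three bio-inspired stages described above (logarithmic transform, local average subtraction, gain control) is given below, assuming NumPy and SciPy are available; the window size and gain are illustrative, and the FPGA, resistive-network and APS hardware details are of course not modelled.

    ```python
    # Software analogue of the sensor's coding pipeline (sketch only).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def bio_inspired_encode(img, window=9, gain=2.0):
        log_img = np.log1p(img.astype(float))               # compress dynamic range
        local_mean = uniform_filter(log_img, size=window)   # resistive-network analogue
        contrast = log_img - local_mean                      # local average subtraction
        return np.clip(gain * contrast + 0.5, 0.0, 1.0)      # feedback gain + offset

    frame = np.random.default_rng(3).integers(0, 4096, (64, 64))  # toy 12-bit frame
    encoded = bio_inspired_encode(frame)
    ```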

  1. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  2. Image Quality Assessment via Quality-aware Group Sparse Coding

    Directory of Open Access Journals (Sweden)

    Minglei Tong

    2014-12-01

    Full Text Available Image quality assessment has been attracting growing attention at an accelerated pace over the past decade, in the fields of image processing, vision and machine learning. In particular, general purpose blind image quality assessment is technically challenging and many state-of-the-art approaches have been developed to solve this problem, most under the supervised learning framework where human-scored samples are needed for training a regression model. In this paper, we propose an unsupervised learning approach that works without human labels. In the off-line stage, our method trains a dictionary covering image-quality patch atoms at different quality levels across the training samples without knowing the human score, where each atom is associated with a quality score induced from the reference image; in the on-line stage, given each image patch, our method performs group sparse coding to encode the sample, such that the sample quality can be estimated from the few labeled atoms whose encoding coefficients are nonzero. Experimental results on the public dataset show the promising performance of our approach, and future research directions are also discussed.

  3. Image enhancement using MCNP5 code and MATLAB in neutron radiography.

    Science.gov (United States)

    Tharwat, Montaser; Mohamed, Nader; Mongy, T

    2014-07-01

    This work presents a method that can be used to enhance the neutron radiography (NR) image for objects with high-scattering materials like hydrogen, carbon and other light materials. This method used the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image and determine the scattered neutron distribution that caused image blur, and then used MATLAB to subtract this scattered neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in Jan. 2013. The MATLAB enhancement method is quite a good technique in the case of static film-based neutron radiography, while in the neutron imaging (NI) technique, image enhancement and quantitative measurement were efficient by using ImageJ software. The enhanced image quality and quantitative measurements were presented in this work. Copyright © 2014 Elsevier Ltd. All rights reserved.
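
    The enhancement step described here reduces to subtracting a simulated scattered-neutron image from the measured radiograph. The fragment below is a minimal numpy stand-in for that MATLAB step, assuming the MCNP5 scatter map has already been exported on the same pixel grid as the radiograph.

      import numpy as np

      def subtract_scatter(measured, simulated_scatter):
          """Remove the simulated scattered-neutron contribution from a radiograph.
          Both inputs are 2D arrays on the same pixel grid; the output is clipped at zero
          and rescaled to the original intensity range for display."""
          corrected = measured.astype(float) - simulated_scatter.astype(float)
          corrected = np.clip(corrected, 0, None)
          if corrected.max() > 0:
              corrected *= measured.max() / corrected.max()
          return corrected

      # illustrative use: a blurred, low-frequency scatter field degrading a sharp object
      rng = np.random.default_rng(2)
      obj = np.zeros((128, 128)); obj[40:90, 40:90] = 1.0
      scatter = 0.4 * np.outer(np.hanning(128), np.hanning(128))
      radiograph = obj + scatter + 0.01 * rng.standard_normal((128, 128))
      enhanced = subtract_scatter(radiograph, scatter)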

  4. A Comparative Study on Diagnostic Accuracy of Colour Coded Digital Images, Direct Digital Images and Conventional Radiographs for Periapical Lesions – An In Vitro Study

    Science.gov (United States)

    Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir

    2014-01-01

    Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to evaluate the feasibility and diagnostic accuracy of colour-coded digital radiographs in terms of the presence and size of lesions, and to compare the diagnostic accuracy of colour-coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of the round burs, a conventional, an RVG and a colour-coded image were taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver agreement (kappa > 0.61) for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur. Both were equally diagnostic for lesions made with larger bur sizes. The colour-coding method was the least accurate among all the techniques. Conclusion: Conventional radiography traditionally forms the backbone of the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiography was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with emphasis on the safety of patients and the diagnostic quality of images. PMID:25584318

  5. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  6. Stand-alone front-end system for high- frequency, high-frame-rate coded excitation ultrasonic imaging.

    Science.gov (United States)

    Park, Jinhyoung; Hu, Changhong; Shung, K Kirk

    2011-12-01

    A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 V(pp). The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO(3)) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the -6-dB axial resolution of a chirp-coded excitation was 50 μm and the lateral resolution was 120 μm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart in the image could be differentiated because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation.

  7. Biometric iris image acquisition system with wavefront coding technology

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived by the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation of the working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system with a pupil diameter of 86 mm and CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation result as well as experiment validates the proposed code

  8. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of multimedia data including images and video is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests is performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm has the same security level as the A5/1 and W7 stream ciphers, is better in terms of speed of performance, and can be used for real-time applications.
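
    The abstract does not reproduce the cipher itself, so the sketch below only illustrates the generic pattern such lightweight image stream ciphers follow: a keyed pseudo-random keystream XORed with the pixel bytes, with decryption being the same operation. The xorshift-style keystream generator is purely illustrative and is not the algorithm evaluated in the paper.

      import numpy as np

      def keystream(key, n):
          """Illustrative byte keystream expanded from a 64-bit key (xorshift64)."""
          mask = (1 << 64) - 1
          state = (key & mask) or 0x9E3779B97F4A7C15
          out = bytearray()
          for _ in range(n):
              state ^= (state << 13) & mask
              state ^= state >> 7
              state ^= (state << 17) & mask
              out.append(state & 0xFF)
          return np.frombuffer(bytes(out), dtype=np.uint8)

      def stream_encrypt(image, key):
          """XOR every pixel byte with the keystream; decryption is the same call."""
          flat = image.astype(np.uint8).reshape(-1)
          return (flat ^ keystream(key, flat.size)).reshape(image.shape)

      img = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)  # stand-in image
      enc = stream_encrypt(img, 0xDEADBEEF)
      dec = stream_encrypt(enc, 0xDEADBEEF)
      assert np.array_equal(img, dec)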

  9. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation that drives both the development and the dissemination of new communication systems with ever-increasing fidelity and resolution. Research interest in image processing techniques is driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers highlight techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. They investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, the necessary and sufficient conditions, their properties and their implementation complexity. Motivated by prior advancements in image processing, the researchers compare the performance of several contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform. The paper shows the potential of the Integer Wavelet Transform as an efficient pre-processing scheme.

  10. Coherent diffractive imaging using randomly coded masks

    Energy Technology Data Exchange (ETDEWEB)

    Seaberg, Matthew H., E-mail: seaberg@slac.stanford.edu [CNRS and D.I., UMR 8548, École Normale Supérieure, 45 Rue d'Ulm, 75005 Paris (France); Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); D'Aspremont, Alexandre [CNRS and D.I., UMR 8548, École Normale Supérieure, 45 Rue d'Ulm, 75005 Paris (France); Turner, Joshua J. [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States)

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm known for its stability and for its ability to find the global solution, which can be found efficiently and which is robust to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.

  11. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required high-resolution coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A mixture of Gaussians distribution is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results. However, using the random Gaussian and Toeplitz phase masks can achieve a higher-resolution reconstructed image. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  12. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photo absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
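
    For readers unfamiliar with the reconstruction loop mentioned above, the fragment below shows a generic CPU sketch of an OSEM update with a small dense system matrix; in the paper the forward projection is instead carried out by GPU-based Monte Carlo photon tracking, which is not reproduced here.

      import numpy as np

      def osem(A, g, n_iter=6, n_subsets=10):
          """Generic OSEM: A is the (detector_bins x voxels) system matrix,
          g the measured projection data. The MC-based variant in the paper
          replaces A @ f (forward projection) by a GPU photon simulation."""
          n_bins, n_vox = A.shape
          f = np.ones(n_vox)
          subsets = np.array_split(np.arange(n_bins), n_subsets)
          for _ in range(n_iter):
              for s in subsets:
                  proj = A[s] @ f + 1e-12                  # forward project current estimate
                  ratio = g[s] / proj                      # compare with measured data
                  f *= (A[s].T @ ratio) / (A[s].sum(axis=0) + 1e-12)  # multiplicative update
          return f

      # toy usage: random positive system matrix and a known activity distribution
      rng = np.random.default_rng(3)
      A = rng.random((200, 50))
      f_true = rng.random(50)
      g = rng.poisson(A @ f_true * 50) / 50.0
      f_rec = osem(A, g)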

  13. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    Science.gov (United States)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
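
    The double random phase encoding step that this record (and its duplicate below) builds on can be sketched in a few lines of numpy: the input is multiplied by one random phase mask in the spatial domain and by a second one in the Fourier domain. The fingerprint-derived keys and the Fourier-hologram bit coding proposed in the paper are not reproduced; the masks here are ordinary pseudo-random phases.

      import numpy as np

      def drpe_encrypt(img, phase1, phase2):
          """Double random phase encoding: multiply by a random phase mask in the
          input plane, Fourier transform, multiply by a second mask, inverse FT."""
          return np.fft.ifft2(np.fft.fft2(img * np.exp(1j * phase1)) * np.exp(1j * phase2))

      def drpe_decrypt(cipher, phase1, phase2):
          """Decryption reverses the two phase modulations (the modulus removes the first mask)."""
          field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-1j * phase2))
          return np.abs(field * np.exp(-1j * phase1))

      rng = np.random.default_rng(4)
      bits = rng.integers(0, 2, size=(64, 64)).astype(float)   # bit-pattern image
      p1, p2 = 2 * np.pi * rng.random((2, 64, 64))
      enc = drpe_encrypt(bits, p1, p2)
      dec = drpe_decrypt(enc, p1, p2)
      print("max reconstruction error:", np.max(np.abs(dec - bits)))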

  14. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    International Nuclear Information System (INIS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-01-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method. (paper)

  15. Image Encryption Scheme Based on Balanced Two-Dimensional Cellular Automata

    Directory of Open Access Journals (Sweden)

    Xiaoyan Zhang

    2013-01-01

    Full Text Available Cellular automata (CA) are simple models of computation which exhibit fascinatingly complex behavior. Due to the universality of the CA model, it has been widely applied in traditional cryptography and image processing. The aim of this paper is to present a new image encryption scheme based on balanced two-dimensional cellular automata. In this scheme, a random image with the same size as the plain image to be encrypted is first generated by a pseudo-random number generator with a seed. Then, the random image is evolved alternately with two balanced two-dimensional CA rules. Finally, the cipher image is obtained by operating bitwise XOR on the final evolution image and the plain image. This proposed scheme possesses some advantages such as a very large key space, high randomness, complex cryptographic structure, and pretty fast encryption/decryption speed. Simulation results obtained from some classical images from the USC-SIPI database demonstrate the strong performance of the proposed image encryption scheme.
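
    A minimal sketch of the encryption pattern described above, assuming a simple XOR-type two-dimensional CA rule (the balanced rules used in the paper are not specified in the abstract): a seeded random image is evolved by the CA and then XORed with the plain image.

      import numpy as np

      def ca_step(grid):
          """One synchronous step of an illustrative 2D CA: each cell becomes the
          XOR of itself and its four von Neumann neighbours (toroidal boundary)."""
          return (grid
                  ^ np.roll(grid, 1, axis=0) ^ np.roll(grid, -1, axis=0)
                  ^ np.roll(grid, 1, axis=1) ^ np.roll(grid, -1, axis=1))

      def ca_encrypt(plain_bits, seed, steps=16):
          """Evolve a seeded random binary image, then XOR it with the plain image.
          Decryption regenerates the same evolved image from the shared seed."""
          rng = np.random.default_rng(seed)
          key_img = rng.integers(0, 2, size=plain_bits.shape, dtype=np.uint8)
          for _ in range(steps):
              key_img = ca_step(key_img)
          return plain_bits ^ key_img

      plain = np.random.default_rng(5).integers(0, 2, size=(64, 64), dtype=np.uint8)
      cipher = ca_encrypt(plain, seed=12345)
      assert np.array_equal(plain, ca_encrypt(cipher, seed=12345))  # XOR is symmetric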

  16. Applied algebra codes, ciphers and discrete algorithms

    CERN Document Server

    Hardy, Darel W; Walker, Carol L

    2009-01-01

    This book attempts to show the power of algebra in a relatively simple setting. (Mathematical Reviews, 2010) … The book supports learning by doing. In each section we can find many examples which clarify the mathematics introduced in the section, and each section is followed by a series of exercises of which approximately half are solved at the end of the book. Additionally, the book comes with a CD-ROM containing an interactive version of the book powered by the computer algebra system Scientific Notebook. … the mathematics in the book are developed as needed and the focus of the book lies clearly o

  17. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    Science.gov (United States)

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age of 24 years) and 19 older adults (average age of 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The variance analysis indicates a greater decrease with age in the concreteness effect. The major contribution of our study rests on the fact that the decline with age of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  18. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    Science.gov (United States)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then, the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bit steganographic scheme. The analysis results show that, while enjoying faster computation, our method performs close to optimal in terms of accuracy.
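
    The last step of the pipeline above, hiding the already RSA-encrypted session key inside the ciphered image, reduces to least-significant-bit embedding. The sketch below shows only that LSB step; the AES and RSA stages are assumed to have been performed with a standard cryptographic library and are represented here by an opaque byte string.

      import numpy as np

      def lsb_embed(cover, payload: bytes):
          """Hide payload bytes in the least significant bits of a uint8 image."""
          bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
          flat = cover.reshape(-1).copy()
          if bits.size > flat.size:
              raise ValueError("cover image too small for payload")
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
          return flat.reshape(cover.shape)

      def lsb_extract(stego, n_bytes):
          """Recover n_bytes previously hidden with lsb_embed."""
          bits = stego.reshape(-1)[: n_bytes * 8] & 1
          return np.packbits(bits).tobytes()

      cover = np.random.default_rng(6).integers(0, 256, size=(128, 128), dtype=np.uint8)
      encrypted_key = b"\x13\x37" * 16          # stands in for the RSA-encrypted secret key
      stego = lsb_embed(cover, encrypted_key)
      assert lsb_extract(stego, len(encrypted_key)) == encrypted_key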

  19. Practical security analysis of a quantum stream cipher by the Yuen 2000 protocol

    International Nuclear Information System (INIS)

    Hirota, Osamu

    2007-01-01

    There exists a great gap between one-time pad with perfect secrecy and conventional mathematical encryption. The Yuen 2000 (Y00) protocol or αη scheme may provide a protocol which covers from the conventional security to the ultimate one, depending on implementations. This paper presents the complexity-theoretic security analysis on some models of the Y00 protocol with nonlinear pseudo-random-number-generator and quantum noise diffusion mapping (QDM). Algebraic attacks and fast correlation attacks are applied with a model of the Y00 protocol with nonlinear filtering like the Toyocrypt stream cipher as the running key generator, and it is shown that these attacks in principle do not work on such models even when the mapping between running key and quantum state signal is fixed. In addition, a security property of the Y00 protocol with QDM is clarified. Consequently, we show that the Y00 protocol has a potential which cannot be realized by conventional cryptography and that it goes beyond mathematical encryption with physical encryption

  20. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logarithms in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and results in an enlargement of the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, then the result of the compression was encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved an average compression ratio of 62.49% and an average space saving of 37.51%.
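
    For the compression half of the scheme, the snippet below is a plain-Python sketch of Elias delta coding of positive integers (encoder and decoder); the mapping from image data to integer symbols used in the paper is not shown.

      def elias_delta_encode(n: int) -> str:
          """Elias delta code of a positive integer, returned as a bit string."""
          assert n >= 1
          bin_n = bin(n)[2:]
          L = len(bin_n) - 1                 # n uses L+1 bits
          gamma = "0" * (len(bin(L + 1)[2:]) - 1) + bin(L + 1)[2:]   # Elias gamma of L+1
          return gamma + bin_n[1:]           # append n without its leading 1

      def elias_delta_decode(bits: str) -> int:
          """Decode a single Elias delta codeword from the front of a bit string."""
          m = 0
          while bits[m] == "0":
              m += 1
          length = int(bits[m: m + m + 1], 2)            # = L + 1
          rest = bits[m + m + 1: m + m + 1 + length - 1]  # the low-order L bits of n
          return int("1" + rest, 2)

      for value in (1, 2, 17, 1000):
          code = elias_delta_encode(value)
          assert elias_delta_decode(code) == value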

  1. Coded Aperture Nuclear Scintigraphy: A Novel Small Animal Imaging Technique

    Directory of Open Access Journals (Sweden)

    Dawid Schellingerhout

    2002-10-01

    Full Text Available We introduce and demonstrate the utility of coded aperture (CA) nuclear scintigraphy for imaging small animals. CA imaging uses multiple pinholes in a carefully designed mask pattern, mounted on a conventional gamma camera. System performance was assessed using point sources and phantoms, while several animal experiments were performed to test the usefulness of the imaging system in vivo, with commonly used radiopharmaceuticals. The sensitivity of the CA system for 99mTc was 4.2 × 10⁻³ cps/Bq (9400 cpm/μCi), compared to 4.4 × 10⁻⁴ cps/Bq (990 cpm/μCi) for a conventional collimator system. The system resolution was 1.7 mm, as compared to 4–6 mm for the conventional imaging system (using a high-sensitivity low-energy collimator). Animal imaging demonstrated artifact-free imaging with superior resolution and image quality compared to conventional collimator images in several mouse and rat models. We conclude that: (a) CA imaging is a useful nuclear imaging technique for small animal imaging. The advantage in signal-to-noise can be traded to achieve higher resolution, decreased dose or reduced imaging time. (b) CA imaging works best for images where activity is concentrated in small volumes; a low-count outline may be better demonstrated using conventional collimator imaging. Thus, CA imaging should be viewed as a technique to complement rather than replace traditional nuclear imaging methods. (c) CA hardware and software can be readily adapted to existing gamma cameras, making their implementation a relatively inexpensive retrofit to most systems.

  2. Learning Joint-Sparse Codes for Calibration-Free Parallel MR Imaging.

    Science.gov (United States)

    Wang, Shanshan; Tan, Sha; Gao, Yuan; Liu, Qiegen; Ying, Leslie; Xiao, Taohui; Liu, Yuanyuan; Liu, Xin; Zheng, Hairong; Liang, Dong

    2018-01-01

    The integration of compressed sensing and parallel imaging (CS-PI) has shown an increased popularity in recent years to accelerate magnetic resonance (MR) imaging. Among them, calibration-free techniques have presented encouraging performances due to their capability in robustly handling the sensitivity information. Unfortunately, existing calibration-free methods have only explored joint-sparsity with direct analysis transform projections. To further exploit joint-sparsity and improve reconstruction accuracy, this paper proposes to Learn joINt-sparse coDes for caliBration-free parallEl mR imaGing (LINDBERG) by modeling the parallel MR imaging problem as a joint minimization objective, with an l2 norm constraining data fidelity, a Frobenius norm enforcing sparse representation error and a mixed l2,1 norm triggering joint sparsity across multichannels. A corresponding algorithm has been developed to alternately update the sparse representation, sensitivity encoded images and k-space data. Then, the final image is produced as the square root of the sum of squares of all channel images. Experimental results on both physical phantom and in vivo data sets show that the proposed method is comparable and even superior to state-of-the-art CS-PI reconstruction approaches. Specifically, LINDBERG has presented strong capability in suppressing noise and artifacts while reconstructing MR images from highly undersampled multichannel measurements.

  3. Encryption and watermark-treated medical image against hacking disease-An immune convention in spatial and frequency domains.

    Science.gov (United States)

    Lakshmi, C; Thenmozhi, K; Rayappan, John Bosco Balaguru; Amirtharajan, Rengarajan

    2018-06-01

    Digital Imaging and Communications in Medicine (DICOM) is one among the significant formats used worldwide for the representation of medical images. Undoubtedly, medical-image security plays a crucial role in telemedicine applications. Merging encryption and watermarking in medical-image protection paves the way for enhancing the authentication and safer transmission over open channels. In this context, the present work on DICOM image encryption has employed a fuzzy chaotic map for encryption and the Discrete Wavelet Transform (DWT) for watermarking. The proposed approach overcomes the limitation of the Arnold transform, one of the most utilised confusion mechanisms in image ciphering. Various metrics have substantiated the effectiveness of the proposed medical-image encryption algorithm. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Image enhancement using MCNP5 code and MATLAB in neutron radiography

    International Nuclear Information System (INIS)

    Tharwat, Montaser; Mohamed, Nader; Mongy, T.

    2014-01-01

    This work presents a method that can be used to enhance the neutron radiography (NR) image for objects with high-scattering materials like hydrogen, carbon and other light materials. This method used the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image and determine the scattered neutron distribution that caused image blur, and then used MATLAB to subtract this scattered neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in Jan. 2013. The MATLAB enhancement method is quite a good technique in the case of static film-based neutron radiography, while in the neutron imaging (NI) technique, image enhancement and quantitative measurement were efficient by using ImageJ software. The enhanced image quality and quantitative measurements were presented in this work. - Highlights: • This work is applicable for static film-based neutron radiography and digital neutron imaging. • MATLAB is a useful tool for image enhancement in radiographic film. • Advanced image processing is available in the ETRR-2 for image processing and data extraction. • The digital imaging system is suitable for complex shapes and sizes, while the MATLAB technique is suitable for simple shapes and sizes. • Quantitative measurements are available

  5. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    Science.gov (United States)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks combined into one ciphertext, but the ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are embedded into 8 random images using the Least Significant Bit (LSB) algorithm, from which the DES cipher result can later be extracted and merged back into one.

  6. Colour-coded fractional anisotropy images: differential visualisation of white-matter tracts - preliminary experience

    International Nuclear Information System (INIS)

    Murata, T.; Higano, S.; Tamura, H.; Mugikura, S.; Takahashi, S.

    2002-01-01

    Diffusion-tensor analysis allows quantitative assessment of diffusion anisotropy. Fractional anisotropy (FA) is commonly used to quantify anisotropy. One of the limitations of FA imaging is, however, that it does not contain information about the directionality of anisotropy and it is therefore difficult to identify white-matter tracts on FA images. Our purpose was to describe a simple method of making composite images containing information about both magnitude and direction of diffusion anisotropy. The composite colour-coded FA images enabled us to identify different adjacent fibre bundles of similar degrees of diffusion anisotropy, and might be helpful in assessment of these fasciculi. (orig.)

  7. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

    Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model was proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Due to the sensitivity to initial key values, system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the processes of permutation and diffusion. The order of processing pixels is not in accordance with the index of pixels, but proceeds from the beginning or the end alternately. Cipher feedback was introduced in the diffusion process. Test results and security analysis show that not only can the scheme achieve good encryption results, but its key space is also large enough to resist brute-force attacks.
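
    The piecewise linear chaotic map underlying this scheme has a simple closed form. The sketch below iterates the standard PWLCM to produce a byte keystream of the kind used for permutation and diffusion; the specific improvements of the MPWLCM model and the cipher-feedback diffusion are not reproduced, and the initial value and parameter shown are illustrative.

      import numpy as np

      def pwlcm(x, p):
          """One iteration of the standard piecewise linear chaotic map, 0 < p < 0.5."""
          if x >= 0.5:                 # the map is symmetric about 0.5
              x = 1.0 - x
          return x / p if x < p else (x - p) / (0.5 - p)

      def chaotic_bytes(x0=0.3456, p=0.2789, n=1024, burn_in=200):
          """Byte keystream from the PWLCM trajectory (x0 and p act as the key)."""
          x = x0
          for _ in range(burn_in):     # discard the transient
              x = pwlcm(x, p)
          out = np.empty(n, dtype=np.uint8)
          for i in range(n):
              x = pwlcm(x, p)
              out[i] = int(x * 256) & 0xFF
          return out

      ks = chaotic_bytes()
      print("keystream mean (should be near 127.5):", ks.mean())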

  8. Improving the Calibration of Image Sensors Based on IOFBs, Using Differential Gray-Code Space Encoding

    Directory of Open Access Journals (Sweden)

    Carlos Luna Vázquez

    2012-07-01

    Full Text Available This paper presents a fast calibration method to determine the transfer function for spatial correspondences in image transmission devices with Incoherent Optical Fiber Bundles (IOFBs), by performing a scan of the input using differential patterns generated from a Gray code (Differential Gray-Code Space Encoding, DGSE). The results demonstrate that this technique provides a noticeable reduction in processing time and better quality of the reconstructed image compared to other, previously employed techniques, such as point or fringe scanning, or even other known space encoding techniques.
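
    The Gray-code scanning idea can be made concrete in a few lines: consecutive pattern indices differ in exactly one bit, so differential patterns can be formed from successive Gray-coded bit planes. The snippet below only generates Gray-coded binary patterns for a one-dimensional scan axis; the IOFB calibration itself is not shown.

      import numpy as np

      def gray_code(n: int) -> int:
          """Binary-reflected Gray code of n: consecutive values differ in one bit."""
          return n ^ (n >> 1)

      def gray_patterns(width: int):
          """Stack of bit-plane patterns encoding each column index in Gray code,
          one pattern per bit, as would be projected when scanning the input."""
          n_bits = max(1, int(np.ceil(np.log2(width))))
          codes = np.array([gray_code(c) for c in range(width)])
          return np.array([(codes >> b) & 1 for b in range(n_bits)], dtype=np.uint8)

      patterns = gray_patterns(16)          # shape: (4 bit-planes, 16 columns)
      # adjacent columns differ in exactly one bit across all planes
      diffs = np.abs(np.diff(patterns.astype(int), axis=1)).sum(axis=0)
      assert np.all(diffs == 1)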

  9. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after the inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique and an optoelectronic set-up for encryption is also proposed.

  10. Color-coded MR imaging phase velocity mapping with the Pixar image processor

    International Nuclear Information System (INIS)

    Singleton, H.R.; Cranney, G.B.; Pohost, G.M.

    1989-01-01

    The authors have developed a graphic interaction technique in which a mouse and cursor are used to assign colors to phase-sensitive MR images of the heart. Two colors are used, one for flow in the positive direction, another for flow in the negative direction. A lookup table is generated interactively by manipulating lines representing ramps superimposed on an intensity histogram. Intensity is made to vary with flow magnitude in each color's direction. Coded series of the ascending and descending aorta, and of two- and four-chamber views of the heart, have been generated. In conjunction with movie display, flow dynamics, especially changes in direction, are readily apparent

  11. Simultaneous transmission for an encrypted image and a double random-phase encryption key

    Science.gov (United States)

    Yuan, Sheng; Zhou, Xin; Li, Da-Hai; Zhou, Ding-Fu

    2007-06-01

    We propose a method to simultaneously transmit a double random-phase encryption key and an encrypted image by making use of the fact that an acceptable decryption result can be obtained when only partial data of the encrypted image have been taken in the decryption process. First, the original image data are encoded as an encrypted image by a double random-phase encryption technique. Second, the double random-phase encryption key is encoded as an encoded key by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. Then the amplitude of the encrypted image is modulated by the encoded key to form what we call an encoded image. Finally, the encoded image that carries both the encrypted image and the encoded key is delivered to the receiver. Based on such a method, the receiver can have an acceptable result and secure transmission can be guaranteed by the RSA cipher system.

  12. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.

  13. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    Science.gov (United States)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  14. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information and have advantages such as large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR codes (Quick Response Codes), and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method for text images. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
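
    Since the abstract names Sauvola's adaptive binarization as the starting point, the snippet below gives the standard Sauvola threshold, T = m(1 + k(s/R - 1)), computed from the local mean m and local standard deviation s over a sliding window; the paper's modification of this method and the QR-specific extraction and correction steps are not reproduced, and the window size and constants are illustrative.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
          """Standard Sauvola adaptive threshold: T = m * (1 + k * (s / R - 1))."""
          img = gray.astype(float)
          mean = uniform_filter(img, size=window)
          mean_sq = uniform_filter(img ** 2, size=window)
          std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
          threshold = mean * (1.0 + k * (std / R - 1.0))
          return (img > threshold).astype(np.uint8)

      # illustrative use on a synthetic QR-like pattern with an uneven background
      rng = np.random.default_rng(7)
      modules = rng.integers(0, 2, size=(32, 32))
      qr = np.kron(modules, np.ones((8, 8))) * 255.0
      shading = np.linspace(0, 80, 256)[None, :]            # non-uniform illumination
      binary = sauvola_binarize(qr * 0.7 + shading)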

  15. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5 × 5 × 2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24 × 2.5 × 3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms are described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  16. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    Science.gov (United States)

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The ciphertext is obtained by summing the weighted encryption key. The decryption process can be realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is greatly guaranteed. The feasibility of this method and its robustness against both occlusion and additive noise attacks are discussed with simulations, respectively.
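
    A toy numerical illustration, written as an assumption from the abstract alone: each sample of a 1D data stream is weighted by an independent random binary pattern, the weighted patterns are summed into a single cipher image, and the data are recovered by correlating that cipher with the known zero-mean patterns.

      import numpy as np

      rng = np.random.default_rng(8)
      T, H, W = 20, 64, 64
      data = rng.random(T)                                   # 1D data stream to protect
      patterns = rng.integers(0, 2, size=(T, H, W)).astype(float)   # binary key patterns

      # encryption: weight each key pattern by one data sample and sum them up
      cipher = np.tensordot(data, patterns, axes=1)          # a single 2D "bucket" image

      # decryption: correlate the cipher with each zero-mean key pattern
      centered = patterns - patterns.mean(axis=(1, 2), keepdims=True)
      recovered = np.tensordot(centered, cipher, axes=([1, 2], [0, 1]))
      recovered /= (centered ** 2).sum(axis=(1, 2))          # normalise per pattern

      print("correlation with original data:", np.corrcoef(data, recovered)[0, 1])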

  17. Phase-coded multi-pulse technique for ultrasonic high-order harmonic imaging of biological tissues in vitro

    International Nuclear Information System (INIS)

    Ma Qingyu; Zhang Dong; Gong Xiufen; Ma Yong

    2007-01-01

    Second or higher order harmonic imaging shows significant improvement in image clarity but is degraded by a low signal-to-noise ratio (SNR) compared with fundamental imaging. This paper presents a phase-coded multi-pulse technique to provide the enhancement of SNR for the desired high-order harmonic ultrasonic imaging. In this technique, with N phase-coded pulses excitation, the received Nth harmonic signal is enhanced by 20 log10 N dB compared with that in the single-pulse mode, whereas the fundamental and other order harmonic components are efficiently suppressed to reduce image confusion. The principle of this technique is theoretically discussed based on the theory of finite amplitude sound waves, and examined by measurements of the axial and lateral beam profiles as well as the phase shift of the harmonics. In the experimental imaging of two biological tissue specimens, a plane piston source at 2 MHz is used to transmit a sequence of multiple pulses with equidistant phase shift. The second to fifth harmonic images are obtained using this technique with N = 2 to 5, and compared with the images obtained at the fundamental frequency. Results demonstrate that this technique of relying on higher order harmonics seems to provide a better resolution and contrast of ultrasonic images
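
    The SNR claim and the harmonic selection can be checked with elementary arithmetic: summing N echoes whose transmit phases step by 2π/N makes the Nth-harmonic contributions add coherently (a 20 log10 N dB amplitude gain, about 12 dB for N = 4) while the fundamental and the other lower harmonics cancel. The few lines below verify that cancellation numerically for an idealised echo model; they illustrate the principle only and are not the pulse sequence of the paper.

      import numpy as np

      N = 4                                   # number of phase-coded transmit pulses
      t = np.linspace(0, 1, 2048, endpoint=False)
      f0 = 8.0                                # fundamental frequency (arbitrary units)

      def echo(phase, n_harmonics=6):
          """Idealised received echo: the m-th harmonic inherits m times the transmit phase."""
          return sum((0.5 ** m) * np.cos(2 * np.pi * m * f0 * t + m * phase)
                     for m in range(1, n_harmonics + 1))

      summed = sum(echo(2 * np.pi * k / N) for k in range(N))
      spectrum = np.abs(np.fft.rfft(summed)) / len(t)
      freqs = np.fft.rfftfreq(len(t), t[1] - t[0])

      for m in range(1, 6):
          amp = spectrum[np.argmin(np.abs(freqs - m * f0))]
          print(f"harmonic {m}: summed amplitude {amp:.4f}")
      # only harmonics that are multiples of N survive, enhanced by a factor of N
      print("expected gain for the Nth harmonic:", 20 * np.log10(N), "dB")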

  18. A super-high angular resolution principle for coded-mask X-ray imaging beyond the diffraction limit of a single pinhole

    International Nuclear Information System (INIS)

    Zhang Chen; Zhang Shuangnan

    2009-01-01

    High angular resolution X-ray imaging is always useful in astrophysics and solar physics. In principle, it can be performed by using coded-mask imaging with a very long mask-detector distance. Previously, the diffraction-interference effect was thought to degrade coded-mask imaging performance dramatically at the low energy end with its very long mask-detector distance. The diffraction-interference effect is described with numerical calculations, and the diffraction-interference cross correlation reconstruction method (DICC) is developed in order to overcome the imaging performance degradation. Based on the DICC, a super-high angular resolution principle (SHARP) for coded-mask X-ray imaging is proposed. The feasibility of coded mask imaging beyond the diffraction limit of a single pinhole is demonstrated with simulations. With the specification that the mask element size is 50 × 50 μm² and the mask-detector distance is 50 m, the achieved angular resolution is 0.32 arcsec above about 10 keV and 0.36 arcsec at 1.24 keV (λ = 1 nm), where diffraction cannot be neglected. The on-axis source location accuracy is better than 0.02 arcsec. Potential applications for solar observations and wide-field X-ray monitors are also briefly discussed. (invited reviews)

  19. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Matched Filtering and Convolutional Neural Network.

    Science.gov (United States)

    Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-04-26

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.

  20. Image Encryption Technology Based on Fractional Two-Dimensional Triangle Function Combination Discrete Chaotic Map Coupled with Menezes-Vanstone Elliptic Curve Cryptosystem

    Directory of Open Access Journals (Sweden)

    Zeyu Liu

    2018-01-01

    Full Text Available A new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) with the discrete fractional difference is proposed. We observe the bifurcation behaviors and draw the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits of the proposed map, respectively. On the application side, we apply the proposed discrete fractional map to image encryption with the secret keys ciphered by the Menezes-Vanstone Elliptic Curve Cryptosystem (MVECC). Finally, the image encryption algorithm is analysed in four main aspects that indicate the proposed algorithm is better than others.

  1. Segmentation of MR images via discriminative dictionary learning and sparse coding: Application to hippocampus labeling

    OpenAIRE

    Tong, Tong; Wolz, Robin; Coupe, Pierrick; Hajnal, Joseph V.; Rueckert, Daniel

    2013-01-01

    International audience; We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labe...

  2. An X-ray imager based on silicon microstrip detector and coded mask

    International Nuclear Information System (INIS)

    Del Monte, E.; Costa, E.; Di Persio, G.; Donnarumma, I.; Evangelista, Y.; Feroci, M.; Frutti, M.; Lapshov, I.; Lazzarotto, F.; Mastropietro, M.; Morelli, E.; Pacciani, L.; Porrovecchio, G.; Rapisarda, M.; Rubini, A.; Soffitta, P.; Tavani, M.; Argan, A.

    2007-01-01

    SuperAGILE is the X-ray monitor of AGILE, a satellite mission for gamma-ray astronomy, and it is the first X-ray imaging instrument based on the technology of silicon microstrip detectors combined with a coded aperture imaging technique. The SuperAGILE detection plane is composed of four 1-D silicon microstrip detector modules, mechanically coupled to tungsten coded mask units. The detector strips are separately and individually connected to the input analogue channels of the front-end electronics, composed of low-noise and low-power-consumption VLSI ASIC chips. SuperAGILE can produce 1-D images with 6 arcmin angular resolution and ∼2-3 arcmin localisation capability, for intense sources, in a field of view composed of two orthogonal areas of 107 deg. x 68 deg. The time resolution is 2 μs, the overall dead time is ∼5 μs and the electronic noise is ∼7.5 keV full-width at half-maximum. The resulting instrument is very compact (40 x 40 x 14 cm³), light (10 kg) and has low power consumption (12 W). AGILE is a mission of the Agenzia Spaziale Italiana and its launch is planned in 2007 in a low equatorial Earth orbit. In this contribution we present SuperAGILE and discuss its performance and scientific objectives

  3. Combining multi-pulse excitation and chirp coding in contrast-enhanced ultrasound imaging

    International Nuclear Information System (INIS)

    Crocco, M; Sciallero, C; Trucco, A; Pellegretti, P

    2009-01-01

    The development of techniques to separate the response of the contrast agent from that of the biological tissues is of great importance in ultrasound medical imaging. In the literature, one can find various solutions involving the use of multiple transmitted signals and the weighted sum of related echoes. In this paper, the combination of one of these multi-pulse techniques with a coded excitation is proposed and assessed. Coded excitation has been used mainly to increase the signal-to-noise ratio (SNR) and the penetration depth, provided that a matched filtering is applied in the reception chain. However, it has been shown that a signal with a long duration time also increases the backscattered echoes produced by the microbubbles and, consequently, the contrast-to-tissue ratio. Therefore, the implementation of a multi-pulse technique using a long coded pulse can yield a better contrast-to-tissue ratio and SNR. This paper investigates the combination of the linear chirp pulse with a multi-pulse technique based on the transmission of three pulses. The performance was evaluated using both simulated and real signals, assessing the improvement in the contrast-to-tissue ratio and SNR, the visual quality of the images obtained and the axial accuracy. A comparison with the same multi-pulse technique implemented using a traditional amplitude-modulated pulse revealed that the deployment of a chirp pulse produces several noticeable advantages and only a minor drawback

  4. OCML-based colour image encryption

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Meherzi, Soumaya; Belghith, Safya

    2009-01-01

    The chaos-based cryptographic algorithms have suggested some new ways to develop efficient image-encryption schemes. While most of these schemes are based on low-dimensional chaotic maps, it has been proposed recently to use high-dimensional chaos, namely spatiotemporal chaos, which is modelled by one-way coupled-map lattices (OCML). Owing to their hyperchaotic behaviour, such systems are assumed to enhance the cryptosystem security. In this paper, we propose an OCML-based colour image encryption scheme with a stream cipher structure. We use a 192-bit-long external key to generate the initial conditions and the parameters of the OCML. We have made several tests to check the security of the proposed cryptosystem, namely statistical tests including histogram analysis and calculation of the correlation coefficients of adjacent pixels, security tests against differential attack including calculation of the number of pixel change rate (NPCR) and unified average changing intensity (UACI), and entropy calculation. The cryptosystem speed is analyzed and tested as well.
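
    The spatiotemporal keystream generator named above, a one-way coupled lattice of logistic maps, is compact enough to sketch directly; the lattice size, coupling strength and byte-extraction rule below are illustrative choices, and the 192-bit key schedule of the paper is not reproduced.

      import numpy as np

      def logistic(x):
          return 4.0 * x * (1.0 - x)

      def ocml_keystream(x0, eps=0.87, n_bytes=1024, burn_in=500):
          """One-way coupled logistic map lattice: site i is driven by site i-1 (ring).
          x0 is the initial lattice state, which would be derived from the external key."""
          x = np.array(x0, dtype=float)
          out = np.empty(n_bytes, dtype=np.uint8)
          for step in range(burn_in + n_bytes):
              fx = logistic(x)
              x = (1.0 - eps) * fx + eps * np.roll(fx, 1)   # one-way coupling
              if step >= burn_in:
                  out[step - burn_in] = int(x[step % x.size] * 256) & 0xFF
          return out

      lattice0 = (np.arange(1, 9) * 0.1137) % 1.0           # stands in for key-derived state
      ks = ocml_keystream(lattice0)
      print("keystream spread (unique byte values):", np.unique(ks).size)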

  5. A novel quantum LSB-based steganography method using the Gray code for colored quantum images

    Science.gov (United States)

    Heidari, Shahrokh; Farzadnia, Ehsan

    2017-10-01

    As one of the prevalent data-hiding techniques, steganography is defined as the act of concealing secret information imperceptibly in a cover multimedia encompassing text, image, video and audio, in order to allow interaction between the sender and the receiver in which nobody except the receiver can figure out the secret data. In this approach, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. This method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously, according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one currently found in the literature.
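
    A classical analogue may help illustrate the role of the Gray code: the sketch below (Python) shows the binary-reflected Gray code and a toy embedding of two secret bits into the 3 LSBs of an 8-bit channel value. It is only a simplified stand-in for the quantum reference-table scheme described above, and the helper names are hypothetical.

    ```python
    def to_gray(n: int) -> int:
        """Binary-reflected Gray code of n (consecutive codes differ in one bit)."""
        return n ^ (n >> 1)

    def embed_two_bits(pixel_value: int, secret: int) -> int:
        """Toy embedding: choose the 3-LSB pattern whose Gray code ends in the 2-bit secret."""
        base = pixel_value & 0b11111000
        for lsb in range(8):
            if to_gray(lsb) & 0b11 == secret:
                return base | lsb
        return pixel_value  # unreachable: every 2-bit value is covered

    def extract_two_bits(pixel_value: int) -> int:
        """Recover the 2-bit secret from the Gray code of the 3 LSBs."""
        return to_gray(pixel_value & 0b111) & 0b11
    ```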

  6. Physical-layer security analysis of PSK quantum-noise randomized cipher in optically amplified links

    Science.gov (United States)

    Jiao, Haisong; Pu, Tao; Xiang, Peng; Zheng, Jilin; Fang, Tao; Zhu, Huatao

    2017-08-01

    The quantitative security of quantum-noise randomized cipher (QNRC) in optically amplified links is analyzed from the perspective of physical-layer advantage. Establishing wire-tap channel models for both the key and the data, we derive general expressions for the secrecy capacities of the key against ciphertext-only attack and known-plaintext attack, and for the data, which serve as the basic performance metrics. Further, the maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and the data is guaranteed. Within the same framework, the secrecy capacities of various cases can be assessed and compared. The results indicate that perfect secrecy is potentially achievable for data transmission, and an elementary principle is given for setting a proper number of photons and bases to ensure the maximal data secrecy capacity. The key security, however, is only asymptotically perfect, which tends to be the main constraint on the system's maximal secrecy rate. Moreover, by adopting cascaded optical amplification, QNRC can realize long-haul transmission with a secure rate up to Gb/s, which is orders of magnitude higher than the perfect secrecy rates of other encryption systems.

  7. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context-based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context collapsing methods for multi-level images and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.

  8. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    Science.gov (United States)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain, and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.

  9. Real-time generation of images with pixel-by-pixel spectra for a coded aperture imager with high spectral resolution

    International Nuclear Information System (INIS)

    Ziock, K.P.; Burks, M.T.; Craig, W.; Fabris, L.; Hull, E.L.; Madden, N.W.

    2003-01-01

    The capabilities of a coded aperture imager are significantly enhanced when a detector with excellent energy resolution is used. We are constructing such an imager with a 1.1 cm thick, crossed-strip, planar detector which has 38 strips of 2 mm pitch in each dimension, followed by a large coaxial detector. Full value from this system is obtained only when the images are 'fully deconvolved', meaning that the energy spectrum is available from each pixel in the image. The large number of energy bins associated with the spectral resolution of the detector, and the fixed pixel size, present significant computational challenges in generating an image in a timely manner at the conclusion of a data acquisition. The long computation times currently preclude the generation of intermediate images during the acquisition itself. We have solved this problem by building the images on-line as each event comes in, using pre-imaged arrays of the system response. The generation of these arrays and the use of fractional mask-to-detector pixel sampling are discussed.
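
    The bookkeeping behind the event-by-event accumulation can be sketched roughly as follows (Python); the array shapes, the number of energy bins and the name `response` are assumptions used only to illustrate the idea of adding a pre-imaged system-response pattern per detected photon.

    ```python
    import numpy as np

    # Assumed dimensions: N_E energy bins, a 38x38 strip detector, N_IMG x N_IMG sky image.
    N_E, N_DET, N_IMG = 256, 38, 64
    response = np.zeros((N_DET * N_DET, N_IMG, N_IMG))   # pre-imaged system response per detector pixel
    image_cube = np.zeros((N_E, N_IMG, N_IMG))           # running image with a spectrum in every pixel

    def accumulate_event(det_x: int, det_y: int, energy_bin: int) -> None:
        """Add one photon event: look up the pre-computed mask-response pattern for the
        strip-detector pixel that fired and add it to the matching energy slice."""
        image_cube[energy_bin] += response[det_y * N_DET + det_x]
    ```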

  10. Imaging and image restoration of an on-axis three-mirror Cassegrain system with wavefront coding technology.

    Science.gov (United States)

    Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing

    2015-04-01

    Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system.
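
    The restoration step can be illustrated with a generic frequency-domain Wiener deconvolution sketch (Python); the point spread function and the noise-to-signal ratio used here are assumptions, not values from the study.

    ```python
    import numpy as np

    def wiener_restore(blurred: np.ndarray, psf: np.ndarray, nsr: float = 0.01) -> np.ndarray:
        """Frequency-domain Wiener deconvolution of a WFC-blurred image (minimal sketch).

        `psf` stands for the defocus-insensitive point spread function of the cubic-phase-mask
        system and `nsr` is an assumed noise-to-signal ratio.
        """
        H = np.fft.fft2(psf, s=blurred.shape)             # optical transfer function
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)           # Wiener inverse filter
        return np.real(np.fft.ifft2(W * G))
    ```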

  11. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    International Nuclear Information System (INIS)

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu; Ingleby, Harry R.

    2011-01-01

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation, since

  12. A novel algorithm for thermal image encryption.

    Science.gov (United States)

    Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen

    2018-04-16

    Thermal images play a vital role at nuclear plants, power stations, forensic laboratories, biological research facilities, and petroleum extraction sites, so their safety is very important. Image data has some unique features, such as intensity, contrast, homogeneity, entropy and correlation among pixels, which make image encryption somewhat trickier than other encryption tasks, and conventional image encryption schemes normally find these features hard to handle. Therefore, cryptographers have paid attention to some attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes built on the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as the secret key to confuse the plain image. Then, the plain image is encrypted by the method generated from the substitution boxes and the Chebyshev map. This process yields a cipher-text image that is thoroughly permuted and diffused. The outcomes of well-known experiments, key sensitivity tests and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
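
    As a rough illustration of the chaotic component, the sketch below (Python) iterates the Chebyshev map and quantizes its orbit into keystream bytes; the warm-up length and byte quantization are assumptions, and the S8 substitution-box stage of the algorithm is not reproduced.

    ```python
    import math

    def chebyshev_keystream(x0: float, k: float, length: int, warmup: int = 100) -> bytes:
        """Byte keystream from the Chebyshev map x_{n+1} = cos(k * arccos(x_n)), |x0| <= 1, k >= 2.

        Illustrative only: shows how the map's orbit can drive an image cipher.
        """
        x = x0
        out = bytearray()
        for i in range(warmup + length):
            x = math.cos(k * math.acos(x))
            if i >= warmup:
                out.append(int((x + 1.0) * 127.999))   # map [-1, 1] onto 0..255
        return bytes(out)
    ```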

  13. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics

    Czech Academy of Sciences Publication Activity Database

    Dos Santos, S.; Převorovský, Zdeněk

    2011-01-01

    Roč. 51, č. 6 (2011), s. 667-674 ISSN 0041-624X Institutional research plan: CEZ:AV0Z20760514 Keywords : TR-NEWS * chirp-coded excitation * echodentography * ultrasonic imaging Subject RIV: BI - Acoustics Impact factor: 1.838, year: 2011 http://www.sciencedirect.com/science/article/pii/S0041624X11000229

  14. Quantum Codes From Negacyclic Codes over the Group Ring (Fq + υFq)G

    International Nuclear Information System (INIS)

    Koroglu, Mehmet E.; Siap, Irfan

    2016-01-01

    In this paper, we determine self-dual and self-orthogonal codes arising from negacyclic codes over the group ring (Fq + υFq)G. By taking a suitable Gray image of these codes we obtain many quantum error-correcting codes with good parameters over Fq. (paper)

  15. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

    This brief presents a comprehensive introduction to feature coding, which serves as a key module for the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting recent developments in feature coding, covering the following five aspects: (1) Review the state-of-the-art, analyzing the motivations and mathematical representations of various feature coding methods; (2) Explore how various feature coding algorithms evolve along years; (3) Summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D

  16. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application in audio signal coding. The BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers more adequate for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
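
    For reference, the baseline two-level BTC quantizer that such modifications start from can be sketched as follows (Python); the block size and the moment-preserving level formulas shown are the classic image-coding version, not the audio-specific quantizers proposed in the paper.

    ```python
    import numpy as np

    def btc_encode_block(block: np.ndarray):
        """Classic two-level Block Truncation Coding of one block of samples."""
        mean, std = block.mean(), block.std()
        bitmap = block >= mean                  # one bit per sample
        q, m = int(bitmap.sum()), block.size
        if q in (0, m):                         # flat block: both levels collapse to the mean
            low = high = mean
        else:                                   # moment-preserving reconstruction levels
            low = mean - std * np.sqrt(q / (m - q))
            high = mean + std * np.sqrt((m - q) / q)
        return bitmap, low, high

    def btc_decode_block(bitmap: np.ndarray, low: float, high: float) -> np.ndarray:
        return np.where(bitmap, high, low)

    # Example: encode and decode a 4x4 block of samples
    # bitmap, a, b = btc_encode_block(np.arange(16, dtype=float).reshape(4, 4))
    ```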

  17. Content Layer progressive Coding of Digital Maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2002-01-01

    A new lossless context-based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bilevel coding, context collapsing methods for multilevel images and arithmetic coding. Relative pixel patterns are used to collapse contexts. Expressions for calculating the resulting number of contexts are given. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.

  18. Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.

    Science.gov (United States)

    Das, Mini; Liang, Zhihua

    2014-09-15

    Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA geometry based DPC.
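
    For context, the general paraxial transport-of-intensity equation from which such approximations are derived can be written as below (standard notation assumed; this is not the Letter's coded-aperture-specific form).

    ```latex
    % I(\mathbf{r}_\perp, z): intensity, \phi: phase, k = 2\pi/\lambda,
    % \nabla_\perp: gradient in the plane transverse to propagation
    \begin{equation}
      -k \, \frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
      = \nabla_\perp \cdot \left[ I(\mathbf{r}_\perp, z) \, \nabla_\perp \phi(\mathbf{r}_\perp, z) \right]
    \end{equation}
    ```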

  19. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Directory of Open Access Journals (Sweden)

    Mailhes Corinne

    2007-01-01

    Full Text Available Zerotrees of wavelet coefficients have shown a good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotree, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW for the compression of hyperspectral images. The subordinate pass is suppressed to remove the necessity to keep the significant pixels in memory. To compensate the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.

  20. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    Science.gov (United States)

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.

  1. Improvement of QR Code Recognition Based on Pillbox Filter Analysis

    Directory of Open Access Journals (Sweden)

    Jia-Shing Sheu

    2013-04-01

    Full Text Available The objective of this paper is to improve the recognition of blurred captured QR code images through pillbox filter analysis. QR code images can be captured by digital video cameras. Many factors contribute to QR code decoding failure, such as low image quality. Focus is an important factor that affects the quality of the image. This study discusses out-of-focus QR code images and aims to improve the recognition of their contents. Many studies have used the pillbox filter (circular averaging filter) method to simulate an out-of-focus image. This method is also used in this investigation to improve the recognition of a captured QR code image. A blurred QR code image is separated into nine blur levels. In the experiment, four different quantitative approaches are used to reconstruct and decode an out-of-focus QR code image. The nine reconstructed QR code images are then compared. The final experimental results indicate improvements in identification.
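
    A simple way to generate the circular averaging kernel used in such defocus simulations is sketched below (Python); it is a hard-edged simplification of MATLAB's fspecial('disk'), and the example radius is an assumption rather than one of the paper's nine levels.

    ```python
    import numpy as np

    def pillbox_kernel(radius: float) -> np.ndarray:
        """Circular averaging (pillbox) kernel used to simulate defocus blur."""
        r = int(np.ceil(radius))
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return disk / disk.sum()

    # Simulated out-of-focus QR image (assumed radius of 3 pixels):
    # blurred = scipy.ndimage.convolve(qr_image.astype(float), pillbox_kernel(3.0))
    ```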

  2. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the constraint of correct DSC decoding, which enables the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  3. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...

  4. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  5. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  6. Secure Image Steganography Algorithm Based on DCT with OTP Encryption

    Directory of Open Access Journals (Sweden)

    De Rosal Ignatius Moses Setiadi

    2017-04-01

    Full Text Available The rapid development of the Internet makes message transactions easier and faster. The main problem in such transactions is security, especially if the message is private and secret. These messages are usually secured with steganography or cryptography. Steganography is a way to hide messages inside other digital content, such as images, video or audio, so that they do not attract attention from the outside, while cryptography is a technique to encrypt messages so that they cannot be read directly. This paper proposes a combination of steganography using the discrete cosine transform (DCT) and cryptography using the one-time pad (Vernam cipher), implemented on a digital image. The quality of the stego image is measured with the peak signal-to-noise ratio (PSNR), and the quality of the extracted, decrypted message with the normalized cross-correlation (NCC). The proposed steganography and encryption methods give satisfactory results, with high PSNR and NCC, and are resistant to JPEG compression and median filtering. Keywords—Image Steganography, Discrete Cosine Transform (DCT), One Time Pad, Vernam, Cipher, Image Cryptography
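
    The cryptographic half of the scheme is the classical one-time pad, which can be sketched in a few lines (Python); the message and pad handling below are illustrative assumptions, and the DCT embedding step is not shown.

    ```python
    import os

    def vernam(data: bytes, pad: bytes) -> bytes:
        """One-time-pad (Vernam) XOR; applying it twice with the same pad restores the data.
        The pad must be truly random, at least as long as the message, and never reused."""
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"secret text to embed in the cover image"   # assumed payload
    pad = os.urandom(len(message))                          # single-use random key material
    ciphertext = vernam(message, pad)    # this byte string is what gets hidden via DCT
    recovered = vernam(ciphertext, pad)  # extraction side: XOR again with the shared pad
    assert recovered == message
    ```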

  7. A fast image encryption system based on chaotic maps with finite precision representation

    International Nuclear Information System (INIS)

    Kwok, H.S.; Tang, Wallace K.S.

    2007-01-01

    In this paper, a fast chaos-based image encryption system with a stream cipher structure is proposed. In order to achieve a fast throughput and facilitate hardware realization, 32-bit precision representation with fixed-point arithmetic is assumed. The major core of the encryption system is a pseudo-random keystream generator based on a cascade of chaotic maps, serving the purpose of sequence generation and random mixing. Unlike other existing chaos-based pseudo-random number generators, the proposed keystream generator not only achieves a very fast throughput, but also passes the statistical tests of an up-to-date test suite even under quantization. The overall design of the image encryption system is explained, while detailed cryptanalysis is given and compared with some existing schemes.

  8. Novel 2D-sequential color code system employing Image Sensor Communications for Optical Wireless Communications

    Directory of Open Access Journals (Sweden)

    Trang Nguyen

    2016-06-01

    Full Text Available The IEEE 802.15.7r1 Optical Wireless Communications Task Group (TG7r1), also known as the revision of the IEEE 802.15.7 Visible Light Communication standard targeting the commercial usage of visible light communication systems, is of interest in this paper. The paper is mainly concerned with the Image Sensor Communications (ISC) of TG7r1; however, the major challenge facing ISC, as addressed in the Technical Consideration Document (TCD) of TG7r1, is Image Sensor Compatibility among the variety of different commercial cameras on the market. One of the most challenging but interesting compatibility requirements is the need to support the verified presence of frame rate variation. This paper proposes a novel design for a 2D-sequential color code. Compared to a QR-code-based sequential transmission, the proposed 2D-sequential code can overcome the above challenge in that it is compatible with different frame rate variations and different shutter operations, and has the ability to mitigate the rolling effect as well as the rotating effect while effectively minimizing transmission overhead. Practical implementations are demonstrated and a performance comparison is presented.

  9. A novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism

    Science.gov (United States)

    Ye, Ruisong

    2011-10-01

    This paper proposes a novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism, in which permuting the positions of image pixels is combined with changing their gray values to confuse the relationship between cipher-image and plain-image. In the permutation process, a generalized Arnold map is utilized to generate one chaotic orbit used to get two index order sequences for the permutation of image pixel positions; in the diffusion process, a generalized Arnold map and a generalized Bernoulli shift map are employed to yield two pseudo-random gray value sequences for a two-way diffusion of gray values. The yielded gray value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed; therefore the proposed scheme can resist statistical attack, differential attack, and known-plaintext as well as chosen-plaintext attacks. Experimental results are carried out with detailed analysis to demonstrate that the proposed image encryption scheme possesses a large key space to resist brute-force attack as well.
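
    For illustration, the sketch below (Python) shows the common direct use of a generalized Arnold cat map to permute the pixel positions of a square image; note that the scheme above instead derives index-order sequences from a chaotic orbit, and its two-way diffusion stage is not reproduced here.

    ```python
    import numpy as np

    def arnold_permute(img: np.ndarray, a: int, b: int, rounds: int = 1) -> np.ndarray:
        """Position permutation of an N x N image with a generalized Arnold cat map
        (a sketch of the permutation idea only; parameters a, b act as part of the key)."""
        n = img.shape[0]
        out = img.copy()
        y, x = np.mgrid[0:n, 0:n]
        for _ in range(rounds):
            new_x = (x + a * y) % n
            new_y = (b * x + (a * b + 1) * y) % n    # unimodular map: a bijection mod n
            scrambled = np.empty_like(out)
            scrambled[new_y, new_x] = out
            out = scrambled
        return out
    ```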

  10. Quantitative emission tomography by coded aperture imaging in nuclear medicine

    International Nuclear Information System (INIS)

    Guilhem, J.B.

    1982-06-01

    Coded aperture imaging has been applied to nuclear medicine for ten years. However, no satisfactory clinical results have been obtained thus far. The reason is that the digital reconstruction methods which have been implemented, in particular those that use deconvolution filtering, are not appropriate for quantification. Indeed, these methods, which are all based on the assumption of shift invariance of the coding procedure (an assumption contradicted by the geometrical recording conditions giving the best depth resolution), do not take into account gamma-ray attenuation by tissues and in most cases give tomograms with artefacts from blurred structures. A method is proposed which does not have these limitations and considers the reconstruction problem as the ill-conditioned problem of solving a Fredholm integral equation. The main advantage of this method lies in the fact that the transmission kernel of the integral equation is obtained experimentally, and the approximate solution of this equation, close enough to the original 3-D radioactive object, can be obtained in spite of the ill-conditioned nature of the problem, by use of singular value decomposition (SVD) of the kernel [fr
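
    The regularization idea can be sketched with a truncated SVD pseudo-inverse (Python); the kernel matrix, data vector and cut-off rank below are assumptions standing in for the experimentally measured transmission kernel and the recorded coded image.

    ```python
    import numpy as np

    def truncated_svd_reconstruct(K: np.ndarray, g: np.ndarray, rank: int) -> np.ndarray:
        """Approximate solution of the discretized Fredholm equation K f = g by truncated SVD.

        K: flattened transmission kernel, g: recorded coded image, rank: number of
        singular values kept (all shapes and the cut-off are illustrative assumptions).
        """
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        s_inv = np.zeros_like(s)
        s_inv[:rank] = 1.0 / s[:rank]   # discard the small singular values that amplify noise
        return Vt.T @ (s_inv * (U.T @ g))
    ```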

  11. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
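
    At the heart of such ray-tracing simulations is the Beer-Lambert attenuation law evaluated along each source-to-pixel ray; a minimal monochromatic sketch is given below (Python), with assumed attenuation coefficients that are not values from the paper.

    ```python
    import math

    def transmitted_intensity(i0: float, mu: list, path: list) -> float:
        """Monochromatic Beer-Lambert attenuation along one ray crossing several materials.
        mu: linear attenuation coefficients (1/cm), path: intersection lengths (cm)."""
        return i0 * math.exp(-sum(m * t for m, t in zip(mu, path)))

    # Example with assumed coefficients: a ray crossing 2 cm of aluminium (mu ~ 0.5 /cm)
    # and 1 cm of steel (mu ~ 2.2 /cm):
    # i = transmitted_intensity(1.0, [0.5, 2.2], [2.0, 1.0])
    ```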

  12. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  13. A novel image encryption scheme based on the ergodicity of baker map

    Science.gov (United States)

    Ye, Ruisong; Chen, Yonghong

    2012-01-01

    Thanks to exceptionally good properties of chaotic systems, such as sensitivity to initial conditions and control parameters, pseudo-randomness and ergodicity, chaos-based image encryption algorithms have been widely studied and developed in recent years. A novel digital image encryption scheme based on the chaotic ergodicity of the Baker map is proposed in this paper. Different from traditional encryption schemes based on the Baker map, we permute the pixel positions by their corresponding order numbers derived from the approximating points in one chaotic orbit. To enhance the resistance to statistical and differential attacks, a diffusion process is suggested as well in the proposed scheme. The proposed scheme enlarges the key space significantly to resist brute-force attack. Additionally, the distribution of gray values in the cipher-image has a random-like behavior to resist statistical analysis. The proposed scheme is robust against cropping, tampering and noising attacks as well. It therefore suggests a highly secure and efficient way for real-time image encryption and transmission in practice.

  14. A study of the decoding of multiple pinhole coded aperture RI tomographic images

    International Nuclear Information System (INIS)

    Hasegawa, Takeo; Kobayashi, Akitoshi; Nishiyama, Yutaka

    1980-01-01

    The authors constructed a Multiple Pinhole Coded Aperture (MPCA) and developed related decoding software. When a simple coordinate transformation was performed, omission of points and shifting of counts occurred. By selecting various tomographic planes and collecting counts for each tomographic depth from the shadowgram, a solution to these problems was found. The counts from the central portion of the tomographic image from the MPCA were incorrectly high; this was rectified by a correction function to improve the uniformity correction program of the γ-camera. Depth resolution of the tomographic image improved in proportion to the area encompassed by the pinhole configuration. An MPCA with a uniform arrangement of pinholes (e.g., pinholes in an arrangement parallel to the X-axis or the Y-axis) yielded decoded tomographic images of inferior quality. Optimum results were obtained with a ring-shaped arrangement, yielding clinically applicable tomographic images even for large objects. (author)

  15. Design of Spreading-Codes-Assisted Active Imaging System

    Directory of Open Access Journals (Sweden)

    Alexey Volkov

    2015-07-01

    Full Text Available This work discusses an innovative approach to imaging which can improve the robustness of existing active-range measurement methods and potentially enhance their use in a variety of outdoor applications. By merging a proven modulation technique from the domain of spread-spectrum communications with the bleeding-edge CMOS sensor technology, the prototype of the modulated range sensor is designed and evaluated. A suitable set of application-specific spreading codes is proposed, evaluated and tested on the prototype. Experimental results show that the introduced modulation technique significantly reduces the impacts of environmental factors such as sunlight and external light sources, as well as mutual interference of identical devices. The proposed approach can be considered as a promising basis for a new generation of robust and cost-efficient range-sensing solutions for automotive applications, autonomous vehicles or robots.

  16. Application of wavelets to image coding in an rf-link communication system

    Science.gov (United States)

    Liou, C. S. J.; Conners, Gary H.; Muczynski, Joe

    1995-04-01

    The joint University of Rochester/Rochester Institute of Technology 'Center for Electronic Imaging Systems' (CEIS) is designed to focus on research problems of interest to industrial sponsors, especially the Rochester Imaging Consortium. Compression of tactical images for transmission over an rf link is an example of this type of research project, which is being worked on in collaboration with one of the CEIS sponsors, Harris Corporation/RF Communications. The Harris digital video imagery transmission system (DVITS) is designed to fulfill the need to transmit secure imagery between unwired locations at real-time rates. DVITS specializes in transmission systems for users who rely on hf equipment operating at the low end of the frequency spectrum. However, the inherently low bandwidth of hf combined with transmission characteristics such as fading and dropout severely restricts the effective throughput. The problem of designing a system such as DVITS is particularly challenging because of bandwidth and signal/noise limitations, and because of the dynamic nature of the operational environment. In this paper, a novel application of wavelets in tactical image coding is proposed to replace the current DCT compression algorithm in the DVITS system. The effects of channel noise on the received image are determined and various design strategies combining image segmentation, compression, and error correction are described.

  17. Coded aperture subreflector array for high resolution radar imaging

    Science.gov (United States)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer level fabrication and integration allow tiles containing 1024 elements to be manufactured with reasonable costs. This paper describes the results of an initial feasibility study for HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024 element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170 - 260 GHz range, GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data we selected GaN HEMT devices operated passively with 0V drain bias due to their extremely low DC power dissipation.

  18. A symmetric image encryption scheme based on 3D chaotic cat maps

    International Nuclear Information System (INIS)

    Chen Guanrong; Mao Yaobin; Chui, Charles K.

    2004-01-01

    Encryption of images is different from that of texts due to some intrinsic features of images such as bulk data capacity and high redundancy, which are generally difficult to handle by traditional methods. Due to the exceptionally desirable properties of mixing and sensitivity to initial conditions and parameters of chaotic maps, chaos-based encryption has suggested a new and efficient way to deal with the intractable problem of fast and highly secure image encryption. In this paper, the two-dimensional chaotic cat map is generalized to 3D for designing a real-time secure symmetric encryption scheme. This new scheme employs the 3D cat map to shuffle the positions (and, if desired, grey values as well) of image pixels and uses another chaotic map to confuse the relationship between the cipher-image and the plain-image, thereby significantly increasing the resistance to statistical and differential attacks. Thorough experimental tests are carried out with detailed analysis, demonstrating the high security and fast encryption speed of the new scheme

  19. Real-time detection of natural objects using AM-coded spectral matching imager

    Science.gov (United States)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable to monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  20. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation

  1. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to high-resolution output of high quality. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.

  2. LIDAR pulse coding for high resolution range imaging at improved refresh rate.

    Science.gov (United States)

    Kim, Gunzung; Park, Yongwan

    2016-10-17

    In this study, a light detection and ranging (LIDAR) system was designed that codes pixel location information into its laser pulses using the direct-sequence optical code division multiple access (DS-OCDMA) method in conjunction with a scanning-based microelectromechanical system (MEMS) mirror. This LIDAR can measure distance continuously, without idle listening time for the return of reflected waves, because its laser pulses include pixel location information encoded by applying DS-OCDMA. Therefore, it emits in each bearing direction without waiting for the reflected wave to return. The MEMS mirror is used to deflect and steer the coded laser pulses in the desired bearing direction. The receiver digitizes the received reflected pulses using a low-temperature-grown (LTG) indium gallium arsenide (InGaAs) based photoconductive antenna (PCA) and a time-to-digital converter (TDC) and demodulates them using DS-OCDMA. When all of the reflected waves corresponding to the pixels forming a range image are received, the proposed LIDAR generates a point cloud based on the time-of-flight (ToF) of each reflected wave. The results of simulations performed on the proposed LIDAR are compared with simulations of existing LIDARs.
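
    The despreading step at the receiver can be illustrated with a toy correlation sketch (Python); the bipolar chip representation, code length and code book below are assumptions, not the parameters of the proposed LIDAR.

    ```python
    import numpy as np

    def identify_pixel(received_chips: np.ndarray, code_book: np.ndarray) -> int:
        """Toy DS-OCDMA despreading: decide which pixel's spreading code an echo carries
        by maximum correlation. code_book[i] is pixel i's +/-1 chip sequence."""
        correlations = code_book @ received_chips    # one correlation value per pixel code
        return int(np.argmax(correlations))

    # Example with random (assumed) codes for a 4-pixel toy scan:
    # codes = np.sign(np.random.randn(4, 64))
    # echo = codes[2] + 0.3 * np.random.randn(64)
    # identify_pixel(echo, codes)   # -> 2 with high probability
    ```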

  3. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    Directory of Open Access Journals (Sweden)

    Jacquelyn A Shelton

    Full Text Available Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.

  4. An Efficient Diffusion Scheme for Chaos-Based Digital Image Encryption

    Directory of Open Access Journals (Sweden)

    Jun-xin Chen

    2014-01-01

    Full Text Available In recent years, a number of permutation-diffusion architecture-based image cryptosystems have been proposed. However, the key stream elements in the diffusion procedure depend merely on the secret key, which is usually fixed during the whole encryption process. Cryptosystems of this type suffer from unsatisfactory encryption speed and are considered insecure against known/chosen plaintext attacks. In this paper, an efficient diffusion scheme is proposed. This scheme consists of two diffusion procedures, with a supplementary diffusion procedure padded after the normal diffusion. In the supplementary diffusion module, the control parameter of the selected chaotic map is altered by the resultant image produced after the normal diffusion operation. As a result, a slight difference in the plain image is transferred to the chaotic iteration and brings about distinct key streams, and hence totally different cipher images will be produced. Therefore, the scheme can remarkably accelerate the diffusion effect of the cryptosystem and will effectively resist known/chosen plaintext attacks. Theoretical analyses and experimental results prove the high security performance and satisfactory operation efficiency of the proposed scheme.

  5. Construction of secure and fast hash functions using nonbinary error-correcting codes

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Preneel, Bart

    2002-01-01

    constructions based on block ciphers such as the Data Encryption Standard (DES), where the key size is slightly smaller than the block size; IDEA, where the key size is twice the block size; Advanced Encryption Standard (AES), with a variable key size; and to MD4-like hash functions. Under reasonable...

  6. Application of low bitrate image coding to surveillance of electric power facilities. Part 1. Proposal of low bitrate coding for surveillance of electric power facilities and examination of facilities region extraction method; Denryoku setsubi kanshi eno tei rate fugoka hoshiki no tekiyo. 1. Setsubi kanshiyo fugoka hoshiki no teian to setsubi ryoiki chushutsuho no kento

    Energy Technology Data Exchange (ETDEWEB)

    Murata, H.; Ishino, R. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1996-03-01

    The current status of low-bitrate image coding has been investigated, a low-bitrate coding method suitable for the surveillance of electric power facilities has been proposed, and its remaining problems have been identified. Conventional image coding uses waveform coding, in which images are processed as signals, while for MPEG-4 a coding method that takes image content into account has been proposed. With these coding methods, however, image details are the first thing lost when the bitrate is lowered, so they cannot be applied when details in the images are important, as in facility surveillance. A coding method is therefore proposed that extends partially detailed coding and separates operator-designated constituent images of facilities, such as power cables and steel towers. The special feature of this method is that it adapts easily to low bitrates and preserves detailed information by applying structure-extraction coding to the designated partial image, while the rest of the image is processed by low-bitrate waveform coding. 29 refs., 17 figs., 1 tab.

  7. Information safety by suppression of chaos

    Energy Technology Data Exchange (ETDEWEB)

    Loskutov, A; Churaev, A A [Physics Faculty, Moscow State University, Moscow 119899 (Russian Federation)

    2005-01-01

    A new original method of information processing and secure communications is proposed, based on the coding of alphabet symbols by stabilized cycles of certain perturbed one-dimensional dynamical systems. The foundation of the proposed method is ciphering via the one-to-one correspondence between the periods of such cycles and certain alphabet symbols. It is shown that for some maps the perturbations which lead to the stabilization of cycles of a given period form a domain in the parameter space. This fact is used for coding identical symbols via random selection of parameters from this domain, which ensures that the probability of an external observer decoding the transmitted information is zero. Analytic estimates of the admissible noise level in the communication channel and of the degree of randomness of the transmitted signals are made. Some variants of the ciphered sequences are presented.

  8. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  9. Post-Processing of Dynamic Gadolinium-Enhanced Magnetic Resonance Imaging Exams of the Liver: Explanation and Potential Clinical Applications for Color-Coded Qualitative and Quantitative Analysis

    International Nuclear Information System (INIS)

    Wang, L.; Bos, I.C. Van den; Hussain, S.M.; Pattynama, P.M.; Vogel, M.W.; Krestin, G.P.

    2008-01-01

    The purpose of this article is to explain and illustrate the current status and potential applications of automated and color-coded post-processing techniques for the analysis of dynamic multiphasic gadolinium-enhanced magnetic resonance imaging (MRI) of the liver. Post-processing of these images on dedicated workstations allows the generation of time-intensity curves (TIC) as well as color-coded images, which provide useful information on (neo)-angiogenesis within a liver lesion, if necessary combined with information on enhancement patterns of the surrounding liver parenchyma. Analysis of TIC and color-coded images, which are based on pharmacokinetic modeling, provides an easy-to-interpret schematic presentation of tumor behavior, providing additional characteristics for adequate differential diagnosis. Inclusion of TIC and color-coded images as part of the routine abdominal MRI workup protocol may help to further improve the specificity of MRI findings, but needs to be validated in clinical decision-making situations. In addition, these tools may facilitate the diagnostic workup of disease for detection, characterization, staging, and monitoring of antitumor therapy, and hold incremental value to the widely used tumor response criteria.
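
    As a rough illustration of the kind of post-processing described above (not the vendor-specific workstation software), a time-intensity curve can be extracted from a dynamic series and reduced to a simple color-coded parameter map, here using the wash-in slope as the mapped quantity; the array shapes, time points and slope definition are assumptions made only for this sketch.

    import numpy as np

    # dynamic: 4-D array (time, z, y, x) of signal intensities from the
    # gadolinium-enhanced series; here random data stands in for a real exam.
    rng = np.random.default_rng(0)
    dynamic = rng.random((6, 8, 64, 64)).astype(np.float32)
    times = np.array([0.0, 20.0, 40.0, 60.0, 120.0, 180.0])  # seconds (assumed)

    def time_intensity_curve(series, roi_mask):
        """Mean signal intensity inside a region of interest at every time point."""
        return np.array([frame[roi_mask].mean() for frame in series])

    def wash_in_slope_map(series, t):
        """Per-voxel slope of a least-squares line fit through the TIC,
        used here as a simple quantity to color-code."""
        nt = series.shape[0]
        flat = series.reshape(nt, -1)
        slope = np.polyfit(t, flat, deg=1)[0]          # first coefficient = slope
        return slope.reshape(series.shape[1:])

    roi = np.zeros(dynamic.shape[1:], dtype=bool)
    roi[4, 20:30, 20:30] = True                        # hypothetical lesion ROI
    print(time_intensity_curve(dynamic, roi))
    param_map = wash_in_slope_map(dynamic, times)      # color-code e.g. with matplotlib
    print(param_map.shape)                             # (8, 64, 64)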

  10. A study of the multiple pinhole coded aperture and the application of the minicomputer in image decoding

    International Nuclear Information System (INIS)

    Hasegawa, Takeo; Hashiba, Hiroshi; Akagi, Kiyoshi; Kobayashi, Akitoshi; Matsuda, Magoichi

    1979-01-01

    Research has been done on optically reconstructed imaging employing the Multiple Pinhole Coded Aperture (hereafter abbreviated as MPCA) in radioisotope tomographic imaging. However, problems remain in the optically reconstructed image method. Therefore, we employed a minicomputer (hereafter abbreviated as CPU) and developed the software for decoding and managing the radioisotope tomographic image. Combining the MPCA and the CPU system, we were able to decode and manage the radioisotope tomographic image. 1) In comparison to the optically decoded MPCA image, various input commands are possibly in the CPU method according to the dialogue between the CPU and the on line typewriter. In addition to this, decoded tomographic images of unrestricted depth are readily attainable. 2) In the CPU method noise elimination and other aspects of image management can be easily performed. (author)

  11. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    Science.gov (United States)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple, small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to one with the linearization approximation that is typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.

  12. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated for lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.
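
    The decoding step described above amounts to a dictionary lookup per gray-scale pixel. A minimal sketch of that idea (with an assumed 4x4 pattern size and a toy dictionary, not the JBIG2 bitstream syntax) could look like this:

    import numpy as np

    PATTERN_H, PATTERN_W = 4, 4   # assumed halftone pattern size

    def build_toy_dictionary(levels=17):
        """One 4x4 halftone pattern per gray level, filled in a fixed dither order."""
        order = np.argsort(np.random.default_rng(1).random(PATTERN_H * PATTERN_W))
        patterns = []
        for level in range(levels):
            cell = np.zeros(PATTERN_H * PATTERN_W, dtype=np.uint8)
            cell[order[:level]] = 1
            patterns.append(cell.reshape(PATTERN_H, PATTERN_W))
        return patterns

    def decode_halftone(gray_image, dictionary):
        """Place the dictionary pattern of each gray-scale pixel into the bitmap."""
        h, w = gray_image.shape
        bitmap = np.zeros((h * PATTERN_H, w * PATTERN_W), dtype=np.uint8)
        for y in range(h):
            for x in range(w):
                pattern = dictionary[gray_image[y, x]]
                bitmap[y * PATTERN_H:(y + 1) * PATTERN_H,
                       x * PATTERN_W:(x + 1) * PATTERN_W] = pattern
        return bitmap

    dictionary = build_toy_dictionary()
    gray = np.random.default_rng(2).integers(0, 17, size=(16, 16))
    print(decode_halftone(gray, dictionary).shape)   # (64, 64) reconstruction bitmap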

  13. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  14. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    Science.gov (United States)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

    Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.

  15. On the security of 3D Cat map based symmetric image encryption scheme

    International Nuclear Information System (INIS)

    Wang Kai; Pei, W.-J.; Zou, Liuhua; Song Aiguo; He Zhenya

    2005-01-01

    A 3D Cat map based symmetric image encryption algorithm, which significantly increases the resistance against statistical and differential attacks, has been proposed recently. It employs a 3D Cat map to shuffle the positions of image pixels and uses the Logistic map to diffuse the relationship between the cipher-image and the plain-image. Based on the fact that it is sufficient to break this cryptosystem with only the equivalent control parameters, some fundamental weaknesses of the cryptosystem are pointed out. With the knowledge of symbolic dynamics and some specially designed plain-images, we can calculate the equivalent initial condition of the diffusion process and rebuild a valid equivalent 3D Cat matrix. In this Letter, we propose a successful chosen-plaintext cryptanalytic attack, which is composed of two mutually independent procedures: the cryptanalysis of the diffusion process and the cryptanalysis of the spatial permutation process. Both theoretical and experimental results show that the lack of security discourages the use of these cryptosystems for practical applications.

  16. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  17. Ink-constrained halftoning with application to QR codes

    Science.gov (United States)

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary

    2014-01-01

    This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability, by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for the examples given in the paper. Numerous examples of QR codes with embedded images are included.

  18. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2010-01-01

    Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic or from scratch. The design principles of the popular Merkle-Damgård construction ...

  19. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    Science.gov (United States)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we proposed a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically for fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cells segmentation and counting were completed based on the characteristics of the number of the concave points, the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images, and the average true positive rate (TPR) is 98.13% and the average false positive rate (FPR) is 4.47%, respectively. The preliminary results showed the feasibility and efficiency of the proposed method.
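
    The relationship between chain-code differences and curvature used in the third step can be sketched as follows; the boundary here is a hand-made contour, and the concavity rule (a turn of 5, 6 or 7 modulo 8 on a counter-clockwise contour) is an assumed simplification of the authors' criterion rather than their exact implementation.

    import numpy as np

    # 8-connectivity Freeman directions: 0=E, 1=NE, 2=N, ..., 7=SE (y grows upward here)
    DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def freeman_chain_code(points):
        """Chain code of a closed boundary given as ordered (x, y) lattice points."""
        code = []
        for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
            code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
        return code

    def concave_points(points, code):
        """Flag boundary points where the turn (code difference mod 8) bends inward.
        For a counter-clockwise contour, differences of 5, 6 or 7 are concave turns."""
        concave = []
        for i in range(len(code)):
            diff = (code[i] - code[i - 1]) % 8
            if diff in (5, 6, 7):
                concave.append(points[i])
        return concave

    # Counter-clockwise contour of a square region with a notch on its right side,
    # as at the junction between two touching cells.
    contour = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (4, 1), (3, 2), (4, 3),
               (4, 4), (3, 4), (2, 4), (1, 4), (0, 4), (0, 3), (0, 2), (0, 1)]
    code = freeman_chain_code(contour)
    print(code)
    print(concave_points(contour, code))   # expect the notch point (3, 2)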

  20. Monte Carlo code for neutron radiography

    International Nuclear Information System (INIS)

    Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej

    2005-01-01

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms

  1. Monte Carlo code for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Milczarek, Jacek J. [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)]. E-mail: jjmilcz@cyf.gov.pl; Trzcinski, Andrzej [Institute for Nuclear Studies, Swierk, 05-400 Otwock (Poland); El-Ghany El Abd, Abd [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland); Nuclear Research Center, PC 13759, Cairo (Egypt); Czachor, Andrzej [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)

    2005-04-21

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms.

  2. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with particular spiral phase function, and further, the EMD is performed on the output of SPT, which results into two complex images, Z 1 and Z 2. Among these, Z 1 is further Fresnel propagated with distance d, and Z 2 is reserved as a decryption key. Afterwards, SVD is performed on Fresnel propagated output to get three decomposed matrices i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then, the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brutal force attack. Simulation results are presented in support of the proposed idea.

  3. Chaotic Image Encryption Based on Running-Key Related to Plaintext

    Directory of Open Access Journals (Sweden)

    Cao Guanghui

    2014-01-01

    Full Text Available In the field of chaotic image encryption, the algorithm based on correlating key with plaintext has become a new developing direction. However, for this kind of algorithm, some shortcomings in resistance to reconstruction attack, efficient utilization of chaotic resource, and reducing dynamical degradation of digital chaos are found. In order to solve these problems and further enhance the security of the encryption algorithm, we present a new image encryption scheme based on a disturbance and feedback mechanism. In the running-key generation stage, by successively disturbing the chaotic stream with the cipher-text, the relation of the running-key to the plaintext is established, reconstruction attack is avoided, effective use of the chaotic resource is guaranteed, and dynamical degradation of digital chaos is minimized. In the image encryption stage, by introducing a random-feedback mechanism, the difficulty of breaking this scheme is increased. Compared with state-of-the-art algorithms, our scheme exhibits good properties such as a large key space, a long key period, and extreme sensitivity to the initial key and plaintext. Therefore, it can resist brute-force, reconstruction attack, and differential attack.

  4. Chaotic image encryption based on running-key related to plaintext.

    Science.gov (United States)

    Guanghui, Cao; Kai, Hu; Yizhi, Zhang; Jun, Zhou; Xing, Zhang

    2014-01-01

    In the field of chaotic image encryption, the algorithm based on correlating key with plaintext has become a new developing direction. However, for this kind of algorithm, some shortcomings in resistance to reconstruction attack, efficient utilization of chaotic resource, and reducing dynamical degradation of digital chaos are found. In order to solve these problems and further enhance the security of the encryption algorithm, we present a new image encryption scheme based on a disturbance and feedback mechanism. In the running-key generation stage, by successively disturbing the chaotic stream with the cipher-text, the relation of the running-key to the plaintext is established, reconstruction attack is avoided, effective use of the chaotic resource is guaranteed, and dynamical degradation of digital chaos is minimized. In the image encryption stage, by introducing a random-feedback mechanism, the difficulty of breaking this scheme is increased. Compared with state-of-the-art algorithms, our scheme exhibits good properties such as a large key space, a long key period, and extreme sensitivity to the initial key and plaintext. Therefore, it can resist brute-force, reconstruction attack, and differential attack.

  5. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  6. Steganographic optical image encryption system based on reversible data hiding and double random phase encoding

    Science.gov (United States)

    Chuang, Cheng-Hung; Chen, Yen-Lin

    2013-02-01

    This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. The steganographic optical image encryption system based on the DRPE technique has been investigated to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data hiding methods in the DRPE system can distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process. Thus, the DRPE technique enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and secret data for advanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.

  7. An algorithm for the construction of substitution box for block ciphers based on projective general linear group

    Directory of Open Access Journals (Sweden)

    Anas Altaleb

    2017-03-01

    Full Text Available The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, non-linearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
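
    A rough sketch of the group-action idea is given below: a Mobius (linear fractional) map from PGL(2, GF(2^8)) is evaluated over GF(2^8), with the image of the map's pole assigned the image of the projective point at infinity so that the restriction remains a bijection, producing a candidate 8×8 S-box. The field is built with the AES polynomial x^8 + x^4 + x^3 + x + 1 and the coefficients a, b, c, d are arbitrary choices; the paper's actual modulus, coefficients and the subsequent S_256 permutation step are not reproduced here.

    def gf_mul(a, b, mod=0x11B):
        """Multiply in GF(2^8) modulo the (assumed) AES polynomial."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0x100:
                a ^= mod
            b >>= 1
        return result

    def gf_pow(a, e):
        """Square-and-multiply exponentiation in GF(2^8)."""
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, a)
            a = gf_mul(a, a)
            e >>= 1
        return r

    def gf_inv(a):
        """Multiplicative inverse via a^(2^8 - 2); only called for nonzero a below."""
        return gf_pow(a, 254)

    def mobius_sbox(a, b, c, d):
        """S-box from x -> (a*x + b) / (c*x + d); the pole d/c is sent to a/c,
        i.e. to the image of the projective point at infinity."""
        assert gf_mul(a, d) != gf_mul(b, c), "matrix must be invertible (ad != bc)"
        box = []
        for x in range(256):
            num = gf_mul(a, x) ^ b
            den = gf_mul(c, x) ^ d
            if den == 0:                      # only reachable when c != 0
                box.append(gf_mul(a, gf_inv(c)))
            else:
                box.append(gf_mul(num, gf_inv(den)))
        return box

    sbox = mobius_sbox(a=0x02, b=0x1F, c=0x03, d=0xAB)   # arbitrary illustrative coefficients
    assert sorted(sbox) == list(range(256))              # the map is a permutation of GF(2^8)
    print([hex(v) for v in sbox[:8]])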

  8. A study of the decoding of multiple pinhole coded aperture RI tomographic images

    International Nuclear Information System (INIS)

    Hasegawa, Takeo; Kobayashi, Akitoshi; Nishiyama, Yutaka; Akagi, Kiyoshi; Uehata, Hiroshi

    1981-01-01

    In order to obtain a radioisotope (RI) tomographic image, there are various methods, including the RCT method, the Time Modulate method, the Multiple Pinhole Coded Aperture (MPCA) method and others. The MPCA method has several advantages. Using the MPCA method, there is no need to move either the detector or the patient. Furthermore, the generally used γ-camera may be used without any alterations. Due to certain problems in reconstructing the tomographic image, the use of the MPCA method in clinical practice is limited to representation of small organs (e.g. the heart) using the 7-pinhole collimator. This research presents an experimental approach to overcome the problems in reconstruction of tomographic images of large organs (organs other than the heart, such as the brain, liver, lung, etc.) by introducing a reconstruction algorithm and correction software into the MPCA method. There are two main problems in MPCA image reconstruction: (1) Due to the rounding-off procedure, there is both point omission and shifting of point coordinates. (2) The central portion is characterized by high counts. Both of these problems were solved by incorporating a reconstruction algorithm and a correction function. The resultant corrected tomographic image was processed using a filter derived by subjecting a PSF to a Fourier transform. Thus, it has become possible to obtain a high-quality tomographic image of large organs for clinical use. (author)

  9. Implementation Of Secure 6LoWPAN Communications For Tactical Wireless Sensor Networks

    Science.gov (United States)

    2016-09-01

    Advanced Encryption Standard-Counter with Cipher Block Chaining-Message Authentication Code (AES-CCM) is the suggested encryption method within the 6LoWPAN standard [5]. Within the encryption method, an Initialization Vector (IV) is used ...

  10. A QR code based zero-watermarking scheme for authentication of medical images in teleradiology cloud.

    Science.gov (United States)

    Seenivasagam, V; Velumani, R

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.
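
    The share logic at the heart of zero-watermarking schemes of this kind can be sketched independently of the CT-SVD feature extraction: a binary Master Share is derived from features of the unmodified image, XOR-combined with the QR-code watermark bits to form the Secret Share that is registered with a trusted party, and later XORed with a freshly computed Master Share to recover the watermark. The block-mean feature used below is a deliberately simple stand-in for the invariant CT-SVD / Hu-moment features of the paper.

    import numpy as np

    def master_share(image, size=32):
        """Binary feature matrix from the unmodified image: compare the mean of
        each block against the global mean (stand-in for the CT-SVD features)."""
        h, w = image.shape
        bh, bw = h // size, w // size
        blocks = image[:bh * size, :bw * size].reshape(size, bh, size, bw)
        block_means = blocks.mean(axis=(1, 3))
        return (block_means >= image.mean()).astype(np.uint8)

    def generate_secret_share(image, watermark_bits):
        """Registration: the image itself is never modified."""
        return master_share(image) ^ watermark_bits

    def extract_watermark(image, secret_share):
        """Verification: recompute the master share and XOR with the secret share."""
        return master_share(image) ^ secret_share

    rng = np.random.default_rng(0)
    medical_image = rng.integers(0, 256, size=(512, 512)).astype(np.float64)
    qr_bits = rng.integers(0, 2, size=(32, 32)).astype(np.uint8)   # QR code as a 32x32 bit matrix

    secret = generate_secret_share(medical_image, qr_bits)
    recovered = extract_watermark(medical_image, secret)
    print(np.array_equal(recovered, qr_bits))   # True for an unattacked image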

  11. A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    Directory of Open Access Journals (Sweden)

    V. Seenivasagam

    2013-01-01

    Full Text Available Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.

  12. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
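
    The small-signal model mentioned above, a difference-of-Gaussian bandpass filter followed by a locally adaptive gain, can be approximated in a few lines; the filter scales and the gain neighbourhood below are assumed values, not those derived in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def dog_bandpass(image, sigma_center=1.0, sigma_surround=4.0):
        """Difference-of-Gaussian bandpass: center response minus surround response."""
        return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

    def local_gain(image, response, window=15, eps=1e-3):
        """Locally adaptive automatic gain control: normalise the bandpass response
        by the local mean irradiance, suppressing slow irradiance variation (shadows)."""
        local_mean = uniform_filter(image, size=window)
        return response / (local_mean + eps)

    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    shadow = np.linspace(1.0, 0.2, 128)[None, :]        # slowly varying irradiance
    observed = scene * shadow
    retinex_like = local_gain(observed, dog_bandpass(observed))
    print(retinex_like.shape)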

  13. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  14. Mask design and fabrication in coded aperture imaging

    International Nuclear Information System (INIS)

    Shutler, Paul M.E.; Springham, Stuart V.; Talebitaher, Alireza

    2013-01-01

    We introduce the new concept of a row-spaced mask, where a number of blank rows are interposed between every pair of adjacent rows of holes of a conventional cyclic difference set based coded mask. At the cost of a small loss in signal-to-noise ratio, this can substantially reduce the number of holes required to image extended sources, at the same time increasing mask strength uniformly across the aperture, as well as making the mask automatically self-supporting. We also show that the Finger and Prince construction can be used to wrap any cyclic difference set onto a two-dimensional mask, regardless of the number of its pixels. We use this construction to validate by means of numerical simulations not only the performance of row-spaced masks, but also the pixel padding technique introduced by in ’t Zand. Finally, we provide a computer program CDSGEN.EXE which, on a fast modern computer and for any Singer set of practical size and open fraction, generates the corresponding pattern of holes in seconds

  15. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  16. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  17. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, there has been little research carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using the 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a logical symmetric operation that yields 0 if both binary pixels are zeros or if both are ones, and 1 otherwise; this is simply (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both security and performance aspects of the proposed method are analyzed, which proves that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is to force a continuous-tone payload, a steganographic term, to map onto a balanced bit distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
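
    A stripped-down sketch of the password-keyed XOR stage is shown below. It derives a keystream by iterating SHA-256 over the password, rather than building the mask from the Fourier-domain conjugate symmetry as in the paper, and it omits the final confusion/displacement step, so it only illustrates the XOR masking idea.

    import hashlib
    import numpy as np

    def keystream(password, nbytes):
        """Expand a password into nbytes of keystream by chained SHA-256 digests."""
        out = bytearray()
        block = password.encode("utf-8")
        while len(out) < nbytes:
            block = hashlib.sha256(block).digest()
            out.extend(block)
        return np.frombuffer(bytes(out[:nbytes]), dtype=np.uint8)

    def xor_image(image, password):
        """XOR every pixel byte with the password-derived mask (its own inverse)."""
        flat = image.astype(np.uint8).ravel()
        mask = keystream(password, flat.size)
        return (flat ^ mask).reshape(image.shape)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cipher = xor_image(img, "correct horse battery staple")
    plain = xor_image(cipher, "correct horse battery staple")
    print(np.array_equal(plain, img))   # True: XOR masking is symmetric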

  18. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  19. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  20. SETI-EC: SETI Encryption Code

    Science.gov (United States)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
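
    A minimal sketch of the PBM-to-bit-string step (independent of the SETI-EC header layout and page geometry, which are not reproduced here) could read a plain-text PBM (P1) image and cut its pixels into fixed-length bit strings; the file name in the usage comment is hypothetical.

    def read_pbm_p1(path):
        """Read a plain-text (P1) portable bit map and return its pixels as a bit string."""
        with open(path) as fh:
            tokens = [t for line in fh
                      for t in line.split('#')[0].split()]   # strip comments
        assert tokens[0] == "P1", "only the plain PBM format is handled here"
        width, height = int(tokens[1]), int(tokens[2])
        bits = "".join(tokens[3:])
        assert len(bits) == width * height
        return bits

    def to_message_strings(bits, bits_per_string=359):
        """Cut the bit stream into fixed-length strings (zero-padding the last one)."""
        padded = bits + "0" * (-len(bits) % bits_per_string)
        return [padded[i:i + bits_per_string] for i in range(0, len(padded), bits_per_string)]

    # Example usage with a hypothetical file name:
    # strings = to_message_strings(read_pbm_p1("earth.pbm"))
    # print(len(strings), "strings of 359 bits")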

  1. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration

  2. Asymmetric double-image encryption method by using iterative phase retrieval algorithm in fractional Fourier transform domain

    Science.gov (United States)

    Sui, Liansheng; Lu, Haiwei; Ning, Xiaojuan; Wang, Yinghui

    2014-02-01

    A double-image encryption scheme is proposed based on an asymmetric technique, in which the encryption and decryption processes are different and the encryption keys are not identical to the decryption ones. First, a phase-only function (POF) of each plain image is retrieved by using an iterative process and then encoded into an interim matrix. Two interim matrices are directly modulated into a complex image by using the convolution operation in the fractional Fourier transform (FrFT) domain. Second, the complex image is encrypted into the gray scale ciphertext with stationary white-noise distribution by using the FrFT. In the encryption process, three random phase functions are used as encryption keys to retrieve the POFs of plain images. Simultaneously, two decryption keys are generated in the encryption process, which make the optical implementation of the decryption process convenient and efficient. The proposed encryption scheme has high robustness to various attacks, such as brute-force attack, known plaintext attack, cipher-only attack, and specific attack. Numerical simulations demonstrate the validity and security of the proposed method.

  3. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
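
    The MLEM update itself is compact. A generic sketch for a coded-aperture system matrix A (here a random stand-in for the Monte Carlo-computed projection matrix) is:

    import numpy as np

    def mlem(A, projections, n_iter=50, eps=1e-12):
        """Maximum-Likelihood Expectation-Maximization reconstruction.
        A            : (n_detector_pixels, n_voxels) system/projection matrix
        projections  : (n_detector_pixels,) measured coded-aperture data
        """
        x = np.ones(A.shape[1])               # uniform non-negative initial estimate
        sensitivity = A.sum(axis=0) + eps     # A^T 1, the per-voxel sensitivity
        for _ in range(n_iter):
            forward = A @ x + eps             # expected detector counts
            ratio = projections / forward
            x *= (A.T @ ratio) / sensitivity  # multiplicative MLEM update
        return x

    rng = np.random.default_rng(0)
    A = rng.random((256, 64))                 # stand-in for the Monte Carlo projection matrix
    true_activity = rng.random(64)
    measured = rng.poisson(A @ true_activity * 50) / 50.0
    estimate = mlem(A, measured)
    print(np.round(estimate[:8], 3))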

  4. Potential of coded excitation in medical ultrasound imaging

    DEFF Research Database (Denmark)

    Misaridis, Athanasios; Gammelmark, Kim; Jørgensen, C. H.

    2000-01-01

    Improvement in SNR and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radars or sonars. However, the time-bandwidth product (TB) improvement, and thereby the SNR improvement, is considerably lower in medical ultrasound, due ... Since the codes have a larger bandwidth than the transducer in a typical medical ultrasound system can drive, a more careful code design has been proven essential. Simulation results are also presented for comparison. This paper presents an improved non-linear FM signal appropriate for ultrasonic applications. The new ... coded waveform exhibits distinct features that make it very attractive in the implementation of coded ultrasound systems. The range resolution that can be achieved is comparable to that of a conventional system, depending on the transducer's bandwidth, and can even be better for broad-band transducers ...
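
    As a generic illustration of why coded excitation raises SNR (not the paper's specific non-linear FM design), the sketch below builds a linear FM chirp, passes it through a noisy channel with a single point scatterer, and recovers the axial response by matched filtering (pulse compression); the sampling rate, sweep and scatterer depth are assumed values.

    import numpy as np

    fs = 100e6                       # sampling frequency, 100 MHz (assumed)
    duration = 10e-6                 # 10 microsecond transmit burst
    f0, f1 = 3e6, 7e6                # sweep 3-7 MHz (assumed transducer passband)
    t = np.arange(0, duration, 1 / fs)

    # Linear FM chirp with a Hann taper to limit spectral ripples.
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2)
    chirp = np.sin(phase) * np.hanning(t.size)

    # One point scatterer echo arriving at 15 microseconds plus white noise.
    rx = np.zeros(4096)
    delay = int(15e-6 * fs)
    rx[delay:delay + chirp.size] += 0.1 * chirp
    rx += np.random.default_rng(0).normal(scale=0.05, size=rx.size)

    # Matched filtering (correlation with the transmitted code) compresses the
    # long coded pulse back to a short, high-SNR axial response.
    compressed = np.correlate(rx, chirp, mode="full")
    lag = np.argmax(np.abs(compressed)) - (chirp.size - 1)
    print(lag / fs * 1e6, "microseconds: estimated arrival time of the echo")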

  5. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    International Nuclear Information System (INIS)

    Trejos, Sorayda; Barrera, John Fredy; Torroba, Roberto

    2015-01-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome. (paper)

  6. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    Science.gov (United States)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack could be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.

  7. Sub-millimeter planar imaging with positron emitters: EGS4 code simulation and experimental results

    International Nuclear Information System (INIS)

    Bollini, D.; Del Guerra, A.; Di Domenico, G.

    1996-01-01

    Experimental data for Planar Imaging with positron emitters (pulse height, efficiency and spatial resolution), obtained with two matrices of 25 crystals (2 x 2 x 30 mm^3 each) of YAP:Ce coupled with a Position Sensitive PhotoMultiplier (Hamamatsu R2486-06), have been reproduced with high accuracy using the EGS4 code. Extensive simulation provides a detailed description of the performance of this type of detector as a function of the matrix granularity, the geometry of the detector and the detection threshold. We present the Monte Carlo simulation and the preliminary experimental results of a prototype planar imaging system made of two matrices, each one consisting of 400 crystals (2 x 2 x 30 mm^3) of YAP:Ce

  8. Compression and channel-coding algorithms for high-definition television signals

    Science.gov (United States)

    Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.

    1990-09-01

    In this paper, results of investigations about the effects of channel errors in the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously so for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming codes (7,4), (15,11) and (31,26) have been used for DCT-compressed images, and more powerful codes, such as the Golay (23,12) code, for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better quality images have been obtained than using classical hard-decoding techniques. All tests have been carried out to simulate the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.

  9. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Full Text Available Syndrome coding using linear codes is a technique that allows improvement in the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as the base for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on test images, was made. Keywords: steganography, random linear codes, RLC, LSB
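
    Syndrome coding with a random linear code can be sketched in a few lines: a parity-check matrix H of an [n, k] code hides n - k message bits per n cover LSBs by flipping a minimum-weight pattern so that the stego LSBs have syndrome equal to the message. The [8, 2] parameters below match the code size mentioned in the abstract; the systematic random H and the brute-force coset search are simplifications that are only practical for such small blocks.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    n, k = 8, 2                                  # [8, 2] code: 6 message bits per 8 cover bits
    # Systematic random parity-check matrix over GF(2); the identity block
    # guarantees that every syndrome is reachable.
    H = np.hstack([np.eye(n - k, dtype=int), rng.integers(0, 2, size=(n - k, k))])

    def embed(cover_bits, message_bits):
        """Flip a minimum-weight pattern e so that H @ (cover + e) = message (mod 2)."""
        target = (message_bits - H @ cover_bits) % 2
        best = None
        for e in product((0, 1), repeat=n):       # brute-force coset leader search
            e = np.array(e)
            if np.array_equal(H @ e % 2, target) and (best is None or e.sum() < best.sum()):
                best = e
        return (cover_bits + best) % 2

    def extract(stego_bits):
        """The receiver only needs H: the message is the syndrome of the stego LSBs."""
        return H @ stego_bits % 2

    cover = rng.integers(0, 2, size=n)            # LSBs of 8 cover-image pixels
    message = rng.integers(0, 2, size=n - k)      # 6 payload bits
    stego = embed(cover, message)
    print("changed pixels:", int((stego != cover).sum()), "of", n)
    print(np.array_equal(extract(stego), message))   # True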

  10. Image and Dose Simulation in Support of New Imaging Modalities

    International Nuclear Information System (INIS)

    Kuruvilla Verghese

    2002-01-01

    This report summarizes the highlights of the research performed under the 2-year NEER grant from the Department of Energy. The primary outcome of the work was a new Monte Carlo code, MCMIS-DS, for Monte Carlo for Mammography Image Simulation including Differential Sampling. The code was written to generate simulated images and dose distributions from two different new digital x-ray imaging modalities, namely, synchrotron imaging (SI) and a slot geometry digital mammography system called Fisher Senoscan. A differential sampling scheme was added to the code to generate multiple images that included variations in the parameters of the measurement system and the object in a single execution of the code. The code is to serve multiple purposes; (1) to answer questions regarding the contribution of scattered photons to images, (2) for use in design optimization studies, and (3) to do up to second-order perturbation studies to assess the effects of design parameter variations and/or physical parameters of the object (the breast) without having to re-run the code for each set of varied parameters. The accuracy and fidelity of the code were validated by a large variety of benchmark studies using published data and also using experimental results from mammography phantoms on both imaging modalities

  11. Orthogonal transformations for change detection, Matlab code

    DEFF Research Database (Denmark)

    2005-01-01

    Matlab code to do multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data.

  12. Progressive Coding and Presentation of Maps for Internet Applications

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren Otto

    1999-01-01

    A new lossless context based method for content progressive coding of images as maps is proposed.

  13. On video formats and coding efficiency

    NARCIS (Netherlands)

    Bellers, E.B.; Haan, de G.

    2001-01-01

    This paper examines the efficiency of MPEG-2 coding for interlaced and progressive video, and compares de-interlacing and picture rate up-conversion before and after coding. We found receiver side de-interlacing and picture rate up-conversion (i.e. after coding) to give better image quality at a

  14. Convolutional Sparse Coding for Static and Dynamic Images Analysis

    Directory of Open Access Journals (Sweden)

    B. A. Knyazev

    2014-01-01

    Full Text Available The objective of this work is to improve performance of static and dynamic object recognition. For this purpose a new image representation model and a transformation algorithm are proposed. It is examined and illustrated that limitations of previous methods make it difficult to achieve this objective. Static images, specifically handwritten digits of the widely used MNIST dataset, are the primary focus of this work. Nevertheless, preliminary qualitative results of image sequence analysis based on the suggested model are presented. A general analytical form of the Gabor function, often employed to generate filters, is described and discussed. In this research, this description is required for computing parameters of responses returned by our algorithm. The recursive convolution operator is introduced, which allows extracting free-shape features of visual objects. The developed parametric representation model is compared with sparse coding based on energy function minimization. In the experimental part of this work, errors of estimating the parameters of responses are determined. Also, parameter statistics and their correlation coefficients for more than 10^6 responses extracted from the MNIST dataset are calculated. It is demonstrated that these data correspond well with previous research studies on Gabor filters as well as with works on visual cortex primary cells of mammals, in which similar responses were observed. A comparative test of the developed model with three other approaches is conducted; speed and accuracy scores of handwritten digit classification are presented. A support vector machine with a linear or radial basis function kernel is used for classification of images and their representations, while principal component analysis is used in some cases to prepare data beforehand. High accuracy is not attained due to the specific difficulties of combining our model with a support vector machine (a 3.99% error rate). However, another method is

  15. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
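
    As a concrete illustration of the global rigid-motion part, the following Python sketch estimates a least-squares rotation and translation from 3-D feature point correspondences (the Kabsch algorithm). It is a generic solver, not the authors' estimation procedure, and the array layout (3 x N point sets) is an assumption:

        import numpy as np

        def rigid_motion(P, Q):
            # Least-squares R, t such that Q ~= R @ P + t for 3 x N point sets
            cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            H = (P - cp) @ (Q - cq).T
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = cq - R @ cp
            return R, t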

  16. Fluorogenic RNA Mango aptamers for imaging small non-coding RNAs in mammalian cells.

    Science.gov (United States)

    Autour, Alexis; C Y Jeng, Sunny; D Cawte, Adam; Abdolahzadeh, Amir; Galli, Angela; Panchapakesan, Shanker S S; Rueda, David; Ryckelynck, Michael; Unrau, Peter J

    2018-02-13

    Despite having many key roles in cellular biology, directly imaging biologically important RNAs has been hindered by a lack of fluorescent tools equivalent to the fluorescent proteins available to study cellular proteins. Ideal RNA labelling systems must preserve biological function, have photophysical properties similar to existing fluorescent proteins, and be compatible with established live and fixed cell protein labelling strategies. Here, we report a microfluidics-based selection of three new high-affinity RNA Mango fluorogenic aptamers. Two of these are as bright or brighter than enhanced GFP when bound to TO1-Biotin. Furthermore, we show that the new Mangos can accurately image the subcellular localization of three small non-coding RNAs (5S, U6, and a box C/D scaRNA) in fixed and live mammalian cells. These new aptamers have many potential applications to study RNA function and dynamics both in vitro and in mammalian cells.

  17. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  18. Fulltext PDF

    Indian Academy of Sciences (India)

    computer science. The award is named after Alan Turing, inventor of the. Turing machine, articulator of the limits of computing, and a key player in the. Allied breaking of the German Enigma cipher in World War II. The award ... Cocke in 1971, she provided a systematic categorization of code improvement transformations.

  19. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    Science.gov (United States)

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/ universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.
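
    A toy Python sketch of wavelet compression with preferential treatment of an ROI, using the PyWavelets package; the wavelet, decomposition level and thresholding rule are illustrative choices, not the codec of the application described above:

        import numpy as np
        import pywt  # PyWavelets, assumed to be installed

        def compress_with_roi(image, roi_mask, wavelet="bior4.4", level=3, keep=0.05):
            # Wavelet-decompose the image and a binary ROI mask
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            mask_arr, _ = pywt.coeffs_to_array(
                pywt.wavedec2(roi_mask.astype(float), wavelet, level=level))
            roi = np.abs(mask_arr) > 1e-6  # coefficients whose support touches the ROI
            # Keep all ROI coefficients; elsewhere keep only the largest `keep` fraction
            outside = np.abs(arr[~roi])
            thresh = np.quantile(outside, 1.0 - keep) if outside.size else 0.0
            arr = np.where(roi | (np.abs(arr) >= thresh), arr, 0.0)
            coeffs2 = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            return pywt.waverec2(coeffs2, wavelet)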

  20. Optical identity authentication technique based on compressive ghost imaging with QR code

    Science.gov (United States)

    Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang

    2018-04-01

    With the rapid development of computer technology, information security has attracted more and more attention. It is not only related to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. The scheme can be authenticated with a small number of samples. Therefore, the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.

  1. Fast decoding algorithms for coded aperture systems

    International Nuclear Information System (INIS)

    Byard, Kevin

    2014-01-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques
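
    One generic way to decode a coded-aperture image is circular cross-correlation of the detector image with a decoding array; the minimal FFT-based Python sketch below is illustrative only and is not one of the paper's fast algorithms:

        import numpy as np

        def decode(detector_image, decoding_array):
            # Circular cross-correlation via the FFT (O(N log N))
            D = np.fft.fft2(detector_image)
            G = np.fft.fft2(decoding_array)
            return np.real(np.fft.ifft2(D * np.conj(G)))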

  2. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A [Duke University Medical Center, Durham, NC, Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States)

    2015-06-15

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods don’t readily conform to clinical constraints, primarily because of prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data was collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample’s spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times whilst sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded

  3. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies subject to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results prove that appropriate coding strategies may improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
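
    The reconstruction step common to such compressive, multiplexed measurement models is sparse recovery; a minimal Python sketch of iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem follows, with an illustrative sensing matrix A and measurement vector y (the dissertation's specific systems are not reproduced):

        import numpy as np

        def ista(A, y, lam=0.1, iters=200):
            # Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - step * A.T @ (A @ x - y)      # gradient step on the data term
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
            return x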

  4. QR code optical encryption using spatially incoherent illumination

    Science.gov (United States)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness depends on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code as a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
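
    A simulation-level Python sketch of the encryption/decryption principle (convolution with a key PSF, then regularized inverse filtering); the kinoform itself, the optical setup and the regularization constant are not modelled and the function names are illustrative:

        import numpy as np

        def encrypt(image, psf):
            # Incoherent imaging through the key: ciphertext = image convolved with psf
            return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))

        def decrypt(ciphertext, psf, eps=1e-3):
            # Digital decryption by Wiener-like inverse filtering with the known key
            H = np.fft.fft2(psf, ciphertext.shape)
            return np.real(np.fft.ifft2(np.fft.fft2(ciphertext) * np.conj(H) / (np.abs(H) ** 2 + eps)))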

  5. pix2code: Generating Code from a Graphical User Interface Screenshot

    OpenAIRE

    Beltramelli, Tony

    2017-01-01

    Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% of accuracy for three different platforms (i.e. iOS, Android and web-based technologies).

  6. Color image encryption based on Coupled Nonlinear Chaotic Map

    International Nuclear Information System (INIS)

    Mazloom, Sahar; Eftekhari-Moghadam, Amir Masud

    2009-01-01

    Image encryption is somewhat different from text encryption due to some inherent features of images, such as bulk data capacity and high correlation among pixels, which are generally difficult to handle by conventional methods. The desirable cryptographic properties of chaotic maps, such as sensitivity to initial conditions and random-like behavior, have attracted the attention of cryptographers to develop new encryption algorithms. Therefore, recent research on image encryption algorithms has been increasingly based on chaotic systems, though the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. This paper proposes a Coupled Nonlinear Chaotic Map, called CNCM, and a novel chaos-based image encryption algorithm to encrypt color images by using CNCM. The chaotic cryptography technique used in this paper is symmetric-key cryptography with a stream cipher structure. In order to increase the security of the proposed algorithm, a 240-bit secret key is used to generate the initial conditions and parameters of the chaotic map by making some algebraic transformations to the key. These transformations, as well as the nonlinearity and coupling structure of the CNCM, have enhanced the cryptosystem security. For higher security and higher complexity, the current paper incorporates the image size and color components into the cryptosystem, thereby significantly increasing the resistance to known/chosen-plaintext attacks. The results of several experiments, statistical analyses and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission.
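
    A minimal Python sketch of the general idea of a chaos-driven stream cipher (a keystream derived from an iterated map is XORed with the pixel bytes); it uses a plain logistic map, not the coupled nonlinear map (CNCM) of the paper, and key handling is omitted:

        import numpy as np

        def logistic_keystream(n, x0=0.631, r=3.99):
            # Byte keystream from the iterated logistic map x <- r*x*(1-x)
            ks, x = np.empty(n, dtype=np.uint8), x0
            for i in range(n):
                x = r * x * (1.0 - x)
                ks[i] = int(x * 256) & 0xFF
            return ks

        def xor_cipher(pixels, x0=0.631):
            # pixels: uint8 image array; encryption and decryption are the same XOR operation
            flat = pixels.reshape(-1)
            return (flat ^ logistic_keystream(flat.size, x0)).reshape(pixels.shape)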

  7. SU-F-I-53: Coded Aperture Coherent Scatter Spectral Imaging of the Breast: A Monte Carlo Evaluation of Absorbed Dose

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R [Durham, NC (United States); Lakshmanan, M; Fong, G; Kapadia, A [Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States); Greenberg, J [Duke University, Durham, NC (United States)

    2016-06-15

    Purpose: Coherent scatter based imaging has shown improved contrast and molecular specificity over conventional digital mammography; however, the biological risks have not been quantified due to a lack of accurate information on absorbed dose. This study intends to characterize the dose distribution and average glandular dose from coded aperture coherent scatter spectral imaging of the breast. The dose deposited in the breast from this new diagnostic imaging modality has not yet been quantitatively evaluated. Here, various digitized anthropomorphic phantoms are tested in a Monte Carlo simulation to evaluate the absorbed dose distribution and average glandular dose using clinically feasible scan protocols. Methods: Geant4 Monte Carlo radiation transport simulation software is used to replicate the coded aperture coherent scatter spectral imaging system. Energy sensitive, photon counting detectors are used to characterize the x-ray beam spectra for various imaging protocols. These input spectra are cross-validated with the results from XSPECT, a commercially available application that yields x-ray tube specific spectra for the operating parameters employed. XSPECT is also used to determine the appropriate number of photons emitted per mAs of tube current at a given kVp tube potential. With the implementation of the XCAT digital anthropomorphic breast phantom library, a variety of breast sizes with differing anatomical structure are evaluated. Simulations were performed with and without compression of the breast for dose comparison. Results: Through the Monte Carlo evaluation of a diverse population of breast types imaged under real-world scan conditions, a clinically relevant average glandular dose for this new imaging modality is extrapolated. Conclusion: With access to the physical coherent scatter imaging system used in the simulation, the results of this Monte Carlo study may be used to directly influence the future development of the modality to keep breast dose to

  8. Genetic Recombination Between Stromal and Cancer Cells Results in Highly Malignant Cells Identified by Color-Coded Imaging in a Mouse Lymphoma Model.

    Science.gov (United States)

    Nakamura, Miki; Suetsugu, Atsushi; Hasegawa, Kousuke; Matsumoto, Takuro; Aoki, Hitomi; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Hoffman, Robert M

    2017-12-01

    The tumor microenvironment (TME) promotes tumor growth and metastasis. We previously established the color-coded EL4 lymphoma TME model with red fluorescent protein (RFP) expressing EL4 implanted in transgenic C57BL/6 green fluorescent protein (GFP) mice. Color-coded imaging of the lymphoma TME suggested an important role of stromal cells in lymphoma progression and metastasis. In the present study, we used color-coded imaging of RFP-lymphoma cells and GFP stromal cells to identify yellow-fluorescent genetically recombinant cells appearing only during metastasis. The EL4-RFP lymphoma cells were injected subcutaneously in C57BL/6-GFP transgenic mice and formed subcutaneous tumors 14 days after cell transplantation. The subcutaneous tumors were harvested and transplanted to the abdominal cavity of nude mice. Metastases to the liver, perigastric lymph node, ascites, bone marrow, and primary tumor were imaged. In addition to EL4-RFP cells and GFP-host cells, genetically recombinant yellow-fluorescent cells, were observed only in the ascites and bone marrow. These results indicate genetic exchange between the stromal and cancer cells. Possible mechanisms of genetic exchange are discussed as well as its ramifications for metastasis. J. Cell. Biochem. 118: 4216-4221, 2017. © 2017 Wiley Periodicals, Inc.

  9. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  10. Angular resolution study of a combined gamma-neutron coded aperture imager for standoff detection

    International Nuclear Information System (INIS)

    Ayaz-Maierhafer, Birsen; Hayward, Jason P.; Ziock, Klaus P.; Blackston, Matthew A.; Fabris, Lorenzo

    2013-01-01

    Nuclear threat source observables at standoff distances of tens of meters from mCi class sources include both gamma-rays and neutrons. This work uses simulations to investigate the effects of the angular resolution of a mobile gamma-ray and neutron coded aperture imaging system upon orphan source detection significance and specificity. The design requires maintaining high sensitivity and specificity while keeping the system size as compact as possible to reduce weight, footprint, and cost. A mixture of inorganic and organic scintillators was considered in the detector plane for high sensitivity to both gamma-rays and fast neutrons. For gamma-rays (100 to 2500 keV) and fission spectrum neutrons, angular resolutions of 1–9° and radiation angles of incidence appropriate for mobile search were evaluated. Detection significance for gamma-rays considers those events that contribute to the photopeak of the image pixel corresponding to the orphan source location. For detection of fission spectrum neutrons, energy depositions above a set pulse shape discrimination threshold were tallied. The results show that the expected detection significance for the system at an angular resolution of 1° is significantly lower compared to its detection significance at an angular resolution of ∼3–4°. An angular resolution of ∼3–4° is recommended both for better detection significance and improved false alarm rate, considering that finer angular resolution does not result in improved background rejection when the coded aperture method is used. Instead, over-pixelating the search space may result in an unacceptably high false alarm rate.

  11. Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.

    Science.gov (United States)

    Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-08-01

    Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm and the other subline expressing RFP in the nucleus (mCherry) linked to histone H2B and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single- or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion. Red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  12. Color-Coded Imaging of Syngeneic Orthotopic Malignant Lymphoma Interacting with Host Stromal Cells During Metastasis.

    Science.gov (United States)

    Matsumoto, Takuro; Suetsugu, Atsushi; Hasegawa, Kosuke; Nakamura, Miki; Aoki, Hitomi; Kunisada, Takahiro; Tsurumi, Hisashi; Shimizu, Masahito; Hoffman, Robert M

    2016-04-01

    The EL4 cell line was previously derived from a lymphoma induced in a C57/BL6 mouse by 9,10-dimethyl-1,2-benzanthracene. In a previous study, EL4 lymphoma cells expressing red fluorescent protein (EL4-RFP) were established and injected into the tail vein of C57/BL6 green fluorescent protein (GFP) transgenic mice. Metastasis was observed at multiple sites which were also enriched with host GFP-expressing stromal cells. In the present study, our aim was to establish an orthotopic model of EL4-RFP. To this end, EL4-RFP lymphoma cells were injected into the spleen of C57/BL6 GFP transgenic mice as an orthotopic model of lymphoma. Resultant primary tumor and metastases were imaged with the Olympus FV1000 scanning laser confocal microscope. EL4-RFP metastasis was observed 21 days later. EL4-RFP tumors in the spleen (primary injection site), liver, supra-mediastinum lymph nodes, abdominal lymph nodes, bone marrow, and lung were visualized by color-coded imaging. EL4-RFP metastases in the liver, lymph nodes, and bone marrow in C57/BL6 GFP mice were rich in GFP stromal cells such as macrophages, fibroblasts, dendritic cells, and normal lymphocytes derived from the host animal. Small tumors were observed in the spleen, which were rich in host stromal cells. In the lung, no mass formation of lymphoma cells occurred, but lymphoma cells circulated in lung peripheral blood vessels. Phagocytosis of EL4-RFP lymphoma cells by macrophages, as well as dendritic cells and fibroblasts, was observed in culture. Color-coded imaging of the lymphoma microenvironment suggests an important role of stromal cells in lymphoma progression and metastasis. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  13. Imaging systems in nuclear medicine and image evaluation

    International Nuclear Information System (INIS)

    Beck, R.; Charleston, D.; Metz, C.; Tsui, B.

    1981-01-01

    A general computer code was developed to simulate the imaging properties of existing and hypothetical imaging systems viewing realistic source distributions within non-uniform media. Such a code allows comparative evaluations of existing and hypothetical systems, and optimization of critical parameters of system design by maximizing the signal-to-noise ratio. To be most useful, such a code allows simulation of conventional scintillation scanners and cameras as well as single-photon and positron tomographic systems

  14. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  15. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    Science.gov (United States)

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  16. A New Application of Stochastic Transformation

    OpenAIRE

    Nilar Win Kyaw

    2009-01-01

    In cryptography, confusion and diffusion are very important for obtaining confidentiality and privacy of messages in block ciphers and stream ciphers. There are two types of network that provide the confusion and diffusion properties of messages in block ciphers: the Substitution-Permutation network (S-P network) and the Feistel network. NLFS (Non-Linear Feedback Stream cipher) is a fast and secure stream cipher for software applications. NLFS has two modes: a basic mode that is synchro...
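
    A minimal Python sketch of the Feistel structure mentioned above, which obtains confusion and diffusion from a keyed round function; the 16-bit round function and keys below are purely illustrative and this is not the NLFS cipher itself:

        def feistel_encrypt(block, round_keys, f):
            # block is a (left, right) pair of integers; f is the keyed round function
            left, right = block
            for k in round_keys:
                left, right = right, left ^ f(right, k)
            return right, left  # final swap makes decryption = encryption with reversed keys

        def feistel_decrypt(block, round_keys, f):
            return feistel_encrypt(block, list(reversed(round_keys)), f)

        # toy usage with a hypothetical 16-bit round function
        f = lambda half, key: ((half * 0x9E37 + key) ^ (half >> 3)) & 0xFFFF
        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
        ct = feistel_encrypt((0x1234, 0xABCD), keys, f)
        assert feistel_decrypt(ct, keys, f) == (0x1234, 0xABCD)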

  17. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise after computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
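
    The abstract does not disclose the authors' spatial filters; as a generic illustration of this kind of processing, the following Python sketch applies a denoise-then-sharpen spatial filter with SciPy (filter sizes and gains are arbitrary):

        import numpy as np
        from scipy import ndimage

        def enhance(thz_image, size=3, amount=1.5):
            # Median filtering suppresses impulsive noise; unsharp masking restores edges
            smoothed = ndimage.median_filter(thz_image, size=size)
            blurred = ndimage.gaussian_filter(smoothed, sigma=1.0)
            return smoothed + amount * (smoothed - blurred)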

  18. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  19. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Full Text Available Food image recognition is a key enabler for many smart home applications such as smart kitchen and smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights into users’ preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first proposed a superpixel based Linear Distance Coding (LDC framework where distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework has shown superior performance in both accuracy and robustness. In addition, to better model deformable food part distribution, we extend LDC’s feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared to other low-level and mid-level approaches in the literature.

  20. Understanding and applying cryptography and data security

    CERN Document Server

    Elbirt, Adam J

    2009-01-01

    Introduction A Brief History of Cryptography and Data Security Cryptography and Data Security in the Modern World Existing Texts Book Organization Symmetric-Key Cryptography Cryptosystem Overview The Modulo Operator Greatest Common Divisor The Ring Zm Homework Problems Symmetric-Key Cryptography: Substitution Ciphers Basic Cryptanalysis Shift Ciphers Affine Ciphers Homework Problems Symmetric-Key Cryptography: Stream Ciphers Random Numbers The One-Time Pad Key Stream Generators Real-World Applications Homework Problems Symmetric-Key Cryptography: Block Ciphers The Data Encryption Standard The Advance

  1. An analytical demonstration of coupling schemes between magnetohydrodynamic codes and eddy current codes

    International Nuclear Information System (INIS)

    Liu Yueqiang; Albanese, R.; Rubinacci, G.; Portone, A.; Villone, F.

    2008-01-01

    In order to model a magnetohydrodynamic (MHD) instability that strongly couples to external conducting structures (walls and/or coils) in a fusion device, it is often necessary to combine a MHD code solving for the plasma response, with an eddy current code computing the fields and currents of conductors. We present a rigorous proof of the coupling schemes between these two types of codes. One of the coupling schemes has been introduced and implemented in the CARMA code [R. Albanese, Y. Q. Liu, A. Portone, G. Rubinacci, and F. Villone, IEEE Trans. Magn. 44, 1654 (2008); A. Portone, F. Villone, Y. Q. Liu, R. Albanese, and G. Rubinacci, Plasma Phys. Controlled Fusion 50, 085004 (2008)] that couples the MHD code MARS-F[Y. Q. Liu, A. Bondeson, C. M. Fransson, B. Lennartson, and C. Breitholtz, Phys. Plasmas 7, 3681 (2000)] and the eddy current code CARIDDI[R. Albanese and G. Rubinacci, Adv. Imaging Electron Phys. 102, 1 (1998)]. While the coupling schemes are described for a general toroidal geometry, we give the analytical proof for a cylindrical plasma.

  2. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    The CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind techniques, this approach is brand new. In this method, the coded aperture processing method was used for the first time independently of the point spread function of the image diagnostic system. In this way, the technical obstacles were overcome that arise in traditional coded pinhole image processing from the uncertainty of the point spread function of the image diagnostic system. Then, based on the theoretical study, the simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In the visible light experiment, a point source of light was used to irradiate a 5mm×5mm object after diffuse scattering and volume scattering. The penumbral imaging was made with an aperture size of ~20mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  3. Cryptanalysis of the Sodark Family of Cipher Algorithms

    Science.gov (United States)

    2017-09-01

    A software project for building three-bit LUT circuit representations of S-boxes is available as a GitHub repository [40]. It contains several improvements... The ... second- and third-generation automatic link establishment (ALE) systems for high frequency radios. Radios utilizing ALE technology are in use by

  4. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding time, but improvement in encoding time is a challenge. This review paper presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block matching motion estimation approaches for finding the motion vectors in a frame based on inter-frame coding and intra-frame coding (i.e. individual frame coding), and automata theory based coding approaches to represent an image/sequence of images. Though different review papers exist related to fractal coding, this paper is different in many ways. One can develop new shape patterns for motion estimation and modify the existing block matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.

  5. High-frequency Total Focusing Method (TFM) imaging in strongly attenuating materials with the decomposition of the time reversal operator associated with orthogonal coded excitations

    Science.gov (United States)

    Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire

    2017-02-01

    In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with a high electronic noise. In order to improve the image quality, the Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.
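
    The coding/decoding part can be sketched in a few lines of Python: transmit Walsh-Hadamard weighted combinations of the array elements and recover the single-element responses by applying the inverse (transposed) Hadamard matrix. The array layout below is an assumption, and neither DORT filtering nor TFM beamforming is shown:

        import numpy as np
        from scipy.linalg import hadamard

        def decode_hadamard(coded_acquisitions):
            # coded_acquisitions: (N, samples), one row per Hadamard-coded transmission
            N = coded_acquisitions.shape[0]        # N must be a power of two
            H = hadamard(N)                        # +/-1 orthogonal coding matrix, H @ H.T = N*I
            return (H.T @ coded_acquisitions) / N  # equivalent single-element data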

  6. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  7. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment Introduction Digital Image Definitions: Theoretical Account Image Properties MATLAB Algorithmic Account MATLAB Code Image Acquisition, Types, and File I/O Image Acquisition Image Types and File I/O Basics of Color Images Other Color Spaces Algorithmic Account MATLAB Code Image Arithmetic Introduction Operator Basics Theoretical Treatment Algorithmic Treatment Coding Examples Affine and Logical Operations, Distortions, and Noise in Images Introduction Affine Operations Logical Operators Noise in Images Distortions in Images Algorithmic Account

  8. Architecture of security management unit for safe hosting of multiple agents

    Science.gov (United States)

    Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques

    1999-04-01

    In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but they have neither enough processing power nor enough memory to stand up to such applications (e.g. smart cards). This paper proposes an architecture for a secure processor, in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used for both general applications and critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require it to be modified.

  9. TOWARDS ENERGY-AWARE CODING PRACTICES FOR ANDROID

    Directory of Open Access Journals (Sweden)

    João SARAIVA

    2018-03-01

    Full Text Available This paper studies how the use of different coding practices when developing Android applications influences energy consumption. We consider two common Java/Android programming practices, namely string operations and (non-)cached image loading, and we show the energy profile of different coding practices for doing them. For string operations, we compare the performance of the standard String class to that of the StringBuilder class, while for the second practice we evaluate the benefits of image caching with asynchronous loading. We externally measure the energy consumption of the example applications using the Trepn profiler application by Qualcomm. Our preliminary results show that the selected coding practices do significantly affect energy consumption; in the particular cases of our practice selection, this difference varies between 20% and 50%.

  10. A New Chaos-Based Color Image Encryption Scheme with an Efficient Substitution Keystream Generation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2018-01-01

    Full Text Available This paper suggests a new chaos-based color image cipher with an efficient substitution keystream generation strategy. The hyperchaotic Lü system and logistic map are employed to generate the permutation and substitution keystream sequences for image data scrambling and mixing. In the permutation stage, the positions of colored subpixels in the input image are scrambled using a pixel-swapping mechanism, which avoids two main problems encountered when using the discretized version of area-preserving chaotic maps. In the substitution stage, we introduce an efficient keystream generation method that can extract three keystream elements from the current state of the iterative logistic map. Compared with the conventional method, the total number of iterations is reduced by a factor of three. To ensure the robustness of the proposed scheme against chosen-plaintext attack, the current state of the logistic map is perturbed during each iteration and the disturbance value is determined by plain-pixel values. The mechanism of associating the keystream sequence with the plain-image also helps accelerate the diffusion process and increase the degree of randomness of the keystream sequence. Experimental results demonstrate that the proposed scheme has a satisfactory level of security and outperforms the conventional schemes in terms of computational efficiency.
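
    A simplified Python sketch of a pixel-swapping permutation stage driven by a chaotic sequence; it uses a logistic map in place of the hyperchaotic Lü system and omits the substitution stage and the plaintext-dependent perturbation described above:

        import numpy as np

        def permute_pixels(img, x0=0.357, r=3.99):
            # Fisher-Yates style swaps with indices drawn from the logistic map
            flat = img.reshape(-1).copy()
            x = x0
            for i in range(flat.size - 1, 0, -1):
                x = r * x * (1.0 - x)
                j = int(x * (i + 1))
                flat[i], flat[j] = flat[j], flat[i]
            return flat.reshape(img.shape)

    Because the chaotic sequence is reproducible from the initial value, replaying the same swaps in reverse order undoes the permutation at the receiver.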

  11. DMAC-AN INTEGRATED ENCRYPTION SCHEME WITH RSA FOR AC TO OBSTRUCT INFERENCE ATTACKS

    Directory of Open Access Journals (Sweden)

    R. Jeeva

    2012-12-01

    Full Text Available The proposal of indistinguishable encryption in Randomized Arithmetic Coding (RAC) does not make the system efficient, because it does not encrypt the messages it sends. It recomputes the cipher form of every message it sends, which increases not only the computational cost but also the response time. Floating point representation in the cipher increases the difficulty on the decryption side because of loss of precision. RAC does not handle inference attacks such as the man-in-the-middle attack, third party attack, etc. In our system, Dynamic Matrix Arithmetic Coding (DMAC) uses a dynamic session matrix to encrypt the messages. The size of the matrix is deduced from the session key, which contains the IDs of the end users and thus proves server authentication. Nonce values, represented as the public keys of the opponents encrypted by the session key, are exchanged between the end users to provide mutual authentication. If an adversary tries to compromise either the server or the end users, the other system will not respond and the intrusion will be easily detected. We have increased the hacking complexity of AC by up to 99% by integrating it with RSA.

  12. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs.

    Directory of Open Access Journals (Sweden)

    Jamil Ahmad

    Full Text Available Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN pre-trained on natural images. Saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features termed as neural codes from different CNN layers are comprehensively studied to identify most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are found to be the most suitable for representing medical images. The neural codes extracted from the entire image and salient part of the image are fused to obtain the saliency-injected neural codes (SiNC descriptor which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied on the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.

  13. A novel image encryption algorithm based on a 3D chaotic map

    Science.gov (United States)

    Kanso, A.; Ghebleh, M.

    2012-07-01

    Recently, Solak et al. [Solak E, Çokal C, Yildiz OT, Biyikoǧlu T. Cryptanalysis of Fridrich's chaotic image encryption. Int J Bifur Chaos 2010;20:1405-1413] cryptanalyzed the chaotic image encryption algorithm of Fridrich [Fridrich J. Symmetric ciphers based on two-dimensional chaotic maps. Int J Bifur Chaos 1998;8(6):1259-1284], which was considered a benchmark for measuring the security of many image encryption algorithms. This attack can also be applied to other encryption algorithms that have a structure similar to Fridrich's algorithm, such as that of Chen et al. [Chen G, Mao Y, Chui C. A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos Soliton Fract 2004;21:749-761]. In this paper, we suggest a novel image encryption algorithm based on a three-dimensional (3D) chaotic map that can defeat the aforementioned attack, among other existing attacks. The design of the proposed algorithm is simple and efficient, and based on three phases which provide the necessary properties for a secure image encryption algorithm, including the confusion and diffusion properties. In phase I, the image pixels are shuffled according to a search rule based on the 3D chaotic map. In phases II and III, 3D chaotic maps are used to scramble shuffled pixels through mixing and masking rules, respectively. Simulation results show that the suggested algorithm satisfies the required performance tests such as high-level security, large key space and acceptable encryption speed. These characteristics make it a suitable candidate for use in cryptographic applications.

  14. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  15. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  16. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

    With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods; most of these hashing methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each of these projected dimensions is quantized into one bit by thresholding. The variances of different projected dimensions are different, and the real-valued projection produces more quantization error. To avoid the real-valued projection with large quantization error, in this paper we propose to use a cosine similarity projection for each dimension; the angle projection can keep the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
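
    For comparison, the common PCA-plus-thresholding baseline that such methods build on can be sketched in Python as follows (the proposed cosine/angle projection itself is not reproduced here):

        import numpy as np

        def pca_binary_codes(X, n_bits=32):
            # Project mean-centred features onto the top principal directions, then take signs
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            projected = Xc @ Vt[:n_bits].T
            return (projected > 0).astype(np.uint8)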

  17. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  18. Management of oral and maxillofacial radiological images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2002-01-01

    To implement a database system of oral and maxillofacial radiological images using a commercial medical image management software package with a personally developed classification code. The image database was built using a slightly modified commercial medical image management software package, Dr. Image v.2.1 (Bit Computer Co., Korea). The wild card '*' function was added to the search function of this program. Diagnosis classification codes were written as a number in the first three digits, and radiographic technique classification codes as a letter immediately after the diagnosis code. 449 radiological films of 218 cases from January 2000 to December 2000, which had been specially stored for demonstration and education at the Dept. of OMF Radiology of Dankook University Dental Hospital, were scanned together with each patient's information. Cases could be efficiently accessed and analyzed by using the classification code. Search and statistics results were easily obtained according to sex, age, disease diagnosis and radiographic technique. Efficient image management was possible with this image database system. Application of this system to other departments or to personal image management can be made possible by utilizing an appropriate classification code system.
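
    Assuming the scheme above (a three-digit diagnosis number followed by a technique letter), a record code can be split programmatically, for example in Python; the field names below are hypothetical:

        import re

        def parse_code(code):
            # e.g. "123A" -> three-digit diagnosis code "123", technique letter "A"
            m = re.fullmatch(r"(\d{3})([A-Za-z])", code)
            if not m:
                raise ValueError(f"unrecognized classification code: {code}")
            return {"diagnosis": m.group(1), "technique": m.group(2)}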

  19. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  20. Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors

    Science.gov (United States)

    Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.

    The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed the 13 functions that accounted for the highest percentage of processing were all FFT processing functions. They accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames each producing one resolved image are sent to cliques of several

  1. Orthogonal transformations for change detection, Matlab code (ENVI-like headers)

    DEFF Research Database (Denmark)

    2007-01-01

    Matlab code to do (iteratively reweighted) multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data; accommodates ENVI (like) header files.

  2. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    Science.gov (United States)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.

  3. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I (Fundamentals): Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III). Part II (Still Image Compression): Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding. Part III (Motion Estimation and Compensation): Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation. Part IV (Video Compression): Fundam

  4. Combinatorial Image Entropy

    DEFF Research Database (Denmark)

    Yuri, Shtarkov; Justesen, Jørn

    1997-01-01

    The concept of entropy for an image on a discrete two dimensional grid is introduced. This concept is used as an information theoretic bound on the coding rate for the image. It is proved that this quantity exists as a limit for arbitrary sets satisfying certain conditions.

  5. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  6. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.
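
    To make the block-adaptive transform idea concrete, here is a minimal Python/NumPy sketch of a block DCT coder in which small coefficients are discarded against a per-block threshold; the activity-dependent scaling of the threshold is an illustrative stand-in for the error-dependent adaptive threshold described in the record, and SciPy's dctn/idctn are assumed to be available.

      import numpy as np
      from scipy.fft import dctn, idctn   # assumes SciPy >= 1.4

      def adaptive_block_dct(image, block=8, base_threshold=10.0):
          """Illustrative block-DCT coder: transform each 8x8 block, zero the
          coefficients below a per-block threshold, and inverse transform."""
          h, w = image.shape
          out = np.zeros((h, w))
          for i in range(0, h - h % block, block):
              for j in range(0, w - w % block, block):
                  b = image[i:i + block, j:j + block].astype(float)
                  c = dctn(b, norm='ortho')
                  t = base_threshold * (1.0 + b.std() / 64.0)   # block-adaptive threshold (illustrative)
                  c[np.abs(c) < t] = 0.0                        # discard low-energy coefficients
                  out[i:i + block, j:j + block] = idctn(c, norm='ortho')
          return out

      decoded = adaptive_block_dct(np.random.randint(0, 4096, (64, 64)).astype(float))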

  7. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  8. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
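
    A toy numerical illustration of the encode/decode idea (in Python, assuming NumPy and SciPy are available) is given below: the detector record is modelled as the scene convolved with an open/closed mask, and a correlation with a mean-removed copy of the mask recovers an estimate of the scene. The random mask and the simple balanced decoder are placeholders; the actual instrument uses a purpose-designed coded-aperture pattern and a matched decoding array.

      import numpy as np
      from scipy.signal import fftconvolve

      def encode_decode(scene, mask):
          """Coded-aperture toy chain: record = scene convolved with mask, then
          correlate the record with a zero-mean decoding pattern."""
          record = fftconvolve(scene, mask, mode='same')                 # encoding (detection) step
          decoder = mask - mask.mean()                                   # simple balanced decoder
          return fftconvolve(record, decoder[::-1, ::-1], mode='same')   # correlation = flipped-kernel convolution

      rng = np.random.default_rng(0)
      scene = np.zeros((64, 64)); scene[20, 30] = 1.0                    # a single point-like "flaw"
      mask = rng.integers(0, 2, size=(15, 15)).astype(float)             # random open/closed pattern
      estimate = encode_decode(scene, mask)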

  9. Filtering, Coding, and Compression with Malvar Wavelets

    Science.gov (United States)

    1993-12-01

    speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus...and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for...recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates

  10. A 2 x 2 imaging MIMO system based on LED Visible Light Communications employing space balanced coding and integrated PIN array reception

    DEFF Research Database (Denmark)

    Li, Jiehui; Xu, Yinfan; Shi, Jianyang

    2016-01-01

    In this paper, we proposed a 2 x 2 imaging Multi-Input Multi-Output (MIMO)-Visible Light Communication (VLC) system by employing Space Balanced Coding (SBC) based on two RGB LEDs and integrated PIN array reception. We experimentally demonstrated 1.4-Gbit/s VLC transmission at a distance of 2.5 m...

  11. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.

  12. A Gray-code-based color image representation method using TSNAM

    Institute of Scientific and Technical Information of China (English)

    郑运平; 张佳婧

    2012-01-01

    Inspired by an idea obtained from the triangle and square packing problems, a new Gray-code-based color image representation method using a non-symmetry and anti-packing pattern representation model with triangle and square subpatterns (TSNAM), also called the GTSNAM representation method, is proposed to improve the representation efficiency of color images by applying the Gray code and the bit-plane decomposition method. A concrete algorithm of GTSNAM for color images is presented, and the storage structure, the total data amount, and the time and space complexities of the proposed algorithm are analyzed. Comparing the GTSNAM algorithm with the classic linear quadtree (LQT) and the latest TSNAM, which is not based on the Gray code, the theoretical and experimental results show that GTSNAM greatly reduces the number of subpatterns (or nodes) and saves storage space much more effectively than the other two representations. The GTSNAM algorithm is therefore an effective method for representing color images.
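
    The Gray-code and bit-plane decomposition step that GTSNAM builds on can be sketched in a few lines of Python/NumPy; the NAM/TSNAM packing of the resulting planes is the substantial part of the method and is not reproduced here.

      import numpy as np

      def gray_code_bit_planes(channel):
          """Convert an 8-bit image channel to its Gray-code form and split it
          into 8 bit planes (plane 0 = least significant bit)."""
          g = channel ^ (channel >> 1)                     # binary -> Gray code, per pixel
          return np.stack([(g >> b) & 1 for b in range(8)], axis=0)

      channel = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      planes = gray_code_bit_planes(channel)               # shape (8, 64, 64)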

  13. Structure-aware Local Sparse Coding for Visual Tracking

    KAUST Repository

    Qi, Yuankai

    2018-01-24

    Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode these using a dictionary consisting of patches sampled from target template images. The discriminative strength of existing methods based on local sparse coding is limited as spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show that local regions of a candidate should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate tracking drift, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.

  14. Integral cryptanalysis

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Wagner, David

    2002-01-01

    This paper considers a cryptanalytic approach called integral cryptanalysis. It can be seen as a dual to differential cryptanalysis and applies to ciphers not vulnerable to differential attacks. The method is particularly applicable to block ciphers which use bijective components only.

  15. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), using a quadratic gray-level function. A quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  16. ImageX: new and improved image explorer for astronomical images and beyond

    Science.gov (United States)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan

  17. Research on coding and decoding method for digital levels

    Energy Technology Data Exchange (ETDEWEB)

    Tu Lifen; Zhong Sidong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  18. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  19. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with the DWT and encryption with AES are each well known; the contribution here is linking these two designs to achieve secure video coding. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding for the JPEG part and the DWT for the JPEG2000 part. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption-decryption effects are achieved by the AES processor, which is used to encrypt groups of LL bands. The prominent feature of this method is encryption of the LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of the LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has a very limited adverse impact on compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.
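
    A minimal sketch of the partial-encryption idea is shown below in Python, assuming the PyWavelets (pywt) and PyCryptodome (Crypto) packages; the crude 8-bit quantization of the LL band, the fixed demo key and nonce, and the use of CTR mode are all illustrative choices rather than the codec's actual design.

      import numpy as np
      import pywt                                     # assumes PyWavelets is installed
      from Crypto.Cipher import AES                   # assumes PyCryptodome is installed

      def encrypt_ll_band(frame, key):
          """Partial encryption sketch: one-level 2-D DWT, then AES-CTR over the
          quantized LL subband only; detail subbands are left in the clear."""
          ll, details = pywt.dwt2(frame.astype(float), 'haar')
          q = np.clip(ll / 2.0, 0, 255).astype(np.uint8)            # crude 8-bit quantization of LL
          cipher = AES.new(key, AES.MODE_CTR, nonce=b'demo-nce')    # demo nonce only, never reuse
          return cipher.encrypt(q.tobytes()), details, q.shape

      frame = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
      enc_ll, details, ll_shape = encrypt_ll_band(frame, key=b'0123456789abcdef')  # AES-128 key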

  20. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture, just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows to deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded apertures approximation decreases at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB of PSNR less with respect to the phase coded aperture reconstructions.
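
    The measurement model underlying this record can be written out in a few lines of Python/NumPy: each coded diffraction pattern is the squared magnitude of the Fourier transform of the object multiplied by a coded aperture, with block-unblock (boolean) masks taking only the values 0 and 1. The masks and sizes below are illustrative, and the detour-phase approximation and the phase-retrieval solver themselves are not reproduced.

      import numpy as np

      def coded_diffraction_patterns(x, masks):
          """Forward model for coded phase retrieval: |F(mask * x)|^2 per mask."""
          return [np.abs(np.fft.fft2(mask * x)) ** 2 for mask in masks]

      rng = np.random.default_rng(1)
      x = rng.random((32, 32))                                    # unknown object (toy)
      masks = [rng.integers(0, 2, (32, 32)) for _ in range(4)]    # boolean block-unblock apertures
      patterns = coded_diffraction_patterns(x, masks)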

  1. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    Science.gov (United States)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    The traditional bar code scanner is constrained by the length of its data line, while the maximum range of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee is designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or wrong readings can result from image soiling, interference, poor imaging conditions, signal noise, or unstable system voltage; we therefore put forward a method that uses matrix evaluation and Reed-Solomon arithmetic to correct them. In order to construct the whole wireless optical bar code system and to ensure that it can transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed on top of the image acquisition system, and the circuit diagram of the wireless transmitting/receiving CC2430 module is established. By porting the embedded operating system Linux to the MCU, an applied wireless CMOS optical bar code scanner and multi-task system is constructed. Finally, communication performance is tested with the evaluation software Smart RF. In open space, every ZigBee node can achieve 50 m transmission with high reliability, and when more ZigBee nodes are added the transmission distance can reach several thousand meters.

  2. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of those methods is cryptography, which secures a file by writing it as hidden code that covers the original content; people who are not involved in the cryptography cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and by using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the TEA algorithm is used to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length is increased by eight characters.
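
    For reference, the symmetric half of this hybrid scheme, the Tiny Encryption Algorithm, is small enough to sketch in full; the Python below implements the standard 32-round TEA block encryption on a 64-bit block with a 128-bit key (the toy block and key values are illustrative), while the LUC public-key step and the file/key handling of the paper are not reproduced.

      def tea_encrypt_block(v, key, rounds=32):
          """Encrypt one 64-bit block (two 32-bit words) with TEA; `key` is a
          tuple of four 32-bit words (128 bits in total)."""
          v0, v1 = v
          k0, k1, k2, k3 = key
          delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
          for _ in range(rounds):
              s = (s + delta) & mask
              v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
              v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
          return v0, v1

      # toy usage: one plaintext block, one 128-bit key (both arbitrary demo values)
      ct = tea_encrypt_block((0x01234567, 0x89ABCDEF),
                             (0xDEADBEEF, 0x00C0FFEE, 0x0BADF00D, 0x12345678))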

  3. A novel chaotic block cryptosystem based on iterating map with output-feedback

    International Nuclear Information System (INIS)

    Yang Degang; Liao Xiaofeng; Wang Yong; Yang Huaqian; Wei Pengcheng

    2009-01-01

    A novel encryption method based on an iterated map with output feedback is presented in this paper. Rather than simply mixing the chaotic signal of the proposed chaotic cryptosystem with the ciphertext, the output feedback relates each ciphertext element to the previous ciphertext, which is obtained from the plaintext and the key. Simulated experiments are performed to substantiate that our method gives the ciphertext more confusion and diffusion, and that the proposed method is practical wherever efficiency, ciphertext length or security is a concern.
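
    The sketch below illustrates the general output-feedback idea with a plain logistic map in Python: each keystream byte is derived from the chaotic state, and the previous ciphertext byte is fed back both into the masking step and into the state. The specific map, parameters, byte extraction and feedback rule are all illustrative assumptions, not the construction analysed in the record.

      def chaotic_ofb_encrypt(plaintext, x0=0.3141, r=3.9999):
          """Toy chaotic stream cipher with output feedback over a logistic map."""
          x, prev_c, out = x0, 0, bytearray()
          for p in plaintext:
              x = r * x * (1.0 - x)                 # logistic map iteration
              k = int(x * 256) & 0xFF               # derive one keystream byte
              c = p ^ k ^ prev_c                    # mask with keystream and previous ciphertext
              out.append(c)
              prev_c = c
              x = (x + prev_c / 1021.0) % 1.0       # feed the ciphertext back into the state
          return bytes(out)

      ct = chaotic_ofb_encrypt(b"example plaintext")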

  4. Monte Carlo simulation of a coded-aperture thermal neutron camera

    International Nuclear Information System (INIS)

    Dioszegi, I.; Salwen, C.; Forman, L.

    2011-01-01

    We employed the MCNPX Monte Carlo code to simulate image formation in a coded-aperture thermal-neutron camera. The camera, developed at Brookhaven National Laboratory (BNL), consists of a 20 x 17 cm² active-area ³He-filled position-sensitive wire chamber in a cadmium enclosure box. The front of the box is a coded-aperture cadmium mask (at present with three different resolutions). We tested the detector experimentally with various arrangements of moderated point neutron sources. The purpose of the Monte Carlo modeling was to develop an easily modifiable model of the device to predict the detector's behavior with different mask patterns, and also to generate images of extended-area sources or large numbers (up to ten) of them, which is important for nonproliferation and arms-control verification but difficult to achieve experimentally. In the model, we utilized the advanced geometry capabilities of the MCNPX code to simulate the coded-aperture mask. Furthermore, the code simulated the production of thermal neutrons from fission sources surrounded by a thermalizer. With this code we also determined the thermal-neutron shadow cast by the cadmium mask; the calculations encompassed fast and epithermal neutrons penetrating into the detector through the mask. Since the process of signal production in ³He-filled position-sensitive wire chambers is well known, we omitted this part from our modeling. Simplified efficiency values were used for the three (thermal, epithermal, and fast) neutron-energy regions. Electronic noise and the room's background were included as a uniform irradiation component. We processed the experimental and simulated images using identical LabVIEW virtual instruments. (author)

  5. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2016-03-01

    Full Text Available High-speed photography is an important tool for studying rapid physical phenomena. However, a low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) camera cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can increase the temporal resolution several, or even hundreds of, times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.

  6. A panoramic coded aperture gamma camera for radioactive hotspots localization

    Science.gov (United States)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm² sensitive area). A MURA pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. Such a system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype, and subsequently applying the same transformations to their corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.

  7. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  8. Reduction and coding of synthetic aperture radar data with Fourier transforms

    Science.gov (United States)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier Transforms for ocean image synthesis and surface wave analysis have been implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The object was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground where Fourier Transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High resolution images of the ocean often contain no striking features and subtle image modulations by wind generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.

  9. APPC - A new standardised coding system for trans-organisational PACS retrieval

    International Nuclear Information System (INIS)

    Fruehwald, F.; Lindner, A.; Mostbeck, G.; Hruby, W.; Fruehwald-Pallamar, J.

    2010-01-01

    As part of a general strategy to integrate the health care enterprise, Austria plans to connect the Picture Archiving and Communication Systems (PACS) of all radiological institutions into a nationwide network. To facilitate the search for relevant correlative imaging data in the PACS of different organisations, a coding system was compiled for all radiological procedures and necessary anatomical details. This code, called the Austrian PACS Procedure Code (APPC), was granted the status of a standard under HL7. Examples are provided of effective coding and filtering when searching for relevant imaging material using the APPC, as well as the planned process for future adjustments of the APPC. The implementation and how the APPC will fit into the future electronic environment, which will include an electronic health act for all citizens in Austria, are discussed. A comparison to other nationwide electronic health record projects and coding systems is given. Limitations and possible use in physical storage media are contemplated. (orig.)

  10. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  11. Spatially modulated imaging system

    International Nuclear Information System (INIS)

    Barrett, H.H.

    1975-01-01

    Noncoherent radiation, such as x-rays, is spatially coded, directed through an object and spatially detected to form a spatially coded pattern, from which an image of the object may be reconstructed. The x-ray source may be formed by x-ray fluorescence, and subtraction of the holographic images formed by two sources having energy levels predominantly above and below the maximum absorption range of an agent in the object may be used to enhance contrast in the reproduced image. (Patent Office Record)

  12. Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method

    Directory of Open Access Journals (Sweden)

    Kuang Tsan Lin

    2014-01-01

    Full Text Available The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method that hides a covert digital image into a dot-matrix holographic image. First, the RSA encryption method is used to transform the covert image into an RSA encryption data string. Then, all the elements of the RSA encryption data string are converted into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matrix holographic image contain seven groups of codes used for reconstructing the covert image. The seven groups of codes are identification codes, covert-image dimension codes, covert-image graylevel codes, pre-RSA bit number codes, RSA key codes, post-RSA bit number codes, and information codes. The reconstructed covert image derived from the dot-matrix holographic image and the original covert image are exactly the same.
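
    The two stages named in the title can be illustrated with textbook RSA on a deliberately tiny key, followed by binary expansion of the resulting integers; the modulus n = 3233 (61 x 53), public exponent e = 17, byte-wise encryption and 12-bit packing below are all toy assumptions for illustration only, and the seven-group code layout of the actual scheme is not reproduced.

      def rsa_then_binary(message, n=3233, e=17):
          """Textbook RSA on each byte (insecure toy key), then expansion of the
          resulting integers into a bit string ready for dot-matrix encoding."""
          cipher_ints = [pow(b, e, n) for b in message]            # c = m^e mod n, byte by byte
          bits = ''.join(format(c, '012b') for c in cipher_ints)   # 12 bits per value since n < 4096
          return cipher_ints, bits

      ints, bits = rsa_then_binary(b"covert")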

  13. Large-Scale Analysis of Remote Code Injection Attacks in Android Apps

    OpenAIRE

    Choi, Hyunwoo; Kim, Yongdae

    2018-01-01

    It is pretty well known that insecure code updating procedures for Android allow remote code injection attack. However, other than codes, there are many resources in Android that have to be updated, such as temporary files, images, databases, and configurations (XML and JSON). Security of update procedures for these resources is largely unknown. This paper investigates general conditions for remote code injection attacks on these resources. Using this, we design and implement a static detecti...

  14. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    Science.gov (United States)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
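
    The vector-quantization step described above amounts to learning a codebook over multi-channel pixel vectors; a plain k-means sketch in Python/NumPy is given below, with the number of codewords, channel count and data purely illustrative, and the Difference-mapped Shift-extended Huffman stage not reproduced.

      import numpy as np

      def train_vq_codebook(vectors, k=64, iters=20, seed=0):
          """Plain k-means codebook training: each row of `vectors` is one
          multispectral pixel (all channels at one image location)."""
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
          for _ in range(iters):
              d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
              labels = d.argmin(axis=1)                       # nearest codeword per pixel vector
              for j in range(k):
                  members = vectors[labels == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)      # move codeword to cluster centroid
          return codebook, labels

      pixels = np.random.rand(5000, 7)                        # toy 7-channel pixel vectors
      codebook, labels = train_vq_codebook(pixels)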

  15. A Degree Distribution Optimization Algorithm for Image Transmission

    Science.gov (United States)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    The Luby Transform (LT) code is the first practical implementation of a digital fountain code. The coding behavior of an LT code is mainly decided by its degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced; then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder, so the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm obviously improves the final quality of the recovered images at the same overhead.
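
    For context, the robust soliton distribution that serves as the baseline in this record can be computed directly from Luby's formulas; the Python/NumPy sketch below returns the probability of each degree 1..k, with the parameter values c = 0.1 and delta = 0.5 chosen only for illustration.

      import numpy as np

      def robust_soliton(k, c=0.1, delta=0.5):
          """Robust soliton degree distribution: ideal soliton rho plus the
          spike/tail term tau, normalized to a probability vector over 1..k."""
          R = c * np.log(k / delta) * np.sqrt(k)
          rho = np.zeros(k + 1)
          rho[1] = 1.0 / k
          for i in range(2, k + 1):
              rho[i] = 1.0 / (i * (i - 1))
          tau = np.zeros(k + 1)
          spike = int(round(k / R))
          for i in range(1, min(spike, k + 1)):
              tau[i] = R / (i * k)
          if 1 <= spike <= k:
              tau[spike] = R * np.log(R / delta) / k
          mu = rho + tau
          return mu[1:] / mu.sum()              # index d-1 holds the probability of degree d

      p = robust_soliton(k=1000)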

  16. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  17. Analysis of the gammaholographic image formation

    International Nuclear Information System (INIS)

    Fonroget, J.; Roucayrol, J.C.; Perrin, J.; Belvaux, Y.; Paris-11 Univ., 91 - Orsay

    1975-01-01

    Gammaholography, or coded-aperture gammagraphy, is a new gammagraphic method in which the standard collimators are replaced by one or more modulator screens placed between the detector and the radioactive object. The recording obtained is a coded image, or incoherent hologram, which contains three-dimensional information about the object and can be decoded analogically in a very short time. The formation of the image has been analyzed in the coding and optical decoding phases for the case of a single coding screen modulated according to a Fresnel zone lattice. The analytical expression established for the modulation transfer function (MTF) of the system can be used to study, by computerized simulation, the influence of the number of zones on the quality of the image [fr

  18. A Hybrid DWT-SVD Image-Coding System (HDWTSVD) for Color Images

    Directory of Open Access Journals (Sweden)

    Humberto Ochoa

    2003-04-01

    Full Text Available In this paper, we propose the HDWTSVD system to encode color images. Before encoding, the color components (RGB are transformed into YCbCr. Cb and Cr components are downsampled by a factor of two, both horizontally and vertically, before sending them through the encoder. A criterion based on the average standard deviation of 8x8 subblocks of the Y component is used to choose DWT or SVD for all the components. Standard test images are compressed based on the proposed algorithm.

  19. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    Science.gov (United States)

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.

  20. Speckle Reduction for Ultrasonic Imaging Using Frequency Compounding and Despeckling Filters along with Coded Excitation and Pulse Compression

    Directory of Open Access Journals (Sweden)

    Joshua S. Ullom

    2012-01-01

    Full Text Available A method for improving the contrast-to-noise ratio (CNR) while maintaining the −6 dB axial resolution of ultrasonic B-mode images is proposed. The technique proposed is known as eREC-FC, which enhances a recently developed REC-FC technique. REC-FC is a combination of the coded excitation technique known as resolution enhancement compression (REC) and the speckle-reduction technique frequency compounding (FC). In REC-FC, image CNR is improved but at the expense of a reduction in axial resolution. However, by compounding various REC-FC images made from various subband widths, the tradeoff between axial resolution and CNR enhancement can be extended. Further improvements in CNR can be obtained by applying postprocessing despeckling filters to the eREC-FC B-mode images. The despeckling filters evaluated were the following: median, Lee, homogeneous mask area, geometric, and speckle-reducing anisotropic diffusion (SRAD). Simulations and experimental measurements were conducted with a single-element transducer (f/2.66) having a center frequency of 2.25 MHz and a −3 dB bandwidth of 50%. In simulations and experiments, the eREC-FC technique resulted in the same axial resolution that would be typically observed with conventional excitation with a pulse. Moreover, increases in CNR of 348% were obtained in experiments when comparing eREC-FC with a Lee filter to conventional pulsing methods.

  1. Color encryption scheme based on adapted quantum logistic map

    Science.gov (United States)

    Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.

    2014-04-01

    This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of a quantum chaotic logistic map. Then, each pixel is encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme has adequate security for the confidentiality of color images.

  2. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  3. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  4. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  5. Imaging with rotating slit apertures and rotating collimators

    International Nuclear Information System (INIS)

    Gindi, G.R.; Arendt, J.; Barrett, H.H.; Chiu, M.Y.; Ervin, A.; Giles, C.L.; Kujoory, M.A.; Miller, E.L.; Simpson, R.G.

    1982-01-01

    The statistical quality of conventional nuclear medical imagery is limited by the small signal collected through low-efficiency conventional apertures. Coded-aperture imaging overcomes this by employing a two-step process in which the object is first efficiently detected as an encoded form which does not resemble the object, and then filtered (or decoded) to form an image. We present here the imaging properties of a class of time-modulated coded apertures which, unlike most coded apertures, encode projections of the object rather than the object itself. These coded apertures can reconstruct a volume object nontomographically, tomographically (one plane focused), or three-dimensionally. We describe a new decoding algorithm that reconstructs the object from its planar projections. Results of noise calculations are given, and the noise performance of these coded-aperture systems is compared to that of conventional counterparts. A hybrid slit-pinhole system which combines the imaging advantages of a rotating slit and a pinhole is described. A new scintillation detector which accurately measures the position of an event in one dimension only is presented, and its use in our coded-aperture system is outlined. Finally, results of imaging test objects and animals are given

  7. Comparison of TITAN hybrid deterministic transport code and MCNP5 for simulation of SPECT

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    Traditionally, Single Photon Emission Computed Tomography (SPECT) simulations use Monte Carlo methods. The hybrid deterministic transport code TITAN has recently been applied to the simulation of a SPECT myocardial perfusion study. The TITAN SPECT simulation uses the discrete ordinates formulation in the phantom region and a simplified ray-tracing formulation outside of the phantom. A SPECT model has been created in the Monte Carlo N-Particle (MCNP5) code for comparison. In MCNP5 the collimator is modeled directly, whereas TITAN simulates the effect of collimator blur using a circular ordinate splitting technique. Projection images created using the TITAN code are compared to results using MCNP5 for three collimator acceptance angles. Normalized projection images for 2.97 deg, 1.42 deg and 0.98 deg collimator acceptance angles had maximum relative differences of 21.3%, 11.9% and 8.3%, respectively; visually the images are in good agreement. Profiles through the projection images show that the TITAN results follow the shape of the MCNP5 results with some differences in magnitude. A timing comparison on 16 processors found that the TITAN code completed the calculation 382 to 2787 times faster than MCNP5. Both codes exhibit good parallel performance. (author)
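    The figure of merit quoted in the record (maximum relative difference between normalized projection images) can be computed along the following lines; the normalization convention and the masking of near-zero reference pixels are assumptions, and the arrays here are synthetic placeholders.

```python
import numpy as np

def max_relative_difference(proj_a, proj_b, eps=1e-12):
    """Normalize two projection images to unit maximum and return the maximum
    relative difference, restricted to pixels above 1% of the reference maximum
    (the exact normalization and masking conventions are assumptions)."""
    a = proj_a / (proj_a.max() + eps)
    b = proj_b / (proj_b.max() + eps)
    mask = b > 0.01 * b.max()
    rel = np.abs(a[mask] - b[mask]) / b[mask]
    return rel.max()

# Hypothetical example with synthetic projection data:
rng = np.random.default_rng(1)
mcnp5 = rng.random((64, 64))
titan = mcnp5 * (1.0 + 0.05 * rng.standard_normal((64, 64)))
print(f"max relative difference: {100 * max_relative_difference(titan, mcnp5):.1f}%")
```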

  8. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  9. Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments

    Directory of Open Access Journals (Sweden)

    Omar Lopez-Rincon

    2017-01-01

    Quick Response (QR) barcode detection in uncontrolled environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance for rotated and distorted single or multiple symbols in images with variable illumination and presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists in recognizing geometrical features of the QR code using a binary large object (BLOB)-based algorithm with subsequent iterative filtering of QR symbol position detection patterns, which does not require the complex processing and classifier training frequently used for these purposes. High precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformations, and noise, the proposed technique provides a high recognition rate of 80%–100% at a speed compatible with real-time applications. In particular, processing time varies from 200 ms to 800 ms per image for single or multiple QR codes detected simultaneously, for resolutions from 640 × 480 to 4080 × 2720, respectively.
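    The record credits its speed to adaptive threshold binarization based on integral images. The sketch below shows that standard technique (in the spirit of Bradley-Roth thresholding); the window size and sensitivity are assumed values, and this is not the authors' full BLOB pipeline.

```python
import numpy as np

def adaptive_threshold_integral(gray, window=25, t=0.15):
    """Binarize a grayscale image using local means obtained from an integral
    image: a pixel is kept (1) if it is at least (1 - t) times the mean of its
    surrounding window, else set to 0. Window size and t are assumed values."""
    g = gray.astype(np.float64)
    h, w = g.shape
    # Integral image padded with a zero row/column so window sums are easy.
    ii = np.pad(g, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)

    # Window sums via inclusion-exclusion on the integral image.
    sums = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    counts = (y1 - y0) * (x1 - x0)
    return (g >= (sums / counts) * (1.0 - t)).astype(np.uint8)

# Hypothetical usage on a synthetic, unevenly lit image:
rng = np.random.default_rng(0)
img = rng.random((120, 160)) * np.linspace(0.2, 1.0, 160)[None, :]
binary = adaptive_threshold_integral(img)
```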

  10. DISCURSIVE ACTUALIZATION OF ETHNO-LINGUOCULTURAL CODE IN ENGLISH GLUTTONY

    Directory of Open Access Journals (Sweden)

    Nikishkova Mariya Sergeevna

    2014-11-01

    The article presents an overview of linguistic research on the gastronomic (gluttony) communicative environment as an ethnocultural phenomenon from the standpoint of conceptology, discourse study and linguosemiotics. The authors study linguosemiotic encoding/decoding in English gastronomic (gluttony) discourse and the ways gastronomic gluttonyms are "immersed" into everyday communication. Anglophone ethnic features are revealed and different ways of forming gluttony texts (including precedent ones) are investigated. The linguosemiotic parameters of ethnocultural (anglophone) gastronomic coded communication are established and their discursive characteristics identified. It is shown that in English gastronomic communication the discursive actualization of the ethno-linguocultural code is dynamic; the constitutive features of gastronomic discourse have a symbolic (semiotic) basis and are connected with such semiotic categories as code, encoding and decoding. Food is found to be semiotic in origin and to represent a cultural code. The semiosis of English gastronomic texts is regularly filled with codes of traditional "English-likeness" (Roland Barthes's term) expressed by gluttonyms. The "nationality" code is detected through the names of products specific to certain areas; the national identity of the ethnic code is also highlighted by ways of garnishing and serving dishes and by characteristics of particular local preparation methods. The authors analyze the "lingualization" of food images, which has an ambivalent character determined, first, by food signs (gluttonyms) that structure the common space of gastronomic discourse and supply it with an ethnic linguocultural food source and, second, by the immersion of the formed images into a specific ethnic code that is decoded as the gastronomic discourse unfolds. Precedent texts accumulate ethnic information and supply an adequate gastronomic worldview.

  11. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.
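    ImageSURF's general recipe (train a pixel classifier on image-derivative features, then apply it to arbitrarily many images) can be sketched with scipy and scikit-learn as below; the feature set and data are illustrative assumptions, not the plugin's actual implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def derivative_features(img):
    """Stack a few image derivatives per pixel, loosely mirroring the kind of
    features ImageSURF computes (the exact feature set is an assumption)."""
    feats = [img]
    for sigma in (1, 2, 4):
        feats += [ndi.gaussian_filter(img, sigma),
                  ndi.gaussian_gradient_magnitude(img, sigma),
                  ndi.gaussian_laplace(img, sigma)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Hypothetical annotated training image (synthetic placeholder data).
rng = np.random.default_rng(0)
train_img = rng.random((64, 64))
train_labels = (ndi.gaussian_filter(train_img, 3) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(derivative_features(train_img), train_labels.ravel())

# Apply the trained classifier, unchanged, to any number of further images.
test_img = rng.random((64, 64))
segmentation = clf.predict(derivative_features(test_img)).reshape(test_img.shape)
```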

  12. Password Authentication Based on Fractal Coding Scheme

    Directory of Open Access Journals (Sweden)

    Nadia M. G. Al-Saidi

    2012-01-01

    Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user with secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted, then captured together with the user identity using a text-to-image mechanism. Fractal image coding is used to securely send the compressed image data through a non-secured communication channel to the server. Verification of the client information against the database system is carried out at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition, and authentication is completed by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison shows that the proposed scheme provides the essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.
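    The registration flow described in the record (hash the password, encrypt it, and turn the result into an image that is then fractal-coded) can be approximated as follows. The fractal compression stage is omitted, and the choice of cipher (Fernet from the cryptography package) and the image layout are assumptions made purely for illustration.

```python
import hashlib
import numpy as np
from cryptography.fernet import Fernet

def password_to_image(password: str, key: bytes, side: int = 64) -> np.ndarray:
    """Hash a password, encrypt the hex digest, and pack the ciphertext bytes
    into a square grayscale image (a stand-in for the paper's text-to-image
    step; the fractal coding of that image is not reproduced here)."""
    hashed = hashlib.sha256(password.encode()).hexdigest()
    ciphertext = Fernet(key).encrypt(hashed.encode())
    pixels = np.frombuffer(ciphertext, dtype=np.uint8)
    img = np.zeros(side * side, dtype=np.uint8)
    img[: len(pixels)] = pixels           # remaining pixels stay as zero padding
    return img.reshape(side, side)

key = Fernet.generate_key()               # shared secret, hypothetical setup
stego_like = password_to_image("correct horse battery staple", key)
print(stego_like.shape, stego_like.dtype)
```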

  13. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
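    A recurrent decoder of the kind described, mapping a sequence of stabilizer-measurement rounds to a correction, could look roughly like the following PyTorch sketch; the layer sizes and the classification-style output encoding are assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class SyndromeDecoder(nn.Module):
    """Toy recurrent decoder: consumes a sequence of stabilizer-measurement
    rounds and predicts a correction class (all sizes are illustrative)."""
    def __init__(self, n_stabilizers=8, hidden=64, n_corrections=16):
        super().__init__()
        self.rnn = nn.GRU(n_stabilizers, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_corrections)

    def forward(self, syndromes):            # (batch, rounds, n_stabilizers)
        _, h_last = self.rnn(syndromes)      # final hidden state summarizes all rounds
        return self.head(h_last.squeeze(0))  # logits over candidate corrections

model = SyndromeDecoder()
batch = torch.randint(0, 2, (32, 5, 8)).float()   # 5 repeated measurement rounds
logits = model(batch)
print(logits.shape)                                # torch.Size([32, 16])
```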

  14. Multi-information fusion sparse coding with preserving local structure for hyperspectral image classification

    Science.gov (United States)

    Wei, Xiaohui; Zhu, Wen; Liao, Bo; Gu, Changlong; Li, Weibiao

    2017-10-01

    The key question in sparse coding (SC) is how to exploit the information that already exists to acquire robust sparse representations (SRs) that distinguish different objects for hyperspectral image (HSI) classification. We propose a multi-information fusion SC framework, which fuses the spectral, spatial, and label information at the same level, to address this question. In particular, pixels from disjoint spatial clusters, which are obtained by cutting the given HSI in space, are individually and sparsely encoded. Then, owing to the importance of spatial structure, graph- and hypergraph-based regularizers are imposed to encourage smoothness of the obtained representations and to preserve the local consistency of each spatial cluster. The latter simultaneously considers the spectral, spatial, and label information of multiple pixels that are highly likely to share the same label. Finally, a linear support vector machine is selected as the final classifier with the learned SRs as input. Experiments conducted on three frequently used real HSIs show that our methods can achieve satisfactory results compared with other state-of-the-art methods.
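    The overall pipeline of the record (encode pixels sparsely per spatial cluster, then classify the codes with a linear SVM) is sketched below with scikit-learn; the graph- and hypergraph-based regularizers that are central to the paper are omitted, and all data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_atoms = 500, 64, 128
pixels = rng.random((n_pixels, n_bands))                  # spectra (placeholder HSI pixels)
coords = rng.random((n_pixels, 2))                        # spatial positions
labels = (pixels[:, :8].mean(axis=1) > 0.5).astype(int)   # toy ground truth

# Spatial clusters obtained by cutting the image in space (here: k-means on coordinates).
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

# Fixed random dictionary; the paper learns richer, regularized codes.
dictionary = rng.standard_normal((n_atoms, n_bands))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="lasso_lars", transform_alpha=0.1)

codes = np.zeros((n_pixels, n_atoms))
for c in np.unique(clusters):                 # encode each spatial cluster separately
    idx = clusters == c
    codes[idx] = coder.transform(pixels[idx])

clf = LinearSVC(max_iter=5000).fit(codes, labels)   # linear SVM on the sparse codes
print("training accuracy:", clf.score(codes, labels))
```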

  15. LSB-Based Steganography Using Reflected Gray Code

    Science.gov (United States)

    Chen, Chang-Chu; Chang, Chin-Chen

    Steganography aims to hide secret data in an innocuous cover-medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is a steganographic method that embeds the secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which is applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and experiments show that the differences reach almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
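    One way to read the embedding rule described in the record is: adjust each cover pixel by at most one gray level so that the least significant bit of its reflected Gray code equals the secret bit, which means the stored binary LSB need not match the secret bit. The sketch below follows that reading; it is an interpretation for illustration, not the authors' exact algorithm.

```python
import numpy as np

def gray_lsb(value: int) -> int:
    """Least significant bit of the reflected Gray code of an 8-bit value."""
    return (value ^ (value >> 1)) & 1

def embed_bits(cover: np.ndarray, bits) -> np.ndarray:
    """Embed one secret bit per pixel so the Gray-code LSB of the stego pixel
    equals the secret bit. Each pixel changes by at most one gray level
    (toggling bit 0), but the stored binary LSB may differ from the secret
    bit, which matches the behaviour the abstract points out."""
    stego = cover.ravel().copy()
    for i, bit in enumerate(bits):
        if gray_lsb(int(stego[i])) != bit:
            stego[i] ^= 1          # toggle bit 0: value changes by +/-1
    return stego.reshape(cover.shape)

def extract_bits(stego: np.ndarray, n: int):
    return [gray_lsb(int(v)) for v in stego.ravel()[:n]]

cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, secret)
assert extract_bits(stego, len(secret)) == secret
```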

  16. A Message Without a Code?

    Directory of Open Access Journals (Sweden)

    Tom Conley

    1981-01-01

    The photographic paradox is said to be that of a message without a code, a communication lacking a relay or gap essential to the process of communication. Tracing the recurrence of Barthes's definition in the essays included in Image/Music/Text and in La Chambre claire, this paper argues that Barthes's definition is Platonic in its will to dematerialize the troubling — graphic — immediacy of the photograph. He writes of the image in order to flee its signature. As a function of media, his categories are written in order to be insufficient and inadequate; to maintain an ineluctable difference between language heard and letters seen; to protect an idiom of loss which the photograph disallows. The article studies the strategies of his definition in «The Photographic Paradox» as an instrument of abstraction, opposes the notion of code, in an aural sense, to audio-visual markers of closed relay in advertising, and critiques the layout and order of La Chambre claire with respect to Barthes's ideology of absence.

  17. Electromagnetic reprogrammable coding-metasurface holograms.

    Science.gov (United States)

    Li, Lianlin; Jun Cui, Tie; Ji, Wei; Liu, Shuo; Ding, Jun; Wan, Xiang; Bo Li, Yun; Jiang, Menghua; Qiu, Cheng-Wei; Zhang, Shuang

    2017-08-04

    Metasurfaces have enabled a plethora of emerging functions within an ultrathin dimension, paving the way towards flat and highly integrated photonic devices. Despite the rapid progress in this area, simultaneous realization of reconfigurability, high efficiency, and full control over the phase and amplitude of scattered light poses a great challenge. Here, we tackle this challenge by introducing the concept of a reprogrammable hologram based on 1-bit coding metasurfaces. The state of each unit cell of the coding metasurface can be switched between '1' and '0' by electrically controlling the loaded diodes. Our proof-of-concept experiments show that multiple desired holographic images can be realized in real time with only a single coding metasurface. The proposed reprogrammable hologram may be a key step towards future intelligent devices with reconfigurable and programmable functionalities, leading to advances in a variety of applications such as microscopy, display, security, data storage, and information processing. Realizing metasurfaces with reconfigurability, high efficiency, and control over phase and amplitude is a challenge; here, Li et al. introduce a reprogrammable hologram based on a 1-bit coding metasurface, where the state of each unit cell can be switched electrically.
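    Binary-phase (1-bit) holograms of the kind described are commonly synthesized with a Gerchberg-Saxton-style iteration followed by two-level phase quantization. The sketch below does this with FFTs under a simple far-field propagation assumption; it illustrates the idea of a '0'/'1' coding pattern but is not the paper's design procedure.

```python
import numpy as np

def binary_phase_hologram(target, iterations=50):
    """Gerchberg-Saxton-style synthesis of a 1-bit coding pattern: each unit
    cell ends up in state '0' (phase 0) or '1' (phase pi). Far-field
    propagation is modelled by a single FFT, which is a simplification."""
    target = target / target.max()
    phase = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, target.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to the far field
        far = target * np.exp(1j * np.angle(far))      # impose the target amplitude
        near = np.fft.ifft2(far)                       # back to the metasurface plane
        phase = np.where(np.cos(np.angle(near)) >= 0.0, 0.0, np.pi)  # 1-bit quantization
    return (phase > 0).astype(int)                     # '0'/'1' unit-cell states

# Hypothetical target: a bright square in the far field.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
coding = binary_phase_hologram(target)
print(coding.sum(), "unit cells in state '1'")
```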

  18. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification application may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate...... their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...
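    The greedy-versus-L1 comparison outlined in the record can be reproduced in miniature with scikit-learn's OMP and Lasso solvers, reporting the same kinds of quantities (computation time, sparsity, reconstruction error); the dictionary and signal below are synthetic placeholders.

```python
import time
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((128, 256))                  # overcomplete dictionary (columns = atoms)
D /= np.linalg.norm(D, axis=0)
truth = np.zeros(256)
truth[rng.choice(256, size=10, replace=False)] = rng.standard_normal(10)
x = D @ truth + 0.01 * rng.standard_normal(128)      # noisy observed signal

solvers = [("greedy (OMP)", OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)),
           ("L1 (Lasso)",   Lasso(alpha=0.01, max_iter=10000, fit_intercept=False))]
for name, solver in solvers:
    t0 = time.perf_counter()
    solver.fit(D, x)
    code = solver.coef_
    print(f"{name:13s} time={time.perf_counter() - t0:.4f}s  "
          f"nonzeros={np.count_nonzero(code):3d}  "
          f"residual={np.linalg.norm(D @ code - x):.3f}")
```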

  19. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...
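    The abstract is truncated, but maximum entropy partitioning of an intensity histogram is commonly realized by choosing bin edges that give each bin (approximately) equal probability mass, since equal masses maximize the partition entropy. The sketch below follows that reading and is an assumption about the method, not a reproduction of it.

```python
import numpy as np

def max_entropy_partition(image, n_bins=8):
    """Partition intensities into n_bins with (near-)equal probability mass;
    equal-mass bins maximize the partition entropy -sum(p_k log p_k).
    Returns the bin edges, the per-pixel bin index ("code"), and the entropy."""
    flat = image.ravel()
    edges = np.quantile(flat, np.linspace(0.0, 1.0, n_bins + 1))
    codes = np.clip(np.searchsorted(edges, flat, side="right") - 1, 0, n_bins - 1)
    p = np.bincount(codes, minlength=n_bins) / flat.size
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return edges, codes.reshape(image.shape), entropy

img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
edges, codes, H = max_entropy_partition(img)
print(f"partition entropy: {H:.2f} bits (maximum possible: {np.log2(8):.2f})")
```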

  20. Ancestry of indirect techniques for X-ray imaging

    International Nuclear Information System (INIS)

    Mertz, L.

    1989-01-01

    Historical citations concerning the origins of coded-aperture imaging are corrected. Another scheme is presented for synthetic indirect imaging to overcome certain shortcomings of simple coded apertures. Pairs of Fresnel zone patterns are used to create moire patterns that can be Fourier transformed for image reconstruction. It is also conjectured that image reconstructions that are constrained to be nonnegative should overcome certain complaints concerning indirect imaging. 20 refs