WorldWideScience

Sample records for cryptographic hash algorithm

  1. A secured Cryptographic Hashing Algorithm

    CERN Document Server

    Mohanty, Rakesh; Bishi, Sukant kumar

    2010-01-01

    Cryptographic hash functions, which calculate the message digest of a message, have been in practical use as an effective measure for maintaining message integrity for the past few decades. This message digest is unique, irreversible and avoids all types of collisions for any given input string. The message digest calculated by this algorithm is propagated over the communication medium along with the original message from the sender side; on the receiver side, the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly used as a cryptographic hashing tool, shows that this algorithm provides more randomness and greater strength against intrusion attacks. In this algorithm th...
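
The send-and-verify workflow this abstract describes can be sketched with a standard-library hash (using SHA-256 from Python's hashlib as a stand-in for the paper's algorithm, which is not publicly specified):

```python
import hashlib

def digest(message: bytes) -> str:
    """Compute a fixed-size message digest for an arbitrary message."""
    return hashlib.sha256(message).hexdigest()

# Sender side: transmit the message together with its digest.
message = b"wire transfer: account 42, amount 100"
sent = (message, digest(message))

# Receiver side: recompute the digest and compare the two values.
received_message, received_digest = sent
assert digest(received_message) == received_digest  # integrity holds

# Any modification in transit changes the recomputed digest.
tampered = b"wire transfer: account 42, amount 900"
assert digest(tampered) != received_digest
```

    Note that a bare digest only detects accidental or unauthenticated modification; an attacker who can replace the digest in transit defeats it, which is why keyed variants (MACs) appear in later records.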

  2. An Efficient Cryptographic Hash Algorithm (BSA)

    CERN Document Server

    Mukherjee, Subhabrata; Laha, Anirban

    2012-01-01

    Recent cryptanalytic attacks have exposed the vulnerabilities of some widely used cryptographic hash functions like MD5 and SHA-1. Differential attacks have likewise been used to expose the weaknesses of several other hash functions, such as RIPEMD and HAVAL. In this paper we propose a new efficient hash algorithm that provides a near-random hash output and overcomes some of these earlier weaknesses. Extensive simulations and comparisons with some existing hash functions have been done to prove the effectiveness of BSA, an acronym formed from the names of the three authors.

  3. PROPERTIES AND APPROACH OF CRYPTOGRAPHIC HASH ALGORITHMS

    Directory of Open Access Journals (Sweden)

    T.LALITHA

    2010-06-01

    Full Text Available The importance of hash functions for protecting the authenticity of information is demonstrated. Applications include integrity protection, conventional message authentication and digital signatures. An overview of the basic building blocks of cryptographic hash functions leads to a study of the cryptographic properties of Boolean functions, and the information-theoretic approach to authentication is described. An overview is given of complexity-theoretic definitions and constructions. New criteria are defined, and functions satisfying new and existing criteria are studied.

  4. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Thomsen, Søren Steffen

    2009-01-01

    Cryptographic hash functions are commonly used in many different areas of cryptography: in digital signatures and in public-key cryptography, for password protection and message authentication, in key derivation functions, in pseudo-random number generators, etc. Recently, cryptographic hash...... well-known designs, and also some design and cryptanalysis in which the author took part. The latter includes a construction method for hash functions and four designs, of which one was submitted to the SHA-3 hash function competition, initiated by the U.S. standardisation body NIST. It also includes...

  5. Cryptographic Hash functions - a review

    Directory of Open Access Journals (Sweden)

    Rajeev Sobti

    2012-03-01

    Full Text Available Cryptographic hash functions are used to achieve a number of security objectives. In this paper, we bring out the importance of hash functions, their various structures, design techniques, attacks and recent progressive developments in this field.

  6. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2010-01-01

    value should not serve as an image for two distinct input messages and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with the digital signature schemes. Keyed hash...... important applications has also been analysed. This successful cryptanalysis of the standard hash functions has made National Institute of Standards and Technology (NIST), USA to initiate an international public competition to select the most secure and efficient hash function as the future hash function...... based MACs are reported. The goals of NIST's SHA-3 competition and its current progress are outlined....

  8. Cryptographic hash functions. Trends and challenges

    Directory of Open Access Journals (Sweden)

    Rodica Tirtea

    2009-10-01

    Full Text Available Hash functions are important in cryptography due to their use in data integrity and message authentication. Different cryptographic implementations rely on the performance and strength of hash functions to answer the need for integrity and authentication. This paper gives an overview of cryptographic hash functions used or evaluated today. Hash functions selected in NESSIE and CRYPTREC projects are shortly presented. SHA-3 selection initiative is also introduced.

  9. Implementation of cryptographic hash function SHA256 in C++

    Science.gov (United States)

    Shrivastava, Akash

    2012-02-01

    This abstract explains the implementation of the SHA-256 secure hash algorithm in C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting the initialization values to be used in the hash computation. The hash computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation is the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1, and consists of a set of four hash functions with digests of 224, 256, 384 or 512 bits. SHA-256 outputs a 256-bit digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of several operations such as And, Or, Xor, Shr and Rotr. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
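
The SHA-2 parameters quoted above (the four digest sizes, the 512-bit block, the 32-byte output of SHA-256) can be checked against a standard-library implementation; a small sketch using Python's hashlib rather than the author's C++ code:

```python
import hashlib

# The four SHA-2 digest sizes mentioned in the abstract, in bits.
for name, bits in [("sha224", 224), ("sha256", 256),
                   ("sha384", 384), ("sha512", 512)]:
    h = hashlib.new(name, b"message")
    assert h.digest_size * 8 == bits

# SHA-256 processes the padded message in 512-bit (64-byte) blocks.
assert hashlib.sha256().block_size == 64
# The digest itself is 256 bits (32 bytes).
assert hashlib.sha256(b"abc").digest_size == 32
```

    The well-known test vector SHA-256("abc") = ba7816bf... is a convenient first check for any from-scratch implementation like the one described.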

  10. Chaos-based hash function (CBHF) for cryptographic applications

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mohamed [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: mamin04@yahoo.com; Faragallah, Osama S. [Dept. of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952 (Egypt)], E-mail: osam_sal@yahoo.com; Abd El-Latif, Ahmed A. [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: ahmed_rahiem@yahoo.com

    2009-10-30

    As the core of cryptography, hashing is a basic technique for information security. Many hash functions generate the message digest through a randomizing process applied to the original message. A chaotic system likewise generates random-looking behavior, yet is completely deterministic. In this paper, an algorithm for one-way hash function construction based on chaos theory is introduced. Theoretical analysis and computer simulation indicate that the algorithm can satisfy all performance requirements of a hash function in an efficient and flexible manner and is secure against birthday attacks and meet-in-the-middle attacks, which makes it a good choice for data integrity and authentication.
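
To illustrate the chaos-based idea (this is not the authors' construction, which the abstract does not specify, and it carries no security claims), a toy one-way function can let each message byte perturb the orbit of the logistic map, whose determinism-plus-sensitivity is exactly the property the abstract points to:

```python
def toy_chaos_hash(message: bytes, rounds: int = 16) -> str:
    """Toy hash: message bytes perturb a logistic-map orbit (illustrative only)."""
    x = 0.7   # initial condition of the chaotic map (arbitrary choice)
    r = 3.99  # map parameter chosen inside the chaotic regime
    for byte in message:
        # Mix the byte into the state, keeping x strictly inside (0, 1).
        x = (x + (byte + 1) / 257.0) % 1.0 or 0.5
        for _ in range(rounds):          # iterate the logistic map
            x = r * x * (1.0 - x)
    # Quantize the final orbit into a 64-bit hex digest, one bit per step.
    out = 0
    for _ in range(64):
        x = r * x * (1.0 - x)
        out = (out << 1) | (x > 0.5)
    return f"{out:016x}"
```

    Because the map is deterministic, the same message always yields the same digest, while the map's sensitivity to initial conditions makes nearby messages diverge, mirroring the behavior the paper analyses.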

  11. SPONGENT: The Design Space of Lightweight Cryptographic Hashing

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knezevic, Miroslav; Leander, Gregor

    2013-01-01

    The design of secure yet efficiently implementable cryptographic algorithms is a fundamental problem of cryptography. Lately, lightweight cryptography--optimizing the algorithms to fit the most constrained environments--has received a great deal of attention, the recent research being mainly...

  12. Practical Attacks on AES-like Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Kölbl, Stefan; Rechberger, Christian

    2015-01-01

    Despite the great interest in rebound attacks on AES-like hash functions since 2009, we report on a rather generic, albeit keyschedule-dependent, algorithmic improvement: A new message modification technique to extend the inbound phase, which even for large internal states makes it possible to dr...

  13. The FPGA realization of the general cellular automata based cryptographic hash functions: Performance and effectiveness

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2014-01-01

    Full Text Available In the paper the author considers hardware implementation of the GRACE-H family of general cellular-automata-based cryptographic hash functions. VHDL is used as the language and Altera FPGAs as the platform for hardware implementation. Performance and effectiveness of the FPGA implementations of the GRACE-H hash functions were compared with the Keccak (SHA-3), SHA-256, BLAKE, Groestl, JH, and Skein hash functions. According to the performed tests, the performance of the hardware implementation of the GRACE-H family hash functions significantly (up to 12 times) exceeded the performance of the hardware implementations of the previously known hash functions, and the effectiveness of that hardware implementation was also better (up to 4 times).

  14. Analysis and Implementation of Cryptographic Hash Functions in Programmable Logic Devices

    Directory of Open Access Journals (Sweden)

    Tautvydas Brukštus

    2016-06-01

    Full Text Available In today's world there is an ever greater focus on data protection, for which cryptographic science is used. Safe storage of passwords is also important; for this a cryptographic hash function is used. In this article the SHA-256 cryptographic hash function has been selected for implementation and exploration, based on the fact that it is currently popular and considered safe: no theoretical gaps or conflict situations have been found in the SHA-256 cryptographic function. The SHA-256 cryptographic hash function is also used by cryptographic currencies, which are currently popular and highly valued. For the measurements, programmable logic integrated circuits were chosen, as they are less efficient than ASICs. We chose programmable logic integrated circuits produced by Altera Corporation. Computation speed is investigated on three programmable logic integrated circuits belonging to the same family but to different generations, each made with a different process technology: the EP3C16, EP4CE115 and 5CSEMA5F31. Parameters for comparing computation performance are provided in tables and graphs. The research shows the computation speed and stability of the different programmable logic circuits.

  15. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  16. A secure and efficient cryptographic hash function based on NewFORK-256

    Directory of Open Access Journals (Sweden)

    Harshvardhan Tiwari

    2012-11-01

    Full Text Available Cryptographic hash functions serve as a fundamental building block of information security and are used in numerous security applications and protocols such as digital signature schemes, construction of MAC and random number generation, for ensuring data integrity and data origin authentication. Researchers have noticed serious security flaws and vulnerabilities in most widely used MD and SHA family hash functions. As a result hash functions from FORK family with longer digest value were considered as good alternatives for MD5 and SHA-1, but recent attacks against these hash functions have highlighted their weaknesses. In this paper we propose a dedicated hash function MNF-256 based on the design principle of NewFORK-256. It takes 512 bit message blocks and generates 256 bit hash value. A random sequence is added as an additional input to the compression function of MNF-256. Three branch parallel structure and secure compression function make MNF-256 an efficient, fast and secure hash function. Various simulation results indicate that MNF-256 is immune to common cryptanalytic attacks and faster than NewFORK-256.
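
The overall shape described, a compression function iterated over 512-bit message blocks to carry a 256-bit chaining value, is the standard Merkle-Damgård construction; a generic sketch follows, with SHA-256 over state-plus-block standing in for the compression function (this is not MNF-256's three-branch compression, which the abstract does not specify):

```python
import hashlib

BLOCK = 64   # 512-bit message blocks, as in the design above
STATE = 32   # 256-bit chaining value, which is also the digest size

def compress(state: bytes, block: bytes) -> bytes:
    """Stand-in compression function (toy choice, not MNF-256's)."""
    return hashlib.sha256(state + block).digest()

def md_hash(message: bytes) -> bytes:
    """Merkle-Damgård iteration with length-strengthening padding."""
    # Pad: a 0x80 byte, zeros, then the 64-bit bit-length of the message.
    length = len(message)
    message += b"\x80"
    message += b"\x00" * (-(len(message) + 8) % BLOCK)
    message += (length * 8).to_bytes(8, "big")
    state = b"\x00" * STATE  # fixed IV (toy choice)
    for i in range(0, len(message), BLOCK):
        state = compress(state, message[i:i + BLOCK])
    return state
```

    Appending the message length ("length strengthening") is what makes collisions in the iteration reduce to collisions in the compression function, which is why designs like MNF-256 concentrate their security argument there.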

  17. Algorithms for improved performance in cryptographic protocols.

    Energy Technology Data Exchange (ETDEWEB)

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and public-key cryptography cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  18. Hardware design for Hash functions

    Science.gov (United States)

    Lee, Yong Ki; Knežević, Miroslav; Verbauwhede, Ingrid M. R.

    Due to its key cryptographic and operational features, such as the one-way property, high speed and a fixed output size independent of input size, the hash algorithm is one of the most important cryptographic primitives. A critical drawback of most cryptographic algorithms is their large computational overhead, which is becoming more critical as the amount of data to process or communicate keeps increasing. In many cases, proper use of a hash algorithm reduces this computational overhead. Digital signature generation and message authentication are the most common applications of hash algorithms. The increasing data size also motivates hardware designers to seek a throughput-optimal architecture for a given hash algorithm. In this chapter, some popular hash algorithms and their cryptanalysis are briefly introduced, and a design methodology for throughput-optimal architectures of MD4-based hash algorithms is described in detail.

  19. Secure OFDM communications based on hashing algorithms

    Science.gov (United States)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.

  20. Research of the Kernel Operator Library Based on Cryptographic Algorithm

    Institute of Scientific and Technical Information of China (English)

    王以刚; 钱力; 黄素梅

    2001-01-01

    Conventionally used encryption mechanisms and algorithms have some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of the algorithms, a data transfer system built on the cryptographic algorithm library has many remarkable advantages over traditional approaches: algorithms can be rebuilt and optimized, easily added and deleted, and the security power is improved. Because the cryptographic algorithm library is extensible, the user can choose among all the algorithms a method to resist any attack.

  1. A Dynamic Hashing Algorithm Suitable for Embedded System

    Directory of Open Access Journals (Sweden)

    Li Jianwei

    2013-06-01

    Full Text Available With increasing data volumes, linear hashing produces many overflow blocks as a result of data skew, while the index size of extendible hashing surges and wastes too much memory. These two typical dynamic hashing algorithms are therefore unsuitable for embedded systems, which have real-time requirements and very scarce memory resources. To solve this problem, this paper proposes a dynamic hashing algorithm suitable for embedded systems that combines the characteristics of extendible hashing and linear hashing: it has no overflow buckets, and its index size is proportional to the number of adjustments.
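
The classic linear hashing scheme the abstract builds on can be sketched as follows: buckets are split one at a time, in order, so the table grows gradually without a global rehash (this illustrates the baseline scheme and its overflow behavior, not the paper's proposed embedded-systems algorithm):

```python
class LinearHash:
    """Minimal linear hashing sketch: one bucket is split per overflow."""

    def __init__(self, capacity=4):
        self.level = 0        # current round: 2**level base buckets
        self.next = 0         # index of the next bucket to split
        self.capacity = capacity
        self.buckets = [[]]   # start with a single bucket

    def _index(self, key):
        i = hash(key) % (2 ** self.level)
        if i < self.next:     # bucket already split in this round:
            i = hash(key) % (2 ** (self.level + 1))  # use the finer hash
        return i

    def insert(self, key):
        idx = self._index(key)
        self.buckets[idx].append(key)
        if len(self.buckets[idx]) > self.capacity:
            self._split()     # note: splits bucket `next`, not necessarily idx

    def _split(self):
        base = 2 ** self.level
        keys = self.buckets[self.next]
        self.buckets[self.next] = []
        self.buckets.append([])          # image bucket at index next + base
        for k in keys:                   # redistribute with the finer hash
            self.buckets[hash(k) % (2 * base)].append(k)
        self.next += 1
        if self.next == base:            # round complete: double the level
            self.level += 1
            self.next = 0

    def lookup(self, key):
        return key in self.buckets[self._index(key)]

table = LinearHash(capacity=2)
for k in range(50):
    table.insert(k)
assert all(table.lookup(k) for k in range(50))
```

    Because the split pointer advances in order rather than splitting the overflowing bucket, skewed data leaves some buckets over capacity for a while, which is precisely the overflow-block problem the paper targets.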

  2. Analysis of a wavelet-based robust hash algorithm

    Science.gov (United States)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  3. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, the approach has encountered the problem of high computational complexity, because protocol participants are arbitrary, their message structures are complex and their executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated in the analysis process are greatly reduced by introducing a new algebraic technique called Universal Polynomial Equation, and the algorithm can be used to verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not previously reported was found by using this tool.

  4. Cryptanalysis of Tav-128 hash function

    DEFF Research Database (Denmark)

    Kumar, Ashish; Sanadhya, Somitra Kumar; Gauravaram, Praveen

    2010-01-01

    Many RFID protocols use cryptographic hash functions for their security. The resource constrained nature of RFID systems forces the use of light weight cryptographic algorithms. Tav-128 is one such 128-bit light weight hash function proposed by Peris-Lopez et al. for a low-cost RFID tag...... a 64-bit permutation from 32-bit messages. This could be a useful light weight primitive for future RFID protocols....

  6. Indexing Algorithm Based on Improved Sparse Local Sensitive Hashing

    Directory of Open Access Journals (Sweden)

    Yiwei Zhu

    2014-01-01

    Full Text Available In this article, we propose a new semantic hashing algorithm to address newly emerging problems such as the difficulty of similarity measurement for high-dimensional data. Based on locality-sensitive hashing and spectral hashing, we introduce sparse principal component analysis (SPCA) to reduce the dimension of the data set and exclude redundancy in the parameter list, thus making high-dimensional indexing and retrieval faster and more efficient. Meanwhile, we employ a boosting algorithm from machine learning to determine the hashing threshold, so as to improve its adaptivity to real data and extend its range of application. According to experiments, this method not only performs satisfyingly on multimedia data sets such as images and texts, but also outperforms common indexing methods.
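
The locality-sensitive idea underlying the proposal, similar high-dimensional vectors receiving similar hash bits, can be sketched with classic random-hyperplane hashing (a standard LSH family; the SPCA and boosting stages of the paper are not reproduced here):

```python
import random

def lsh_signature(vec, planes):
    """One hash bit per random hyperplane: which side the vector falls on."""
    return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0)
                 for plane in planes)

random.seed(0)
dim, bits = 16, 8
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

base = [1.0] * dim
opposite = [-v for v in base]

# Close vectors agree on most bits, while an opposite vector flips every
# bit: the Hamming distance between signatures tracks angular distance.
sig = lsh_signature(base, planes)
assert lsh_signature(opposite, planes) == tuple(1 - b for b in sig)
```

    Short binary signatures like these are what make high-dimensional indexing fast: candidate neighbors are found by comparing a few bits instead of full vectors.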

  7. An online algorithm for generating fractal hash chains applied to digital chains of custody

    CERN Document Server

    Bradford, Phillip G

    2007-01-01

    This paper gives an online algorithm for generating Jakobsson's fractal hash chains. Our new algorithm complements Jakobsson's fractal hash chain algorithm for preimage traversal, since his algorithm assumes the entire hash chain is precomputed and a particular list of Ceiling(log n) hash elements or pebbles is saved. Our online algorithm for hash chain traversal incrementally generates a hash chain of n hash elements without knowledge of n before it starts. For any n, our algorithm stores only the Ceiling(log n) pebbles which are precisely the inputs for Jakobsson's amortized hash chain preimage traversal algorithm. This compact representation is useful for generating, traversing, and storing a number of large digital hash chains on a small and constrained device. We also give an application using both Jakobsson's algorithm and our new algorithm applied to digital chains of custody for validating dynamically changing forensics data.
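
The object being traversed, a hash chain where each element is the hash of the previous one, can be sketched as follows (a plain chain and its verification step, not Jakobsson's pebbling scheme or the paper's online variant):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_chain(seed: bytes, n: int) -> list:
    """Generate the chain [seed, H(seed), H(H(seed)), ...] of n+1 elements."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

# The verifier holds only the tip H^n(seed); the prover releases
# preimages one at a time, each checkable with a single hash call.
chain = hash_chain(b"secret seed", 8)
tip = chain[-1]
assert H(chain[-2]) == tip          # releasing chain[-2] authenticates it
assert H(chain[-3]) == chain[-2]    # and so on back down the chain
```

    Storing the whole chain costs O(n) memory; the pebbling schemes discussed in the record trade this for O(log n) stored elements plus amortized recomputation, which is what makes chains practical on constrained devices.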

  8. Research of Cryptographic Algorithms Applied in Electronic Commerce

    Directory of Open Access Journals (Sweden)

    Cheng Zengping

    2014-02-01

    Full Text Available With the development of network communication, electronic commerce plays an increasingly important role in trade, business and industry. The requirements on electronic commerce keep growing. In this study, we survey the current status of the cryptographic algorithms exploited in electronic commerce. We discuss the advantages and disadvantages of symmetric and asymmetric algorithms and improve upon them. We then give a new scheme that combines the improved symmetric and asymmetric algorithms, and we give sound reasons why our scheme is more secure. Finally, we carry out experiments to show the security of our scheme.

  9. MiMC: Efficient encryption and cryptographic hashing with minimal multiplicative complexity

    DEFF Research Database (Denmark)

    Albrecht, Martin; Grassi, Lorenzo; Rechberger, Christian;

    2016-01-01

    We explore cryptographic primitives with low multiplicative complexity. This is motivated by recent progress in practical applications of secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proofs (ZK) where primitives from symmetric cryptography are needed...... and where linear computations are, compared to non-linear operations, essentially “free”. Starting with the cipher design strategy “LowMC” from Eurocrypt 2015, a number of bitoriented proposals have been put forward, focusing on applications where the multiplicative depth of the circuit describing...... a new attack vector that outperforms others in relevant settings. Due to its very low number of multiplications, the design lends itself well to a large class of applications, especially when the depth does not matter but the total number of multiplications in the circuit dominates all aspects...

  10. A Secure Hash Function MD-192 With Modified Message Expansion

    CERN Document Server

    Tiwari, Harshvardhan

    2010-01-01

    Cryptographic hash functions play a central role in cryptography. Hash functions were introduced in cryptology to provide message integrity and authentication. MD5, SHA-1 and RIPEMD are among the most commonly used message digest algorithms. Recently proposed attacks on well-known and widely used hash functions motivate the design of new, stronger hash functions. In this paper a new approach is presented that produces a 192-bit message digest and uses a modified message expansion mechanism which generates more bit differences in each working variable, making the algorithm more secure. This hash function is collision resistant and assures good compression and preimage resistance.
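
The kind of message-expansion mechanism being modified can be illustrated with the classic SHA-1 recurrence, in which 16 input words are expanded to 80 schedule words and a single flipped input bit spreads through many later words (MD-192's own modified expansion is not specified in the abstract):

```python
def rotl32(x: int, n: int) -> int:
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def expand(block_words: list) -> list:
    """SHA-1 style expansion: 16 message words -> 80 schedule words."""
    assert len(block_words) == 16
    w = list(block_words)
    for t in range(16, 80):
        w.append(rotl32(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1))
    return w

# Flipping a single input bit changes many schedule words downstream,
# which is the bit-difference propagation the abstract refers to.
base = expand([0] * 16)
flipped = expand([1] + [0] * 15)
changed = sum(a != b for a, b in zip(base, flipped))
assert changed > 20
```

    Strengthened designs aim to raise this diffusion further, so that low-weight input differences cannot survive into the later rounds where collision attacks operate.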

  11. Design and Analysis of a Cryptographic Hash Function Based on Time-Delay Chaotic System

    Institute of Scientific and Technical Information of China (English)

    徐杰; 杨娣洁; 隆克平

    2011-01-01

    An algorithm for a keyed cryptographic hash function based on a time-delay chaotic system is presented in this paper. In this algorithm, the initial message is modulated into the time-delay chaotic iteration, and the hash value is calculated with an HMAC-MD5 algorithm, so that every bit of the hash value is correlated with the initial message. The hash value is highly sensitive to small changes in the initial message or in the initial condition of the chaotic system. Theoretical analyses and simulations show that the hash value has irregularity and diffusion properties, and that the parameter space is enlarged by the chaotic properties. The nonlinear relation between the hash value and the initial message can effectively resist linear analysis. Therefore, the proposed hash function based on a time-delay chaotic system achieves good security, anti-collision and anti-attack capacity, and has good application prospects in digital signatures and other authentication techniques.
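
The keyed step the authors borrow, HMAC-MD5, is available directly in Python's standard library; a minimal sketch of computing and checking a keyed hash (the chaotic modulation stage of the paper is not reproduced here):

```python
import hmac
import hashlib

key = b"shared secret key"
message = b"plaintext to authenticate"

# Sender computes the keyed digest; every output bit depends on both
# the key and the entire message.
tag = hmac.new(key, message, hashlib.md5).hexdigest()

# Receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.md5).hexdigest())

# A different key yields an unrelated tag.
other = hmac.new(b"wrong key", message, hashlib.md5).hexdigest()
assert other != tag
```

    Note that MD5 itself is broken for collision resistance; HMAC-MD5 has held up better, but new designs, including the record above, generally pair HMAC with a stronger underlying hash.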

  12. Implementation of Cryptographic Algorithms for GSM and UMTS Systems.

    Directory of Open Access Journals (Sweden)

    Alpesh R. Sankaliya

    2011-12-01

    Full Text Available Due to the extremely high demand for mobile phones, over the years there has been a great demand for the support of various applications and security services. Cryptographic algorithms are used by mobile subscribers to protect the privacy of their cellular voice and data communication. Ciphering provides the means to regain control over privacy and authentication. A5/x are the encryption algorithms used to ensure privacy of conversations on mobile phones: the A5/3 encryption algorithm is used for 3G, and the GEA3 encryption algorithm is used for GPRS. f8 is the confidentiality algorithm developed by 3GPP and used in the UMTS system. The following paper is based on simulation of the A5/3 and f8 algorithms.

  13. 76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256

    Science.gov (United States)

    2011-03-02

    ... Hash Algorithm (SHA)-256 AGENCY: Department of Defense (DoD), General Services Administration (GSA... agencies about ways for the acquisition community to transition to Secure Hash Algorithm SHA-256. SHA-256... Hash Algorithm SHA-256'' in all correspondence related to this public meeting. FOR FURTHER...

  14. A Review of Cryptographic Algorithms in Network Security

    Directory of Open Access Journals (Sweden)

    B.Nithya

    2016-02-01

    Full Text Available With the excellent growth of the internet environment, there is a challenge to send data securely. Security means sending information without any modification or hacking by unauthorized users. Network security has the component of cryptographic techniques, which act as a guard for information. The general concepts of cryptography are encryption and decryption. Many cryptographic algorithms are used to send information as cipher text, which cannot be understood by intruders. Experts have taken the existing algorithms to provide security over the network and want to apply the benefits of those algorithms in suitable places. The first step in getting help from an algorithm is to study it and compare its parameters. This paper presents a review and comparative study of algorithms taken from many authors.

  15. An update on the side channel cryptanalysis of MACs based on cryptographic hash functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2007-01-01

    into consideration. Next, we propose new hybrid NMAC/HMAC schemes for security against side channel attacks assuming that their underlying block cipher is ideal. We then show that M-NMAC, MDx-MAC and a variant of the envelope MAC scheme based on DM with an ideal block cipher are secure against DPA attacks.......Okeya has established that HMAC/NMAC implementations based on only Matyas-Meyer-Oseas (MMO) PGV scheme and his two refined PGV schemes are secure against side channel DPA attacks when the block cipher in these constructions is secure against these attacks. The significant result of Okeya's analysis...... is that the implementations of HMAC/NMAC with the Davies-Meyer (DM) compression function based hash functions such as MD5 and SHA-1 are vulnerable to side channel attacks. In this paper, first we show a partial key recovery attack on NMAC/HMAC based on Okeya's two refined PGV schemes by taking practical constraints...

  16. The LabelHash algorithm for substructure matching

    Directory of Open Access Journals (Sweden)

    Bryant Drew H

    2010-11-01

    Full Text Available Abstract Background There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Results We present LabelHash, a novel algorithm for matching substructural motifs to large collections of protein structures. The algorithm consists of two phases. In the first phase the proteins are preprocessed in a fashion that allows for instant lookup of partial matches to any motif. In the second phase, partial matches for a given motif are expanded to complete matches. The general applicability of the algorithm is demonstrated with three different case studies. First, we show that we can accurately identify members of the enolase superfamily with a single motif. Next, we demonstrate how LabelHash can complement SOIPPA, an algorithm for motif identification and pairwise substructure alignment. Finally, a large collection of Catalytic Site Atlas motifs is used to benchmark the performance of the algorithm. LabelHash runs very efficiently in parallel; matching a motif against all proteins in the 95% sequence identity filtered non-redundant Protein Data Bank typically takes no more than a few minutes. The LabelHash algorithm is available through a web server and as a suite of standalone programs at http://labelhash.kavrakilab.org. The output of the LabelHash algorithm can be further analyzed with Chimera through a plugin that we developed for this purpose. Conclusions LabelHash is an efficient, versatile algorithm for large-scale substructure matching. When LabelHash is running in parallel, motifs can typically be matched against the entire PDB on the order of minutes. 
The algorithm is able to identify

  17. ForBild: efficient robust image hashing

    Science.gov (United States)

    Steinebach, Martin; Liu, Huajian; Yannikos, York

    2012-03-01

    Forensic analysis of image sets today is most often done with the help of cryptographic hashes due to their efficiency, their integration in forensic tools and their excellent reliability in the domain of false detection alarms. A drawback of these hash methods is their fragility to any image processing operation: even a simple JPEG re-compression results in an image that can no longer be detected. A different approach is to apply image identification methods that identify illegal images by, e.g., semantic models or face detection algorithms. Their common drawback is high computational complexity and significant false alarm rates. Robust hashing is a well-known approach sharing characteristics of both cryptographic hashes and image identification methods: it is fast, robust to common image processing and features low false alarm rates. To verify its usability in forensic evaluation, in this work we discuss and evaluate the behavior of an optimized block-based hash.
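
The contrast with a cryptographic hash can be illustrated with a toy block-based robust hash (a generic block-mean sketch, not the ForBild algorithm itself; block size and bit rule are illustrative):

```python
def robust_block_hash(gray, block=4):
    """Toy block-mean robust hash: one bit per block, set when the block's
    mean exceeds the global mean. Small pixel changes rarely flip bits,
    unlike a cryptographic hash, which changes completely."""
    h, w = len(gray), len(gray[0])
    global_mean = sum(sum(row) for row in gray) / (h * w)
    bits = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [gray[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            bits.append(1 if sum(vals) / len(vals) > global_mean else 0)
    return bits

def hamming(a, b):
    """Distance between two hashes; small distances indicate a match."""
    return sum(x != y for x, y in zip(a, b))
```

A lightly altered image yields a small Hamming distance to the original's hash, which is what makes such hashes usable after re-compression.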

  18. An enhanced dynamic hash TRIE algorithm for lexicon search

    Science.gov (United States)

    Yang, Lai; Xu, Lida; Shi, Zhongzhi

    2012-11-01

    Information retrieval (IR) is essential to enterprise systems as orders, customers and materials grow. In this article, an enhanced dynamic hash TRIE (eDH-TRIE) algorithm is proposed that can be used for lexicon search in Chinese, Japanese and Korean (CJK) segmentation and in URL identification. In particular, the eDH-TRIE algorithm is suitable for Unicode retrieval. The Auto-Array and Hash-Array algorithms are proposed to handle auxiliary memory allocation; the former changes its size on demand without redundant restructuring, and the latter replaces linked lists with arrays, saving memory overhead. Comparative experiments show that the Auto-Array and Hash-Array algorithms have better spatial performance and can be used in a multitude of situations. The eDH-TRIE is evaluated for both speed and storage and compared with the naïve DH-TRIE algorithm. The experiments show that the eDH-TRIE algorithm performs better: these algorithms reduce memory overheads and speed up IR.
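
The lexicon-search primitive behind such algorithms can be illustrated with a plain dict-based trie (a minimal sketch of longest-match lookup; it does not reproduce the eDH-TRIE's Auto-Array/Hash-Array memory layout):

```python
class HashTrie:
    """Minimal hash-based trie for lexicon lookup, using nested dicts."""

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def longest_match(self, text, start=0):
        """Return the longest lexicon word beginning at `start` (the greedy
        primitive used in segmentation), or '' when nothing matches."""
        node, best, i = self.root, "", start
        while i < len(text) and text[i] in node:
            node = node[text[i]]
            i += 1
            if "$" in node:
                best = text[start:i]
        return best
```

Repeatedly taking the longest match and advancing `start` gives a simple greedy segmenter over such a lexicon.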

  19. A Compendium Over Cloud Computing Cryptographic Algorithms and Security Issues

    Directory of Open Access Journals (Sweden)

    Neha Mishra

    2015-01-01

    Full Text Available Cloud computing is an emerging and revolutionary approach to computing, and it is becoming more risk-prone than ever before. It is an evolutionary approach of using resources and services on demand, as needed by consumers. Cloud computing provides an Internet-based platform for the usage of IT services and flexible infrastructure by consumers and businesses; deployment and management of services or resources are maintained by a third party. While there are innumerable advantages to adopting cloud computing, it also raises various issues concerning confidentiality, integrity, authenticity and privacy. One of the most prominent barriers to adopting cloud computing is security. This paper presents an elaborated study of the security issues allied to cloud computing by consolidating literature reviews on cryptographic algorithms used for data security.

  20. Hybrid Cryptographic Processor for Secure Communication Using FPGA

    OpenAIRE

    Savitha Raj.S; Merlin Sharmila.A; Poorinima Beneta.P

    2013-01-01

    Cryptographic hash functions are mainly used for authentication and for the integrity of messages. In this paper, we investigate a high-speed, efficient hardware algorithm, a combination of RSA and BLAKE, for providing privacy and security in data networks, including encryption/decryption. The hash function BLAKE is a new standard candidate algorithm; it is one of the finalists in the SHA-3 competition by NIST. RSA is the asymmetric public-key cryptography system. Since this ...

  1. Robust hashing for 3D models

    Science.gov (United States)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit-extraction methods, built to resist both desired alterations of the model and malicious attacks intended to prevent correct allocation. The different bit-extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance, as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); a probability calculation for hash collisions is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.

  2. A Robust Image Hashing Algorithm Resistant Against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2013-12-01

    Full Text Available This paper proposes a robust image hashing method which is robust against common image processing attacks and geometric distortion attacks. In order to resist against geometric attacks, the log-polar mapping (LPM and contourlet transform are employed to obtain the low frequency sub-band image. Then the sub-band image is divided into some non-overlapping blocks, and low and middle frequency coefficients are selected from each block after discrete cosine transform. The singular value decomposition (SVD is applied in each block to obtain the first digit of the maximum singular value. Finally, the features are scrambled and quantized as the safe hash bits. Experimental results show that the algorithm is not only resistant against common image processing attacks and geometric distortion attacks, but also discriminative to content changes.

  3. A Novel Digital Signature Algorithm based on Biometric Hash

    Directory of Open Access Journals (Sweden)

    Shivangi Saxena

    2017-01-01

    Full Text Available Digital signatures protect a document's integrity and bind the authenticity of the user who has signed it. Present digital signature algorithms confirm authenticity, but they do not ensure secrecy of the data; techniques like encryption and decryption need to be used for this purpose. Biometric security has been a useful way of providing authentication and security, as it provides a unique identity of the user. In this paper we discuss the user authentication process and the development of digital signatures. Authentication is based on hash functions that use biometric features. Hash codes are used to maintain the integrity of the document, which is digitally signed. For security purposes, encryption and decryption techniques are used to develop a bio-cryptosystem. User information, when concatenated with the feature vector of the biometric data, justifies the sense of authentication. Various online or offline transactions where authenticity and integrity are the top priority can make use of this development.

  4. Enhanced and Fast Face Recognition by Hashing Algorithm

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2012-08-01

    Full Text Available This paper presents a face hashing technique for fast face recognition. The proposed technique employs two existing algorithms, i.e., 2-D discrete cosine transformation and K-means clustering. The image has to go through different pre-processing phases and the two above-mentioned algorithms must be used in order to obtain the hash value of the face image. The searching process is accelerated by introducing a modified form of binary search. A new database architecture called Facebases has also been introduced to further speed up the searching process.

  5. Metadata distribution algorithm based on directory hash in mass storage system

    Science.gov (United States)

    Wu, Wei; Luo, Dong-jian; Pei, Can-hao

    2008-12-01

    The distribution of metadata is very important in mass storage systems. Many storage systems use subtree partitioning or hash algorithms to distribute the metadata among a metadata server (MDS) cluster. Although these algorithms improve system access performance, most of them have remarkable scalability problems. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key value, implements concentrated storage of metadata, and takes a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in mass storage systems by hashing on the directory and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as changing a directory name or permission, or adding or removing an MDS from the cluster. It reduces the number of additional requests and the scale of each data migration in scalable operations, and thus enhances the scalability of mass storage systems remarkably.
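
The core placement idea (hash the parent directory, not the file path, so one directory's metadata colocates) can be sketched as follows; the function name and server count are illustrative:

```python
import hashlib

def mds_for(path, n_servers):
    """Choose the metadata server for a file by hashing its parent
    directory (the DH idea): all entries of one directory land on the same
    server, and renaming a file never moves its metadata."""
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.sha256(directory.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_servers
```

A file-path hash would scatter siblings across servers and force migrations on directory renames; hashing at directory granularity avoids both.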

  6. RFID cryptographic protocol based on two-dimensional region Hash chain%基于二维区间Hash链的RFID安全协议

    Institute of Scientific and Technical Information of China (English)

    熊宛星; 薛开平; 洪佩琳; 麻常莎

    2011-01-01

    Due to the limitation of relevant devices, a lot of security problems exist in a radio frequency identification (RFID) system, one of the core technologies of the future internet of things (IOT). A new protocol based on the two-dimensional region (TDR) Hash chains was proposed after the core ideas of several typical RFID cryptographic protocols were analyzed. TDR could significantly improve the efficiency of database retrieval by identifying each Hash chain with region division. Moreover, a random number was introduced to further enhance the security of RFID systems.
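
The hash-chain building block such protocols rely on can be sketched generically (this is a plain Lamport-style chain, not the TDR protocol's region-indexed variant; the seed and chain length are illustrative):

```python
import hashlib

def hash_chain(seed, n):
    """Build a chain h1..hn with h_{i+1} = H(h_i). A tag can reveal values
    in reverse order, so each response cannot be linked forward to the
    next one by an eavesdropper."""
    values, h = [], seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
        values.append(h)
    return values

def verify(prev_revealed, new_revealed):
    """Reader side: hashing the newly revealed value must reproduce the
    previously stored one."""
    return hashlib.sha256(new_revealed).digest() == prev_revealed
```

The database-retrieval cost the TDR scheme targets comes from locating which chain (and position) a revealed value belongs to; region-indexing the chains narrows that search.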

  7. The hash function BLAKE

    CERN Document Server

    Aumasson, Jean-Philippe; Phan, Raphael; Henzen, Luca

    2014-01-01

    This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation.   In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA3 competition, and BLAKE. They review applications of cryptographic hashing, they describe some basic notions such as security de

  8. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    Directory of Open Access Journals (Sweden)

    Rajeswari Sridhar

    2010-07-01

    Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution was handled by this hash table is different from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.

  9. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    Directory of Open Access Journals (Sweden)

    Rajeswari Sridhar

    2010-07-01

    Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution was handled by this hash table is different from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.

  10. Study on An Absolute Non-Collision Hash and Jumping Table IP Classification Algorithms

    Institute of Scientific and Technical Information of China (English)

    SHANG Feng-jun; PAN Ying-jun

    2004-01-01

    In order to classify packet, we propose a novel IP classification based the non-collision hash and jumping table Trie-tree (NHJTTT) algorithm, which is based on non-collision hash Trie-tree and Lakshman and Stiliadis proposing a 2-dimensional classification algorithm (LS algorithm).The core of algorithm consists of two parts: structure the non-collision hash function, which is constructed mainly based on destination /source port and protocol type field so that the hash function can avoid space explosion problem; introduce jumping table Trie-tree based LS algorithm in order to reduce time complexity.The test results show that the classification rate of NHJTTT algorithm is up to 1 million packets per second and the maximum memory consumed is 9 MB for 10 000 rules.

  11. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    CERN Document Server

    Sridhar, Rajeswari; Karthiga, S; T, Geetha; 10.5121/ijaia.2010.1305

    2010-01-01

    In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrase and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution was handled by this hash table is different than the normal hash table approaches. It was observed that multi-key hashing based retrieval had a lesser ...

  12. A brief history of cryptology and cryptographic algorithms

    CERN Document Server

    Dooley, John F

    2013-01-01

    The science of cryptology is made up of two halves. Cryptography is the study of how to create secure systems for communications. Cryptanalysis is the study of how to break those systems. The conflict between these two halves of cryptology is the story of secret writing. For over 2,000 years, the desire to communicate securely and secretly has resulted in the creation of numerous and increasingly complicated systems to protect one's messages. Yet for every system there is a cryptanalyst creating a new technique to break that system. With the advent of computers the cryptographer seems to final

  13. A Novel Block-DCT and PCA Based Image Perceptual Hashing Algorithm

    Directory of Open Access Journals (Sweden)

    Zeng Jie

    2013-01-01

    Full Text Available Image perceptual hashing finds applications in content indexing, large-scale image database management, certification and authentication and digital watermarking. We propose a Block-DCT and PCA based image perceptual hash in this article and explore the algorithm in the application of tamper detection. The main idea of the algorithm is to integrate color histogram and DCT coefficients of image blocks as perceptual feature, then to compress perceptual features as inter-feature with PCA, and to threshold to create a robust hash. The robustness and discrimination properties of the proposed algorithm are evaluated in detail. Experimental results show that the proposed image perceptual hash algorithm can effectively address the tamper detection problem with advantageous robustness and discrimination.

  14. Clustering Web Documents based on Efficient Multi-Tire Hashing Algorithm for Mining Frequent Termsets

    Directory of Open Access Journals (Sweden)

    Noha Negm

    2013-06-01

    Full Text Available Document clustering is one of the main themes in text mining. It refers to the process of grouping documents with similar contents or topics into clusters to improve both availability and reliability of text mining applications. Some recent algorithms address the problem of the high dimensionality of text by using frequent termsets for clustering. Despite its drawbacks, the Apriori algorithm is still the basic algorithm for mining frequent termsets. This paper presents an approach for Clustering Web Documents based on a Hashing algorithm for mining Frequent Termsets (CWDHFT). It introduces an efficient Multi-Tier Hashing algorithm for mining Frequent Termsets (MTHFT) instead of the Apriori algorithm. The algorithm uses a new methodology for generating frequent termsets: the multi-tier hash table is built during a single scan of the documents. To avoid hash collisions, a multi-tier technique is utilized in the proposed hashing algorithm. Based on the generated frequent termsets, the documents are partitioned, and clustering occurs by grouping the partitions through descriptive keywords. By using the MTHFT algorithm, scanning and computational costs are reduced, performance is considerably improved, and the clustering process is sped up. The CWDHFT approach improved accuracy, scalability and efficiency when compared with existing clustering algorithms like Bisecting K-means and FIHC.
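
The single-scan idea can be sketched by counting candidate termsets in a hash table during one pass over the documents (Python's dict stands in for the multi-tier hash table; `k` and `min_support` are illustrative parameters):

```python
from collections import defaultdict
from itertools import combinations

def frequent_termsets(documents, k=2, min_support=2):
    """Count all k-term sets per document in one scan using a hash table,
    then keep those meeting the support threshold. Unlike Apriori, no
    repeated candidate-generation passes over the corpus are needed."""
    counts = defaultdict(int)
    for doc in documents:
        terms = sorted(set(doc.split()))      # canonical order -> one key
        for combo in combinations(terms, k):
            counts[combo] += 1
    return {ts for ts, c in counts.items() if c >= min_support}
```

The surviving termsets then serve as the low-dimensional features on which documents are partitioned into clusters.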

  15. Hash-tree反碰撞算法%Hash-tree Anti-collision Algorithm

    Institute of Scientific and Technical Information of China (English)

    张虹; 韩磊; 马海波

    2007-01-01

    Aiming at the low tag-identification efficiency of the EDFSA algorithm and the need for binary-tree search to detect the exact positions of collisions, a Hash-tree anti-collision algorithm is proposed. The key problems of the algorithm are analyzed, the algorithm strategy is determined, and the algorithm is designed. It is proved that the expected identification efficiency of the Hash-tree anti-collision algorithm lies between 36.8% and 100%, which is better than that of the EDFSA algorithm. Simulation shows that the algorithm achieves a new breakthrough in identification efficiency, with an especially obvious advantage when identifying a large number of tags.

  16. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    OpenAIRE

    2010-01-01

    In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrase and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is ...

  17. HF-hash : Hash Functions Using Restricted HFE Challenge-1

    CERN Document Server

    Dey, Dhananjoy; Gupta, Indranath Sen

    2009-01-01

    Vulnerability of dedicated hash functions to various attacks has made the task of designing hash functions much more challenging. This provides us a strong motivation to design a new cryptographic hash function, viz. HF-hash. This is a hash function whose compression function is designed by using the first 32 polynomials of HFE Challenge-1 with 64 variables, forcing the remaining 16 variables to zero. HF-hash gives a 256-bit message digest and is as efficient as SHA-256. It is secure against the differential attack proposed by Chabaud and Joux, as well as the attacks by Wang et al. applied to SHA-0 and SHA-1.

  18. Parallel algorithm for target recognition using a multiclass hash database

    Science.gov (United States)

    Uddin, Mosleh; Myler, Harley R.

    1998-07-01

    A method for recognition of unknown targets using large databases of model targets is discussed. Our approach is based on parallel processing of multi-class hash databases that are generated off-line. A geometric hashing technique is used on feature points of model targets to create each class database. Bit-level coding is then performed to represent the models in an image format. Parallelism is achieved during the recognition phase. Feature points of an unknown target are passed to parallel processors, each accessing an individual class database. Each processor reads a particular class hash database and indexes the feature points of the unknown target. A simple voting technique is applied to determine the model that best matches the unknown. The paper discusses our technique and the results from testing with unknown FLIR targets.
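
The hash-and-vote recognition step can be sketched as follows; this heavily simplified version hashes offsets from a single reference point, whereas real geometric hashing enumerates basis pairs for invariance (all names and coordinates here are illustrative):

```python
from collections import defaultdict

def build_hash_db(models, q=1.0):
    """Off-line phase: quantize each model's feature-point offsets from its
    first point and index model names by the resulting hash keys."""
    db = defaultdict(set)
    for name, points in models.items():
        ox, oy = points[0]
        for x, y in points[1:]:
            key = (round((x - ox) / q), round((y - oy) / q))
            db[key].add(name)
    return db

def recognize(db, points, q=1.0):
    """On-line phase: each hashed offset of the unknown target votes for
    every model stored in that bin; the best-voted model wins."""
    votes = defaultdict(int)
    ox, oy = points[0]
    for x, y in points[1:]:
        key = (round((x - ox) / q), round((y - oy) / q))
        for name in db.get(key, ()):
            votes[name] += 1
    return max(votes, key=votes.get) if votes else None
```

In the paper's setting, the database is split by target class and each parallel processor runs this voting loop against its own class database.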

  19. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    Full Text Available The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in localization problems for a point. A method of constructing the corresponding hash matrix by a parallel algorithm is considered. An algorithm for parallel geometric hashing using the «pool of processes» development pattern is proposed. The implementation is written in the Python programming language using the NumPy package for manipulating multidimensional data. To implement the process pool, the ProcessPoolExecutor class from the concurrent.futures module, included in the distribution of the Python interpreter since version 3.2, is used. All solutions are presented in the paper by corresponding UML class diagrams. The designed GeomHash package includes the classes Data, Result, GeomHash and Job. The results of the developed program are presented in corresponding graphs. The article also gives a theoretical justification for applying a process pool to the implementation of parallel algorithms: the pool is appropriate when t2 > (p/(p-1))*t1, where t1 is the time to transmit a unit of data between processes and t2 is the time to process a unit of data on one processor.
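
The pool-of-processes pattern can be sketched with `concurrent.futures.ProcessPoolExecutor`; the worker, grid size and batching below are illustrative and do not reproduce the GeomHash package's actual Data/Result/Job classes:

```python
from concurrent.futures import ProcessPoolExecutor

def hash_cell(point):
    """Worker: map one point to its grid cell -- a stand-in for one unit of
    geometric-hashing work."""
    x, y = point
    cell = 10.0  # illustrative quantization step
    return (int(x // cell), int(y // cell))

def parallel_hash(points, workers=2):
    # chunksize batches points per task so the per-item IPC cost t1 stays
    # small relative to the compute time t2 (profitable when
    # t2 > (p/(p-1)) * t1, as derived in the paper).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hash_cell, points, chunksize=64))

if __name__ == "__main__":
    cells = parallel_hash([(3.0, 25.0), (14.0, 2.0)])
    print(cells)  # [(0, 2), (1, 0)]
```

The inequality makes the intuition precise: with `p` processes, parallelism only pays off when per-item processing time dominates the inter-process transmission cost.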

  20. Cryptographic framework for analyzing the privacy of recommender algorithms

    NARCIS (Netherlands)

    Tang, Qiang

    2012-01-01

    Recommender algorithms are widely used, ranging from traditional Video on Demand to a wide variety of Web 2.0 services. Unfortunately, the related privacy concerns have not received much attention. In this paper, we study the privacy concerns associated with recommender algorithms and present a cryp

  1. Modular Inverse Algorithms Without Multiplications for Cryptographic Applications

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available Hardware and algorithmic optimization techniques are presented for the left-shift, right-shift, and traditional Euclidean modular inverse algorithms. Theoretical arguments and extensive simulations determined the resulting expected running time. On many computational platforms these turn out to be the fastest known algorithms for moderate operand lengths. They are based on variants of Euclidean-type extended GCD algorithms. On the considered computational platforms, for operand lengths used in cryptography, the fastest presented modular inverse algorithms need about twice the time of modular multiplications, or even less. Consequently, in elliptic curve cryptography delaying modular divisions is slower (affine coordinates are the best), and the RSA and ElGamal cryptosystems can be accelerated.

  2. Modular Inverse Algorithms Without Multiplications for Cryptographic Applications

    Directory of Open Access Journals (Sweden)

    Laszlo Hars

    2006-03-01

    Full Text Available Hardware and algorithmic optimization techniques are presented for the left-shift, right-shift, and traditional Euclidean modular inverse algorithms. Theoretical arguments and extensive simulations determined the resulting expected running time. On many computational platforms these turn out to be the fastest known algorithms for moderate operand lengths. They are based on variants of Euclidean-type extended GCD algorithms. On the considered computational platforms, for operand lengths used in cryptography, the fastest presented modular inverse algorithms need about twice the time of modular multiplications, or even less. Consequently, in elliptic curve cryptography delaying modular divisions is slower (affine coordinates are the best), and the RSA and ElGamal cryptosystems can be accelerated.
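
A textbook sketch of the right-shift (binary extended-GCD) family these papers optimize: the inverse is computed using only shifts, additions and subtractions, never a multiplication (this is the generic algorithm, not the papers' optimized variants):

```python
def modinv_rs(a, m):
    """Right-shift binary modular inverse of a mod m, for odd m with
    gcd(a, m) = 1. Invariants: x1 * a ≡ u (mod m) and x2 * a ≡ v (mod m);
    halving and subtracting preserve them until u or v reaches 1."""
    u, v, x1, x2 = a, m, 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:               # halve u; adjust x1 to stay exact
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
        while v % 2 == 0:               # halve v; adjust x2 likewise
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
        if u >= v:                      # subtract smaller from larger
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % m if u == 1 else x2 % m
```

Because the inner loops only shift and add, the cost per step is close to that of a modular addition, which is how such inverses can approach twice the cost of a modular multiplication.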

  3. Efficient Big Integer Multiplication and Squaring Algorithms for Cryptographic Applications

    Directory of Open Access Journals (Sweden)

    Shahram Jahani

    2014-01-01

    Full Text Available Public-key cryptosystems are broadly employed to provide security for digital information. Improving the efficiency of public-key cryptosystem through speeding up calculation and using fewer resources are among the main goals of cryptography research. In this paper, we introduce new symbols extracted from binary representation of integers called Big-ones. We present a modified version of the classical multiplication and squaring algorithms based on the Big-ones to improve the efficiency of big integer multiplication and squaring in number theory based cryptosystems. Compared to the adopted classical and Karatsuba multiplication algorithms for squaring, the proposed squaring algorithm is 2 to 3.7 and 7.9 to 2.5 times faster for squaring 32-bit and 8-Kbit numbers, respectively. The proposed multiplication algorithm is also 2.3 to 3.9 and 7 to 2.4 times faster for multiplying 32-bit and 8-Kbit numbers, respectively. The number theory based cryptosystems, which are operating in the range of 1-Kbit to 4-Kbit integers, are directly benefited from the proposed method since multiplication and squaring are the main operations in most of these systems.
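
For context, the Karatsuba baseline the proposed method is compared against can be sketched in a few lines (the paper's Big-ones recoding itself is not reproduced here):

```python
def karatsuba(x, y):
    """Classical Karatsuba multiplication: three recursive multiplications
    instead of four, by reusing (xh+xl)*(yh+yl) for the cross terms."""
    if x < 10 or y < 10:                       # small operands: direct
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)        # split x = xh*2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b    # = xh*yl + xl*yh
    return (a << (2 * n)) + (c << n) + b
```

Squaring admits further savings because the two cross terms coincide, which is the operation the proposed algorithms target for RSA-range operand sizes.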

  4. A Survey Paper on Deduplication by Using Genetic Algorithm Alongwith Hash-Based Algorithm

    Directory of Open Access Journals (Sweden)

    Miss. J. R. Waykole

    2014-01-01

    Full Text Available In today's world, with the increasing volume of information available in digital libraries, most systems may be affected by the existence of replicas in their warehouses. A clean, replica-free warehouse not only allows the retrieval of higher-quality information but also leads to more concise data and reduces the computational time and resources needed to process it. Here, we propose a genetic programming approach along with hash-based similarity, i.e., with the MD5 and SHA-1 algorithms. This approach removes replica data and finds an optimized solution for the deduplication of records.
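
The hash-based half of such a pipeline can be sketched as fingerprint-and-filter; MD5 is shown, the normalization rule is an illustrative assumption, and the genetic-programming similarity function of the survey is not modelled:

```python
import hashlib

def dedup(records):
    """Hash-based deduplication: records with identical MD5 fingerprints
    after whitespace/case normalization are treated as replicas; the first
    occurrence is kept."""
    seen, unique = set(), []
    for rec in records:
        normalized = " ".join(rec.lower().split())
        fp = hashlib.md5(normalized.encode()).hexdigest()
        if fp not in seen:
            seen.add(fp)
            unique.append(rec)
    return unique
```

Exact fingerprints only catch near-identical records, which is why the surveyed approach layers a learned similarity measure on top for fuzzier replicas.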

  5. An algorithm for the detection of move repetition without the use of hash-keys

    Directory of Open Access Journals (Sweden)

    Vučković Vladan

    2007-01-01

    Full Text Available This paper addresses the theoretical and practical aspects of an important problem in computer chess programming: the detection of draws by position repetition. The standard approach used in the majority of computer chess programs is hash-oriented. This method is sufficient in most cases, as the Zobrist keys are already present due to the systemic positional hashing, so they need not be computed anew for the purpose of draw detection. The new type of algorithm that we have developed solves the problem of draw detection in cases when Zobrist keys are not used in the program, i.e. in cases when the memory is not hashed.

  6. Attacks on hash functions and applications

    NARCIS (Netherlands)

    Stevens, Marc Martinus Jacobus

    2012-01-01

    Cryptographic hash functions compute a small fixed-size hash value for any given message. A main application is in digital signatures which require that it must be hard to find collisions, i.e., two different messages that map to the same hash value. In this thesis we provide an analysis of the secu

  7. A Hashing-Based Search Algorithm for Coding Digital Images by Vector Quantization

    Science.gov (United States)

    Chu, Chen-Chau

    1989-11-01

    This paper describes a fast algorithm to compress digital images by vector quantization. Vector quantization relies heavily on searching to build codebooks and to classify blocks of pixels into code indices. The proposed algorithm uses hashing, localized search, and multi-stage search to accelerate the searching process. The average of pixel values in a block is used as the feature for hashing and intermediate screening. Experimental results using monochrome images are presented. This algorithm compares favorably with other methods with regard to processing time, and has comparable or better mean square error measurements than some of them. The major advantages of the proposed algorithm are its speed, good quality of the reconstructed images, and flexibility.

  8. A motif extraction algorithm based on hashing and modulo-4 arithmetic.

    Science.gov (United States)

    Sheng, Huitao; Mehrotra, Kishan; Mohan, Chilukuri; Raina, Ramesh

    2008-01-01

    We develop an algorithm to identify cis-elements in promoter regions of coregulated genes. This algorithm searches for subsequences of desired length whose frequency of occurrence is relatively high, while accounting for slightly perturbed variants using a hash table and modulo arithmetic. Motifs are evaluated using profile matrices and a higher-order Markov background model. Simulation results show that our algorithm discovers more motifs present in the test sequences when compared with two well-known motif-discovery tools (MDScan and AlignACE). The algorithm produces very promising results on a real data set; the output of the algorithm contained many known motifs.

  9. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    Science.gov (United States)

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string-matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and shows great promise for use in real-world applications.

  10. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    Science.gov (United States)

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
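The caching idea behind the CCS optimization (reuse verification results for path segments shared across updates) can be sketched as a memoized verifier; this is a simplified sketch under assumptions — the class and the toy hash check stand in for real BGPSEC ECDSA verification:

```python
import hashlib

class SegmentVerifier:
    """Memoize per-segment verification results, in the spirit of the
    Cache Common Segments (CCS) optimization. The hash comparison below
    is a stand-in for an expensive ECDSA signature verification."""

    def __init__(self):
        self.cache = {}
        self.crypto_calls = 0

    def _expensive_verify(self, segment, signature):
        self.crypto_calls += 1  # count the costly cryptographic operations
        return hashlib.sha256(segment).hexdigest() == signature

    def verify(self, segment, signature):
        key = (segment, signature)
        if key not in self.cache:
            self.cache[key] = self._expensive_verify(segment, signature)
        return self.cache[key]

v = SegmentVerifier()
seg = b"AS65001->AS65002"
sig = hashlib.sha256(seg).hexdigest()
assert v.verify(seg, sig) and v.verify(seg, sig)
assert v.crypto_calls == 1   # the repeated verification is served from the cache
```

Shared segments across many BGP updates are then verified once, which is the source of the convergence-time savings the study measures.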

  11. Implementation of Central Dogma Based Cryptographic Algorithm in Data Warehouse Architecture for Performance Enhancement

    Directory of Open Access Journals (Sweden)

    Rajdeep Chowdhury

    2015-11-01

    A data warehouse is a set of integrated databases designed to support decision-making and problem solving, holding highly condensed data, and it has become an increasingly popular topic for contemporary researchers given current trends in industry and management. The crux of the proposed work is an enhanced and novel model intended to strengthen security measures, which have at times been found wanting, while also ensuring improved accessibility through a hashing technique. A new algorithm was developed from the concept of protein synthesis, studied in genetics and biotechnology, which comprises three steps: DNA replication, transcription and translation. The proposed algorithm adopts the latter two steps, transcription and translation, for efficient encryption and decryption of data. The Central Dogma Model is the model that describes protein synthesis using the codons that compose RNA and DNA and that are implicated in numerous biochemical processes in living organisms. A dual layer of encryption and decryption is thereby employed for optimal security. The formulated hashing technique ensures a considerable reduction in access time while retrieving all necessary data from the data vaults. The proposed model may be of significant service in a variety of organizations where the accumulation of protected data is of extreme importance, including educational organizations, corporate houses, medical establishments, private establishments and so on.

  12. Proposals for iterated hash functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2006-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...
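The "typical method of constructing hash functions" that these generic attacks target is the Merkle-Damgård iteration of a compression function. A minimal sketch, assuming a toy compression function built from SHA-256 (the paper's own construction differs, as the abstract notes):

```python
import hashlib

def compress(state: bytes, block: bytes) -> bytes:
    """Toy compression function (32-byte state, 32-byte blocks).
    Stands in for the secure compression function the paper assumes."""
    return hashlib.sha256(state + block).digest()

def md_hash(message: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    """Plain Merkle-Damgard iteration with length padding (MD strengthening)."""
    data = message + b"\x80"
    data += b"\x00" * (-(len(data) + 8) % 32)       # pad to a block boundary
    data += len(message).to_bytes(8, "big")         # append the message length
    state = iv
    for i in range(0, len(data), 32):
        state = compress(state, data[i:i + 32])
    return state

assert md_hash(b"abc") == md_hash(b"abc")
assert md_hash(b"abc") != md_hash(b"abd")
assert len(md_hash(b"x" * 100)) == 32
```

Generic attacks such as multicollision and length-extension attacks exploit exactly this iterated chaining structure, which is why the paper proposes modifications to it.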

  14. Proposals for Iterated Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...

  16. Density Sensitive Hashing

    CERN Document Server

    Lin, Yue; Li, Cheng

    2012-01-01

    Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), have proved effective for scalable high dimensional nearest neighbor search. Many hashing algorithms found their theoretic root in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called Density Sensitive Hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids the purely random projections selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better ...
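The random-projection hashing that DSH extends can be sketched as classic sign-of-projection LSH; a minimal sketch with assumed function names, where near neighbors receive codes with small Hamming distance:

```python
import random

def make_projections(dim, n_bits, seed=0):
    """Draw random hyperplanes; each contributes one bit of the hash code."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_code(vec, planes):
    """The sign of each random projection gives one bit (classic LSH)."""
    bits = 0
    for plane in planes:
        dot = sum(p * v for p, v in zip(plane, vec))
        bits = (bits << 1) | (dot >= 0)
    return bits

planes = make_projections(dim=3, n_bits=16)
a = lsh_code([1.0, 0.9, 1.1], planes)
b = lsh_code([1.0, 1.0, 1.0], planes)     # near neighbor of a
c = lsh_code([-1.0, -1.0, -1.0], planes)  # far from a

hamming = lambda x, y: bin(x ^ y).count("1")
assert hamming(a, b) < hamming(a, c)      # codes preserve relative proximity
```

DSH's point is that data-aware projections can replace these purely random planes, so fewer bits are needed for the same precision and recall.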

  17. AMJoin: An Advanced Join Algorithm for Multiple Data Streams Using a Bit-Vector Hash Table

    Science.gov (United States)

    Kwon, Tae-Hyung; Kim, Hyeon-Gyu; Kim, Myoung-Ho; Son, Jin-Hyun

    A multiple stream join is one of the most important but high cost operations in ubiquitous streaming services. In this paper, we propose a newly improved and practical algorithm for joining multiple streams called AMJoin, which improves the multiple join performance by guaranteeing the detection of join failures in constant time. To achieve this goal, we first design a new data structure called BiHT (Bit-vector Hash Table) and present the overall behavior of AMJoin in detail. In addition, we show various experimental results and their analyses for clarifying its efficiency and practicability.

  18. A Fast Attack Algorithm on the MD5 Hash Function

    Institute of Scientific and Technical Information of China (English)

    WANG Zhang-yi; ZHANG Huan-guo; QIN Zhong-ping; MENG Qing-shu

    2006-01-01

    The sufficient conditions for maintaining the desired differential path of MD5 are discussed. By analyzing the expansion of subtraction differences, the differential characteristics of Boolean functions, and the differential characteristics of shift rotation, the sufficient conditions for maintaining the desired differential path can be obtained; from the differential characteristics of shift rotation, the missing sufficient conditions were found. An algorithm that reduces the number of trials for finding collisions is then presented. By restricting the search space, the search operation can be reduced to 2^34 for the first block and 2^30 for the second block. The whole attack on MD5 can be accomplished within 20 hours using a PC with a 1.6 GHz CPU.

  19. CRYPTOGRAPHIC STEGANOGRAPHY

    Directory of Open Access Journals (Sweden)

    Vikas Yadav

    2014-08-01

    In the cryptographic steganography system, the message is first converted into unreadable cipher text, and this cipher is then embedded into an image file. Such a system therefore provides more security by achieving both data encoding and data hiding. In this paper we propose an advanced steganocryptic system that combines the features of cryptography and steganography. In the proposed steganocryptic system we encrypt the message into cipher1 using Kunal Secure Astro-Encryption, and then encrypt this cipher into cipher2 using a grid cipher technique. An advantage of Kunal Secure Astro-Encryption is that it generates random useless points in between, so fixed-size messages can be generated, providing more security than other cryptographic algorithms: the number of characters in the original message cannot be recovered from the encrypted message without knowing the black holes. We then embed cipher2 into an image file using visual steganography, applying a modified bit-insertion technique. The proposed system is more secure than cryptography or steganography techniques (digital steganography) alone, and also more secure than combined steganography-and-cryptography systems.

  20. Research of RFID Certification Security Protocol based on Hash Function and DES Algorithm

    Directory of Open Access Journals (Sweden)

    bin Xu

    2013-10-01

    RFID has received more and more attention and application, but its security and privacy problems remain a concern. Based on an analysis of the certification processes of several typical existing RFID authentication protocols, an improved bidirectional authentication algorithm is proposed. The use of a one-way hash function can solve the security problems of RFID. The protocol offers anti-replay, impedance-analysis, anti-forgery and anti-tracking properties, and is suitable for distributed systems. With the development of computers and the Internet, now widely used across industries, information is exchanged in high-speed transfer processes, and information security is a concern. The paper first surveys algorithms based on hash functions; then, serving as a solid safety lock on information, MD5 and SHA-1 file verification, encryption, digital signatures and PKI provide security for all kinds of information. Finally, attacks can be effectively prevented, ensuring the authenticity of information and that it is not modified or leaked.
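A hash-based challenge-response exchange of the kind these RFID protocols build on can be sketched as follows; the function names and message layout are assumptions for illustration, not the protocol in the record:

```python
import hashlib, secrets

def h(*parts: bytes) -> bytes:
    """One-way hash shared by reader and tag."""
    digest = hashlib.sha256()
    for p in parts:
        digest.update(p)
    return digest.digest()

SHARED_KEY = b"tag-secret-key"   # provisioned on the tag and the back-end

def tag_response(reader_nonce: bytes, tag_nonce: bytes) -> bytes:
    """Tag proves knowledge of the key without revealing it."""
    return h(SHARED_KEY, reader_nonce, tag_nonce)

def reader_verify(reader_nonce, tag_nonce, response) -> bool:
    return h(SHARED_KEY, reader_nonce, tag_nonce) == response

r_nonce, t_nonce = secrets.token_bytes(16), secrets.token_bytes(16)
resp = tag_response(r_nonce, t_nonce)
assert reader_verify(r_nonce, t_nonce, resp)
# Replaying the response under a fresh reader nonce fails (anti-replay).
assert not reader_verify(secrets.token_bytes(16), t_nonce, resp)
```

Fresh nonces on both sides are what give the anti-replay and anti-tracking properties the abstract claims.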

  1. Design and Implementation of Hash Interface Based on COS

    Institute of Scientific and Technical Information of China (English)

    郑斌; 李峥; 王瑞蛟

    2011-01-01

    To solve the problem of poor extensibility of hash algorithms in a Chip Operating System (COS), a flexible hash interface is designed. The interface, which follows an object-oriented approach, consists of a hash algorithm interface and a hash algorithm setting interface. The hash algorithm interface is instantiated by the hash algorithm setting interface, which is stored in EEPROM, giving it the capability to provide cryptographic services. Experimental results show that the hash interface has good extensibility for adding other algorithms, achieving its design goal.

  2. Density sensitive hashing.

    Science.gov (United States)

    Jin, Zhongming; Li, Cheng; Lin, Yue; Cai, Deng

    2014-08-01

    Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example, locality sensitive hashing (LSH), are proved to be effective for scalable high dimensional nearest neighbor search. Many hashing algorithms found their theoretic root in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids the purely random projections selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better performance compared to the state-of-the-art hashing approaches.

  3. Cryptographic Boolean functions and applications

    CERN Document Server

    Cusick, Thomas W

    2009-01-01

    Boolean functions are the building blocks of symmetric cryptographic systems. Symmetrical cryptographic algorithms are fundamental tools in the design of all types of digital security systems (i.e. communications, financial and e-commerce).Cryptographic Boolean Functions and Applications is a concise reference that shows how Boolean functions are used in cryptography. Currently, practitioners who need to apply Boolean functions in the design of cryptographic algorithms and protocols need to patch together needed information from a variety of resources (books, journal articles and other sources). This book compiles the key essential information in one easy to use, step-by-step reference. Beginning with the basics of the necessary theory the book goes on to examine more technical topics, some of which are at the frontier of current research.-Serves as a complete resource for the successful design or implementation of cryptographic algorithms or protocols using Boolean functions -Provides engineers and scient...

  4. Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Shugo Mikami

    2015-01-01

    Passive radio-frequency identification (RFID) tags are used in many applications. While the RFID market is expected to grow, concerns about the security and privacy of RFID tags must be overcome for future use. To address these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of a whole tag that runs authentication protocols, including an antenna, an analog front end, and a digital processing block, has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength. We show that when the lightweight hash function is used, the tag completes the protocol at a reader-tag distance of 10 cm; when the standard hash function is used, the tag completes the protocol at a distance of 8.5 cm. We discuss the impact of the tag's peak power consumption on this distance for each hash function.

  5. Image hash algorithm based on chaos theory

    Institute of Scientific and Technical Information of China (English)

    肖潇; 胡春强; 邓绍江

    2011-01-01

    To meet the needs of image authentication, an image hash algorithm based on chaos theory is proposed. First, the original image is encrypted using the Logistic map. The difference matrix is then modulated and quantified to obtain a fixed-length hash sequence. The influence of image scaling and JPEG compression on the robustness of the hash sequence is discussed; with a threshold of 0.1, the scheme is robust against these attacks. The experimental results indicate that the algorithm is robust against both attacks, making it an effective method for image authentication.

  6. Collision-resistant hash function based on composition of functions

    CERN Document Server

    Ndoundam, Rene

    2011-01-01

    A cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed-size bit string. There exist many hash functions: MD5, HAVAL, SHA, ... It has been reported that these hash functions are no longer secure. Our work is focused on the construction of a new hash function based on composition of functions. The construction uses the NP-completeness of three-dimensional contingency tables and relaxes the constraint that a hash function should also be a compression function.

  7. Final report for LDRD Project 93633 : new hash function for data protection.

    Energy Technology Data Exchange (ETDEWEB)

    Draelos, Timothy John; Dautenhahn, Nathan; Schroeppel, Richard Crabtree; Tolk, Keith Michael; Orman, Hilarie (PurpleStreak, Inc.); Walker, Andrea Mae; Malone, Sean; Lee, Eric; Neumann, William Douglas; Cordwell, William R.; Torgerson, Mark Dolan; Anderson, Eric; Lanzone, Andrew J.; Collins, Michael Joseph; McDonald, Timothy Scott; Caskey, Susan Adele

    2009-03-01

    The security of the widely-used cryptographic hash function SHA1 has been impugned. We have developed two replacement hash functions. The first, SHA1X, is a drop-in replacement for SHA1. The second, SANDstorm, has been submitted as a candidate to the NIST-sponsored SHA3 Hash Function competition.

  8. HCAA: A Dynamic Algorithm for Resolving Excessive Hash Collisions

    Institute of Scientific and Technical Information of China (English)

    谢云; 柳厅文; 乔登科; 孙永; 刘金刚

    2011-01-01

    As a data structure for rapid lookup, the hash table is widely used in network security applications such as firewalls. However, attackers may launch hash attacks against these applications to make them stop responding, so that malicious data flows can escape the management and control of the network security applications. This paper introduces a dynamic algorithm for resolving excessive hash collisions, named HCAA (Hash Collision-Acceptable Algorithm). When hash collisions become too concentrated, the algorithm handles colliding data flows by dynamically allocating additional hash tables with different hash functions, confining collisions within an acceptable scope. Experimental results show that, compared with existing methods, HCAA achieves a more balanced hash distribution while using fewer hash table entries, enabling faster hash operations on data flows.
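The spill-to-a-new-table idea can be sketched as below; the class name, sizes, and threshold are assumptions for illustration, not HCAA's actual parameters:

```python
class CollisionBoundedTable:
    """Sketch of the HCAA idea: when a bucket's chain grows past a
    threshold, spill colliding keys into a freshly allocated table that
    uses a different hash function (details here are assumptions)."""

    def __init__(self, size=64, threshold=4):
        self.size, self.threshold = size, threshold
        self.tables = [{}]   # each level maps bucket index -> list of keys
        self.hashes = [lambda k, s=0: hash((k, s)) % size]

    def _grow(self):
        salt = len(self.tables)   # a new salt acts as a new hash function
        self.tables.append({})
        self.hashes.append(lambda k, s=salt: hash((k, s)) % self.size)

    def insert(self, key):
        for table, hfn in zip(self.tables, self.hashes):
            bucket = table.setdefault(hfn(key), [])
            if len(bucket) < self.threshold:
                bucket.append(key)
                return
        self._grow()              # every level is congested at this key
        self.tables[-1][self.hashes[-1](key)] = [key]

    def lookup(self, key):
        return any(key in t.get(hfn(key), [])
                   for t, hfn in zip(self.tables, self.hashes))

t = CollisionBoundedTable(size=8, threshold=2)
for k in range(50):
    t.insert(k)
assert all(t.lookup(k) for k in range(50))
# No chain ever exceeds the threshold, so worst-case lookup stays bounded.
assert all(len(b) <= 2 for table in t.tables for b in table.values())
```

Bounding every chain is what defends against attacker-chosen keys that would otherwise degenerate one bucket into a long list.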

  9. The Grindahl Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen

    2007-01-01

    In this paper we propose the Grindahl hash functions, which are based on components of the Rijndael algorithm. To make collision search sufficiently difficult, this design has the important feature that no low-weight characteristics form collisions, and at the same time it limits access to the state. We propose two concrete hash functions, Grindahl-256 and Grindahl-512, with claimed security levels with respect to collision, preimage and second preimage attacks of 2^128 and 2^256, respectively. Both proposals have lower memory requirements than other hash functions at comparable speeds...

  10. Implementation of Hash Algorithm in Network Processor

    Institute of Scientific and Technical Information of China (English)

    付仲满; 张辉; 李苗; 刘涛

    2014-01-01

    A novel hash algorithm for network processor applications is proposed. It effectively resolves hash collisions by constructing a new lookup-table structure and a two-level hash function. The software construction flow and hardware lookup process of the hash table are described; building on hash lookup, a hardware entry-learning process and an ageing mechanism are designed to simplify entry-update operations. For different applications, different types of hash tables are built, making reasonable use of internal and external memory resources and balancing memory usage against processing speed. Simulation results show that the algorithm is compatible with different numbers of table entries and keyword lengths in various lookup tables; the average length of a successful lookup is 2, the number of memory accesses is reduced, and the lookup speed of a single micro-engine reaches 25 Mb/s, meeting the 20 Gb/s interface-processing bandwidth requirement of the network processor.

  11. Sparse Hashing Tracking.

    Science.gov (United States)

    Zhang, Lihe; Lu, Huchuan; Du, Dandan; Liu, Luning

    2016-02-01

    In this paper, we propose a novel tracking framework based on a sparse and discriminative hashing method. Different from the previous work, we treat object tracking as an approximate nearest neighbor searching process in a binary space. Using the hash functions, the target templates and the candidates can be projected into the Hamming space, facilitating the distance calculation and tracking efficiency. First, we integrate both the inter-class and intra-class information to train multiple hash functions for better classification, while most classifiers in previous tracking methods usually neglect the inter-class correlation, which may cause the inaccuracy. Then, we introduce sparsity into the hash coefficient vectors for dynamic feature selection, which is crucial to select the discriminative and stable features to adapt to visual variations during the tracking process. Extensive experiments on various challenging sequences show that the proposed algorithm performs favorably against the state-of-the-art methods.

  12. A new multivariate Hash algorithm based on improved Merkle-Damgård construction

    Institute of Scientific and Technical Information of China (English)

    王尚平; 任姣霞; 张亚玲; 韩照国

    2011-01-01

    As traditional hash algorithms have some security defects, a new hash algorithm is proposed. Its security is based on the difficulty of solving large systems of quadratic multivariate polynomial equations over a finite field. An improved Merkle-Damgård construction is proposed, adopting the idea of NMAC (nested MAC); a counter is also added to the construction to resist some attacks on the Merkle-Damgård construction. The output size of the new hash algorithm is adjustable, providing different levels of security. The new hash algorithm is secure against common attacks and exhibits a satisfactory avalanche effect. It also has advantages in memory requirements and running speed over previous multivariate hash algorithms.

  13. Fast and Efficient Design of a PCA-Based Hash Function

    Directory of Open Access Journals (Sweden)

    Alaa Eddine Belfedhal

    2015-05-01

    We propose a simple and efficient hash function based on programmable elementary cellular automata. Cryptographic hash functions are important building blocks for many cryptographic protocols such as authentication and integrity verification. They have recently attracted exceptional research interest, especially after the increasing number of attacks against the widely used functions MD5, SHA-1 and RIPEMD, creating a crucial need to consider new hash function design and conception strategies. The proposed hash function is built using elementary cellular automata, which are very suitable for cryptographic applications due to the chaotic and complex behavior derived from the interaction of simple rules. The function is evaluated using several statistical tests, and the obtained results demonstrate very admissible cryptographic properties such as confusion, diffusion capability and high sensitivity to input changes. Furthermore, the hashing scheme can be easily implemented in software or hardware, and provides very competitive running performance.
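The elementary-CA mixing idea can be sketched with rule 30, a classic chaotic rule; this toy absorb-and-iterate scheme is an assumption for illustration, not the paper's programmable-rule design:

```python
RULE_30 = 30  # chaotic elementary CA rule, often cited in CA-based designs

def ca_step(state_bits, rule=RULE_30):
    """One synchronous update of an elementary CA on a ring."""
    n = len(state_bits)
    return [
        (rule >> (state_bits[(i - 1) % n] * 4
                  + state_bits[i] * 2
                  + state_bits[(i + 1) % n])) & 1
        for i in range(n)
    ]

def ca_hash(message: bytes, rounds=64, width=64):
    """Toy CA-based hash: XOR message bits into the state, stepping the
    CA after each byte, then iterate further to diffuse."""
    state = [0] * width
    for i, byte in enumerate(message):
        for b in range(8):
            state[(i * 8 + b) % width] ^= (byte >> b) & 1
        state = ca_step(state)
    for _ in range(rounds):
        state = ca_step(state)
    return sum(bit << i for i, bit in enumerate(state))

assert ca_hash(b"hello") == ca_hash(b"hello")
assert ca_hash(b"hello") != ca_hash(b"hellp")   # a small input change diffuses
```

The chaotic rule spreads each flipped input bit across the whole state, which is the source of the diffusion and input-sensitivity properties the abstract measures.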

  14. Cryptanalysis of the LAKE Hash Family

    DEFF Research Database (Denmark)

    Biryukov, Alex; Gauravaram, Praveen; Guo, Jian

    2009-01-01

    We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attack...

  15. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Maintaining the privacy and security of personal information are two of the most important principles of an electronic health plan. One method of providing privacy and security of information is a public-key cryptography system. In this paper, we compare two algorithms, the Common Exponentiation and Fast Exponentiation algorithms, for enhancing the efficiency of public-key cryptography. We show that a system designed with the Fast Exponentiation algorithm has high speed and performance with low power consumption and occupied area compared with the Common Exponentiation algorithm. Although systems designed with the Common Exponentiation algorithm have lower speed and performance, designing with this algorithm is less complex and easier than with the Fast Exponentiation algorithm. We examine and compare these two exponentiation methods and observe the performance impact of the two approaches in hardware, using VHDL on an FPGA.

  16. Teaching design discussion of the Trivium cryptographic algorithm

    Institute of Scientific and Technical Information of China (English)

    韦永壮; 张润莲

    2012-01-01

    The Trivium cipher is one of the algorithms ultimately selected by the European stream cipher project (eSTREAM). Because of its simple and elegant structure, fast software and hardware implementation, and good security, it has received wide attention in academia and industry. Considering the practical problems in teaching the Trivium cryptographic algorithm and the characteristics of senior students in information majors, this paper analyzes the structure of the Trivium algorithm and proposes a teaching design from the perspective of Boolean functions, covering component analysis, the encryption and decryption process, performance, and security, providing valuable teaching ideas for peers.

  17. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke;

    2014-01-01

    The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works... can be easily made cache-oblivious, but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547-558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two...

  18. Algorithm for finding partitionings of hard variants of boolean satisfiability problem with application to inversion of some cryptographic functions.

    Science.gov (United States)

    Semenov, Alexander; Zaikin, Oleg

    2016-01-01

    In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used to solve the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of a partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space. The estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The search for an effective partitioning can therefore be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with the estimates obtained by the proposed method.
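
The metaheuristic search described above (moving between points of the search space guided by a predictive function) can be sketched as follows. This is only an illustration: `predictive_value` is a hypothetical stand-in for the paper's Monte Carlo estimate of total solving time, and all names are assumptions, not the authors' implementation.

```python
import math
import random

# Hypothetical stand-in for the predictive function: in the paper this
# would be a Monte Carlo estimate of the total time needed to solve all
# simplified SAT instances in the partitioning induced by `subset`.
def predictive_value(subset, target=frozenset({1, 3, 5})):
    return len(subset ^ target)  # 0 is best: subset matches the target

def simulated_annealing(variables, steps=20000, t0=5.0, seed=0):
    """Minimize predictive_value over subsets of `variables`."""
    rng = random.Random(seed)
    current = frozenset(rng.sample(variables, 3))
    best = current
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        v = rng.choice(variables)
        neighbour = current ^ {v}  # flip one variable in/out of the subset
        delta = predictive_value(neighbour) - predictive_value(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = neighbour
        if predictive_value(current) < predictive_value(best):
            best = current
    return best
```

With the toy objective above, the annealer reliably reaches the minimizing subset; in the real setting each evaluation of the predictive function is itself an expensive Monte Carlo experiment.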

  19. Model-based vision using geometric hashing

    Science.gov (United States)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.
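
As a toy illustration of the generic geometric hashing technique (not of the SAR-specific extension in this record), the sketch below stores a 2D model in a hash table keyed by point coordinates expressed in a two-point basis, which makes lookups invariant under translation, rotation, and scale; all names are illustrative.

```python
from collections import defaultdict

def _invariant_coords(points, b0, b1):
    """Express `points` in the frame where b0 = (0,0) and b1 = (1,0)."""
    (x0, y0), (x1, y1) = b0, b1
    dx, dy = x1 - x0, y1 - y0
    norm2 = dx * dx + dy * dy
    out = []
    for (x, y) in points:
        px, py = x - x0, y - y0
        u = (px * dx + py * dy) / norm2
        v = (-px * dy + py * dx) / norm2
        out.append((round(u, 3), round(v, 3)))  # quantize for hashing
    return out

def build_table(model_points):
    """Hash every model point under every ordered basis pair."""
    table = defaultdict(set)
    for i, b0 in enumerate(model_points):
        for j, b1 in enumerate(model_points):
            if i == j:
                continue
            for c in _invariant_coords(model_points, b0, b1):
                table[c].add((i, j))
    return table

def match(table, scene_points, b0, b1):
    """Vote for model bases consistent with the chosen scene basis."""
    votes = defaultdict(int)
    for c in _invariant_coords(scene_points, b0, b1):
        for basis in table.get(c, ()):
            votes[basis] += 1
    return max(votes.values()) if votes else 0
```

A full matcher would try scene basis pairs until some model basis collects enough votes, which is what lets the method tolerate partial occlusion.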

  20. Implementation analysis of RC5 algorithm on Preneel-Govaerts-Vandewalle (PGV) hashing schemes using length extension attack

    Science.gov (United States)

    Siswantyo, Sepha; Susanti, Bety Hayat

    2016-02-01

    Preneel-Govaerts-Vandewalle (PGV) schemes comprise 64 possible single-block-length constructions for building a hash function from a block cipher. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack to those 4 secure PGV schemes, instantiated with the RC5 algorithm as the underlying block cipher, to test their collision resistance. The attack results show that collisions occur in all 4 schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operation in the RC5 algorithm, the XOR operations in the schemes, and the selection of the additional message block value all contribute to the occurrence of collisions.
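
A length extension attack exploits the iterated (Merkle-Damgård) structure of such hashes: given only H(m) and the length of m, an attacker can compute the hash of m plus padding plus a chosen suffix without ever knowing m. The sketch below demonstrates the principle with a deliberately toy compression function standing in for the paper's RC5-based PGV compression.

```python
# Toy Merkle-Damgard hash over 4-byte blocks. The compression function
# is an insecure illustrative stand-in, NOT the RC5-based PGV scheme.
BLOCK = 4

def compress(state, block):
    x = state
    for b in block:
        x = ((x * 31) ^ b) & 0xFFFFFFFF
    return x

def pad(msg_len):
    """Classic padding sketch: 0x80 then zeros to a block boundary."""
    p = b"\x80"
    while (msg_len + len(p)) % BLOCK:
        p += b"\x00"
    return p

def md_hash(msg, iv=0x12345678):
    data = msg + pad(len(msg))
    state = iv
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

def extend(known_digest, known_len, suffix):
    """Resume hashing from a published digest, without the message."""
    padded_len = known_len + len(pad(known_len))
    data = suffix + pad(padded_len + len(suffix))
    state = known_digest
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state
```

The forged digest equals a direct hash of `secret || pad || suffix`, which is exactly the property the length extension attack abuses.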

  1. Uniform Hashing in Constant Time and Optimal Space

    DEFF Research Database (Denmark)

    Pagh, Anna Östlin; Pagh, Rasmus

    2008-01-01

    Many algorithms and data structures employing hashing have been analyzed under the uniform hashing assumption, i.e., the assumption that hash functions behave like truly random functions. Starting with the discovery of universal hash functions, many researchers have studied to what extent this th...

  2. A Quick Algorithm for Value Reduction Based on Hash Algorithm

    Institute of Scientific and Technical Information of China (English)

    张清华; 幸禹可

    2011-01-01

    A new quick value reduction method is proposed based on rough set theory, decision tree theory, and granular computing, exploiting the speed and efficiency of hashing. When processing an information system, the method partitions the objects into equivalence classes quickly and computes the positive region. In the attribute reduction and value reduction performed for each attribute under rough set theory, the data-compressing property of hashing enables fast and efficient rule extraction. Analysis and simulation results show that, compared with traditional value reduction algorithms, the proposed algorithm has lower time complexity.
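
The hash-based step described here (rapid partitioning into equivalence classes and computation of the positive region) can be sketched as follows; the decision-table layout and function names are illustrative, not the paper's implementation.

```python
from collections import defaultdict

def positive_region(rows, condition_idx, decision_idx):
    """Group rows into equivalence classes by hashing their condition
    attributes; the positive region is the union of classes whose
    decision value is unanimous (i.e., consistent classes)."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[i] for i in condition_idx)  # hashed once per row
        classes[key].append(row)
    positive = []
    for members in classes.values():
        decisions = {r[decision_idx] for r in members}
        if len(decisions) == 1:  # consistent class -> positive region
            positive.extend(members)
    return positive
```

Because each row is hashed once, the partition and the positive region are obtained in a single linear pass rather than by pairwise comparison.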

  3. Pre-mRNA Introns as a Model for a Cryptographic Algorithm: Theory and Experiments

    Science.gov (United States)

    Regoli, Massimo

    2010-01-01

    The RNA-Crypto System (RCS for short) is a symmetric key algorithm for enciphering data. The idea for this new algorithm comes from the observation of nature, in particular of RNA behavior and some of its properties. RNA sequences contain sections called introns. Introns, a term derived from "intragenic regions", are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs that are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in pre-mRNA are not yet clear and are under intensive research by biologists; in our case, we use the presence of introns in the RNA-Crypto System output as a strong method to add chaotic non-coding information and decoy behaviour in the access to the secret key used to encode the messages. In the RNA-Crypto System algorithm, the introns are sections of the ciphered message carrying non-coding information, just as in precursor mRNA.

  4. Secondary Hash + Binary-Search Maximal Match Algorithm for Chinese Word Segmentation

    Institute of Scientific and Technical Information of China (English)

    杨安生

    2009-01-01

    Based on an analysis of existing word segmentation algorithms, especially fast ones, a new structure for the segmentation dictionary is proposed, and on this basis a fast segmentation algorithm combining a secondary hash with binary-search maximal matching is presented. The algorithm achieves a high segmentation speed.

  5. The Grindahl Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen

    2007-01-01

    In this paper we propose the Grindahl hash functions, which are based on components of the Rijndael algorithm. To make collision search sufficiently difficult, this design has the important feature that no low-weight characteristics form collisions, and at the same time it limits access to the st...

  6. Comparison and Analysis of Hash Algorithms for Multi-Process Load Balancing

    Institute of Scientific and Technical Information of China (English)

    张莹; 吴和生

    2014-01-01

    Hash algorithms play a key role in high-performance multi-process load balancing, but existing research on hash algorithms for multi-process load balancing concentrates on algorithm design and domain applications; little work analyzes and compares the performance of the existing algorithms. This paper therefore summarizes the features a hash algorithm for multi-process load balancing should have, screens out five mainstream hash algorithms suitable for this setting accordingly, and subjects them to theoretical analysis and experimental evaluation in terms of allocation balance and time consumption, providing a basis for selecting and using hash algorithms in multi-process load balancing. The results show that the Toeplitz Hash algorithm is best suited for multi-process load balancing.
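
For reference, the Toeplitz hash singled out here (as used, for example, in NIC receive-side scaling) computes the hash as an XOR of 32-bit windows of a secret key, one window per set bit of the input; a minimal sketch, with illustrative parameter choices:

```python
def toeplitz_hash(data: bytes, key: bytes) -> int:
    """Toeplitz hash sketch: the key is treated as a long bit string,
    a 32-bit window slides one bit per input bit, and the window is
    XORed into the result whenever the input bit is 1.
    Requires len(key) * 8 >= 32 + len(data) * 8 - 1."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i in range(len(data) * 8):
        bit = (data[i // 8] >> (7 - i % 8)) & 1  # MSB-first input bits
        if bit:
            window = (key_int >> (key_bits - 32 - i)) & 0xFFFFFFFF
            result ^= window
    return result
```

A useful property to verify is GF(2)-linearity: the hash of an XOR of two equal-length inputs equals the XOR of their hashes, which is what makes the function's distribution easy to reason about.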

  7. Prevention of Co-operative Black Hole attack in Manet on DSR protocol using Cryptographic Algorithm

    Directory of Open Access Journals (Sweden)

    G.Vennila

    2014-10-01

    Full Text Available The mobile ad hoc network (MANET) is a collection of wireless mobile nodes in which each node can communicate with the others without use of a predefined infrastructure. Many efficient routing protocols have been proposed for MANET, but all of them assume a trusted and cooperative environment. In the presence of malicious nodes, however, the network is vulnerable to various kinds of routing attacks. The black hole attack is one such network layer attack: a malicious node exploits the routing protocol to advertise itself as having the shortest path to the destination and then drops the original routing packets it attracts. In our work, the proposed cryptographic algorithm is used to secure the DSR protocol, which helps to preserve the performance of the mobile ad hoc network under attack. Several prevention mechanisms exist to eliminate the black hole attack in MANET. The aim of this paper is to provide better prevention of the cooperative black hole attack in MANET and to show how the attack affects performance metrics, namely the throughput and delay of the network, by comparing network performance with and without black hole nodes.

  8. Perceptual Hashing Algorithm for Multi-Format Audio

    Institute of Scientific and Technical Information of China (English)

    张秋余; 省鹏飞; 黄羿博; 董瑞洪; 杨仲平

    2016-01-01

    A novel multi-format audio perceptual hashing algorithm based on the dual-tree complex wavelet transform (DT-CWT) is proposed. It solves the problems of existing audio authentication algorithms: they handle only a single audio format, are not generic, and have low efficiency. The proposed algorithm first applies the global DT-CWT to the pre-processed audio signal to obtain the real and complex wavelet coefficients, which are then partitioned into the same number of frames. For the real wavelet coefficients, the modulus of the Teager energy operator of each frame is computed as the inter-frame feature; each frame is then sub-framed, and the short-time energy of each sub-frame serves as the intra-frame feature. For the complex wavelet coefficients, the entropy of each frame serves as the inter-frame feature. Finally, hash construction is performed on these features to generate the perceptual hash sequence. Experiments show that the algorithm is strongly robust for five different audio formats, offers good discrimination and high efficiency, and supports small-range tamper detection.

  9. Hashing on nonlinear manifolds.

    Science.gov (United States)

    Shen, Fumin; Shen, Chunhua; Shi, Qinfeng; van den Hengel, Anton; Tang, Zhenmin; Shen, Heng Tao

    2015-06-01

    Learning-based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes preserving the Euclidean similarity in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexities of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this paper, how to learn compact binary embeddings on their intrinsic manifolds is considered. In order to address the above-mentioned difficulties, an efficient, inductive solution to the out-of-sample data problem, and a process by which nonparametric manifold learning may be used as the basis of a hashing method are proposed. The proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. It is particularly shown that hashing on the basis of t-distributed stochastic neighbor embedding outperforms state-of-the-art hashing methods on large-scale benchmark data sets, and is very effective for image classification with very short code lengths. It is shown that the proposed framework can be further improved, for example, by minimizing the quantization error with learned orthogonal rotations without much computation overhead. In addition, a supervised inductive manifold hashing framework is developed by incorporating the label information, which is shown to greatly advance the semantic retrieval performance.

  10. DEVELOPMENT AND IMPLEMENTATION OF HASH FUNCTION FOR GENERATING HASHED MESSAGE

    Directory of Open Access Journals (Sweden)

    Amir Ghaeedi

    2016-09-01

    Full Text Available Steganography is a method of sending confidential information in such a way that the existence of the communication channel remains secret. A collaborative approach between steganography and digital signatures provides highly secure hidden data. Unfortunately, a wide variety of attacks affect the quality of image steganography. Two issues that need to be addressed are the large size of the ciphered data in the digital signature and the high bandwidth. The aim of this research is to propose a new method for producing a dynamic hashed message algorithm in the digital signature, which is then embedded into an image, enhancing the robustness of image steganography with reduced bandwidth. A digital signature with a smaller hash size than other hash algorithms was developed for authentication purposes. A hash function is used in digital signature generation. The encoder function encodes the hashed message to generate the digital signature, which is then embedded into an image as a stego-image. To enhance the robustness of the digital signature, we compressed it, encoded it, or performed both operations before embedding the data into the image. The encryption algorithm is also computationally efficient: for messages smaller than 1600 bytes, the hashed file reduces the original file by up to 8.51%.

  11. Hash Dijkstra Algorithm for Approximate Minimal Spanning Tree

    Institute of Scientific and Technical Information of China (English)

    李玉鑑; 李厚君

    2011-01-01

    In order to overcome the low efficiency of the Dijkstra (DK) algorithm in constructing minimal spanning trees (MST) for large-scale datasets, this paper uses locality-sensitive hashing (LSH) to design a fast approximate algorithm, the LSHDK algorithm, for building an MST in Euclidean space. The LSHDK algorithm achieves a faster speed with small error by reducing the computation required to search for nearest points. Computational experiments show that it runs faster than the DK algorithm on datasets of more than 50,000 points, while the resulting approximate MST has a very small error (0.00-0.05%) in low dimensions and an error generally between 0.1% and 3.0% in high dimensions.
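
The LSH ingredient itself can be illustrated with the classic random-hyperplane family for cosine similarity; this is a generic sketch under assumed parameters, not the specific hash family used by LSHDK.

```python
import random

def make_lsh(dim, n_bits=16, seed=1):
    """Random-hyperplane LSH: each bit of the code records which side
    of a random hyperplane the input vector falls on, so nearby vectors
    tend to share many bits."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(v):
        code = 0
        for p in planes:
            dot = sum(a * b for a, b in zip(p, v))
            code = (code << 1) | (dot >= 0)
        return code
    return h
```

Candidate nearest neighbours are then drawn only from points whose codes collide with (or are close to) the query's code, which is what cuts down the nearest-point searches in an algorithm like LSHDK.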

  12. Cuckoo Hashing with Pages

    CERN Document Server

    Dietzfelbinger, Martin; Rink, Michael

    2011-01-01

    Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.
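
A minimal two-table cuckoo hashing sketch, without the paging refinement studied in this record; the class layout, table size, and kick limit are illustrative choices.

```python
import random

class CuckooTable:
    """Each key has one candidate cell per table; insertion evicts
    occupants back and forth until a free cell is found."""
    def __init__(self, size=101, seed=0):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(32), rng.getrandbits(32)]

    def _slot(self, key, i):
        return hash((self.salts[i], key)) % self.size

    def lookup(self, key):
        # A lookup probes at most two cells, one per table.
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))

    def insert(self, key, max_kicks=500):
        if self.lookup(key):
            return True
        i = 0
        for _ in range(max_kicks):
            slot = self._slot(key, i)
            key, self.tables[i][slot] = self.tables[i][slot], key
            if key is None:
                return True
            i ^= 1  # retry the evicted key in the other table
        return False  # a full implementation would rehash here
```

The two-probe lookup is exactly the cost the paged variant tries to reduce further: with cells grouped on a single page plus a backup page, most lookups touch only one page.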

  13. A Non-Collision Hash Trie-Tree Based FastIP Classification Algorithm

    Institute of Scientific and Technical Information of China (English)

    徐恪; 吴建平; 喻中超; 徐明伟

    2002-01-01

    With the development of network applications, routers must support such functions as firewalls, provision of QoS, traffic billing, etc. All these functions require the classification of IP packets, which determines how each packet is subsequently processed. In this article, a novel IP classification algorithm is proposed based on the Grid of Tries algorithm. The new algorithm not only eliminates the original limitations in the case of multiple fields but also shows better performance in regard to both time and space. It has better overall performance than many other algorithms.

  14. Discriminative Hash Tracking With Group Sparsity.

    Science.gov (United States)

    Du, Dandan; Zhang, Lihe; Lu, Huchuan; Mei, Xue; Li, Xiaoli

    2016-08-01

    In this paper, we propose a novel tracking framework based on discriminative supervised hashing algorithm. Different from previous methods, we treat tracking as a problem of object matching in a binary space. Using the hash functions, all target templates and candidates are mapped into compact binary codes, with which the target matching is conducted effectively. To be specific, we make full use of the label information to assign a compact and discriminative binary code for each sample. And to deal with out-of-sample case, multiple hash functions are trained to describe the learned binary codes, and group sparsity is introduced to the hash projection matrix to select the representative and discriminative features dynamically, which is crucial for the tracker to adapt to target appearance variations. The whole training problem is formulated as an optimization function where the hash codes and hash function are learned jointly. Extensive experiments on various challenging image sequences demonstrate the effectiveness and robustness of the proposed tracker.

  15. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  16. The Power of Simple Tabulation Hashing

    CERN Document Server

    Patrascu, Mihai

    2010-01-01

    Randomized algorithms are often enjoyed for their simplicity, but the hash functions used to yield the desired theoretical guarantees are often neither simple nor practical. Here we show that the simplest possible tabulation hashing provides unexpectedly strong guarantees. The scheme itself dates back to Carter and Wegman (STOC'77). Keys are viewed as consisting of c characters. We initialize c tables T_1, ..., T_c mapping characters to random hash codes. A key x=(x_1, ..., x_c) is hashed to T_1[x_1] xor ... xor T_c[x_c]. While this scheme is not even 4-independent, we show that it provides many of the guarantees that are normally obtained via higher independence, e.g., Chernoff-type concentration, min-wise hashing for estimating set intersection, and cuckoo hashing.
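
The scheme described in the abstract is easy to state in code; a sketch with assumed parameters (32-bit keys split into four 8-bit characters):

```python
import random

def make_tabulation_hash(c=4, char_bits=8, hash_bits=32, seed=42):
    """Simple tabulation hashing: split the key into c characters,
    index one table of random codes per character, XOR the codes."""
    rng = random.Random(seed)
    tables = [[rng.getrandbits(hash_bits) for _ in range(1 << char_bits)]
              for _ in range(c)]
    mask = (1 << char_bits) - 1
    def h(x: int) -> int:
        out = 0
        for i in range(c):
            out ^= tables[i][(x >> (i * char_bits)) & mask]
        return out
    return h

h = make_tabulation_hash()
```

Evaluation is just c table lookups and XORs, which is why the scheme is both fast in practice and simple enough to analyze.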

  17. Redundancy of Distributed Cache Data Based on Consistent Hash Algorithm

    Institute of Scientific and Technical Information of China (English)

    李宁

    2016-01-01

    In order to optimize the data caching mechanism in large distributed web sites, this paper proposes a cache data redundancy mechanism based on the consistent hash algorithm. The performance of different hash functions is analyzed so that data can be distributed evenly over the nodes of the hash ring, and cached data is stored and retrieved in master and slave hash rings using binary search. Local testing and analysis show that this redundancy mechanism is significantly better than reading the database directly or using a single-machine cache, and that the performance loss caused by redundant operations can be effectively reduced in a distributed system. The mechanism therefore improves the robustness and stability of the site and provides a new approach to the design of highly concurrent, distributed cache systems.
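
The ring construction and binary-search lookup described in the abstract can be sketched as follows; this is a single-ring version without the master/slave replication, and the class names, replica count, and MD5-based placement are illustrative assumptions.

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring: nodes are placed at many points (virtual
    replicas) for evenness; a key is served by the first node clockwise
    from its hash, found by binary search on the sorted ring."""
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []  # sorted list of (point, node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _point(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def add(self, node):
        for r in range(self.replicas):
            bisect.insort(self.ring, (self._point(f"{node}#{r}"), node))

    def node_for(self, key):
        p = self._point(key)
        # First ring point strictly after p, wrapping around the ring.
        i = bisect.bisect_right(self.ring, (p, chr(0x10FFFF)))
        return self.ring[i % len(self.ring)][1]
```

Because only the points owned by a joining or leaving node move, most keys keep their assignment when the node set changes, which is the property the redundancy mechanism builds on.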

  18. Robust hashing with local models for approximate similarity search.

    Science.gov (United States)

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1 -norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.

  19. A Lightweight Hash Function Based on a Coupled Dynamic Integer Tent Map Lattice Model

    Institute of Scientific and Technical Information of China (English)

    张啸; 刘建东; 商凯; 胡辉辉

    2016-01-01

    We design a lightweight hash function based on a coupled dynamic integer tent map lattice model, suited to the authentication process between tags and readers in an RFID system. The algorithm can output a hash value of any byte length. Moreover, it is defined over the integers, overcoming the need for floating-point operations in most mainstream chaotic cryptographic algorithms, and is therefore suitable for systems with limited hardware resources. Experiments and simulation analysis show that the proposed hash function offers a high level of security and satisfies the security requirements of an RFID authentication system.

  20. Research on Incremental Extraction Based on MD5 and Hash Algorithms

    Institute of Scientific and Technical Information of China (English)

    郭亮; 杨金民

    2014-01-01

    To achieve rapid incremental extraction from a database, an algorithm that blends MD5 into a linear hash scan to obtain the increment is proposed, based on an analysis of traditional incremental extraction methods. Each record in the database can be viewed as a character string; the backup records are hashed into a hash table, and the original records are probed against this table, so that a single linear scan obtains the increment while reducing the number of comparisons. Meanwhile, the MD5 algorithm generates a "fingerprint" of each record, which shortens the strings involved in each hash computation and comparison and improves efficiency. The proposed algorithm was tested on an ORACLE database, and the results show that its efficiency is greatly improved compared with traditional methods.
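
The fingerprint-plus-hash-table scan can be sketched as follows; the record format and serialization are illustrative assumptions, not the paper's ORACLE-specific implementation.

```python
import hashlib

def fingerprint(record):
    """Reduce a record to a 16-byte MD5 fingerprint so later hashing
    and comparison work on short fixed-length strings."""
    return hashlib.md5("|".join(map(str, record)).encode()).digest()

def increment(current_rows, backup_rows):
    """One linear scan: hash the backup snapshot's fingerprints into a
    set, then keep every current row whose fingerprint is absent
    (i.e., new or modified rows)."""
    backup = {fingerprint(r) for r in backup_rows}
    return [r for r in current_rows if fingerprint(r) not in backup]
```

Note that a row modified in place shows up as an increment row (its fingerprint changes); detecting deletions would need the symmetric scan of the backup against the current table.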

  1. Robust video hashing via multilinear subspace projections.

    Science.gov (United States)

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.

  2. Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.

    Science.gov (United States)

    Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong

    2016-02-01

    Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, rare work studies the unified approach to constructing multiple informative hash tables using any type of hashing algorithms. Meanwhile, for multiple table search, it also lacks of a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered in the standard hashing techniques. To solve the above problems, in this paper, we first regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate such scheme into the multiple table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. 
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both

  3. Hashing algorithms and data structures for rapid searches of fingerprint vectors.

    Science.gov (United States)

    Nasr, Ramzi; Hirschberg, Daniel S; Baldi, Pierre

    2010-08-23

    In many large chemoinformatics database systems, molecules are represented by long binary fingerprint vectors whose components record the presence or absence of particular functional groups or combinatorial features. To speed up database searches, we propose to add to each fingerprint a short signature integer vector of length M. For a given fingerprint, the i-th component of the signature vector counts the number of 1-bits in the fingerprint that fall on components congruent to i modulo M. Given two signatures, we show how one can rapidly compute a bound on the Jaccard-Tanimoto similarity measure of the two corresponding fingerprints, using the intersection bound. Thus, these signatures allow one to significantly prune the search space by discarding molecules associated with unfavorable bounds. Analytical methods are developed to predict the resulting amount of pruning as a function of M. Data structures combining different values of M are also developed, together with methods for predicting the optimal values of M for a given implementation. Simulations using a particular implementation show that the proposed approach leads to a 1 order of magnitude speedup over a linear search and a 3-fold speedup over a previous implementation. All theoretical results and predictions are corroborated by large-scale simulations using molecules from the ChemDB. Several possible algorithmic extensions are discussed.
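
The modular signature and the resulting Tanimoto upper bound can be sketched as follows; the fingerprint layout (a 0/1 list) and function names are illustrative. The key fact is that the intersection of two fingerprints cannot exceed the sum over i of min(s_i(A), s_i(B)), and Tanimoto similarity is monotone in the intersection.

```python
def signature(bits, M=8):
    """Component i counts the 1-bits at positions congruent to i mod M."""
    sig = [0] * M
    for pos, b in enumerate(bits):
        sig[pos % M] += b
    return sig

def tanimoto_upper_bound(sig_a, sig_b):
    """Intersection bound: |A & B| <= sum_i min(s_i(A), s_i(B)), and
    Tanimoto = inter / (|A| + |B| - inter) increases with inter."""
    inter_max = sum(min(x, y) for x, y in zip(sig_a, sig_b))
    union_min = sum(sig_a) + sum(sig_b) - inter_max
    return inter_max / union_min if union_min else 1.0

def tanimoto(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0
```

During a search, a molecule is discarded whenever its bound falls below the similarity threshold, so the full fingerprint is touched only for surviving candidates.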

  4. Image Hashing algorithm based on stacked autoencoder

    Institute of Scientific and Technical Information of China (English)

    张春雨; 韩立新; 徐守晶

    2016-01-01

    With the rapid growth of web images, hashing algorithms have become a research focus for approximate nearest neighbor search in large-scale image retrieval systems. This paper proposes a hashing algorithm based on a deep model, called deep hashing. High-dimensional global image features are extracted by a deep convolutional neural network; a stacked autoencoder then learns binary hash codes from these features in an unsupervised manner; the autoencoder's parameters are fine-tuned using the semantic similarity of image labels; finally, the Hamming distance is used to compute image similarity. The proposed deep hashing achieves good results in image retrieval.

  5. Hashing, Randomness and Dictionaries

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    … algorithms community. We work (almost) exclusively with a model, a mathematical object that is meant to capture essential aspects of a real computer. The main model considered here (and in most of the literature on dictionaries) is a unit cost RAM with a word size that allows a set element to be stored in one word. We consider several variants of the dictionary problem, as well as some related problems. The problems are studied mainly from an upper bound perspective, i.e., we try to come up with algorithms that are as efficient as possible with respect to various computing resources, mainly computation time and memory space. To some extent we also consider lower bounds, i.e., we attempt to show limitations on how efficient algorithms are possible. A central theme in the thesis is randomness. Randomized algorithms play an important role, in particular through the key technique of hashing. Additionally …

  6. IPv6 Routing Lookup Algorithm Based on Hash and CAM

    Institute of Scientific and Technical Information of China (English)

    王瑞青; 杜慧敏; 王亚刚

    2012-01-01

    By analyzing the prefix-length distribution and the growth trend of IPv6 routing tables in real networks, this paper presents an IPv6 route lookup algorithm based on hashing and Content Addressable Memory (CAM). Prefixes whose length is divisible by 8 are stored in eight hash tables; prefixes that incur hash collisions are stored in CAM; prefixes whose length is not divisible by 8 are stored in Random Access Memory (RAM) in an organized layout. Analysis shows that the algorithm achieves high storage utilization, lookup rate, and update rate, and is easy to scale and to implement in hardware.
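
    A minimal sketch of the per-length hash table part of such a lookup, assuming prefixes are given as bit strings whose lengths are multiples of 8 (the CAM collision path and the RAM organization for other prefix lengths are omitted; all names are illustrative):

```python
def build_tables(prefixes):
    """prefixes: dict mapping a prefix bit-string (length a multiple of 8,
    up to 128 bits) to its next hop.  One hash table per prefix length."""
    tables = {length: {} for length in range(8, 129, 8)}
    for p, hop in prefixes.items():
        tables[len(p)][p] = hop
    return tables

def lookup(tables, addr_bits):
    """Longest-prefix match over the multiple-of-8 lengths: probe the
    hash tables from the longest prefix length down to the shortest."""
    for length in range(128, 7, -8):
        hop = tables[length].get(addr_bits[:length])
        if hop is not None:
            return hop
    return None
```

In hardware, the per-length tables can be probed in parallel rather than sequentially, which is part of why the multiple-of-8 restriction pays off.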

  7. Chaotic keyed hash function based on feedforward feedback nonlinear digital filter

    Science.gov (United States)

    Zhang, Jiashu; Wang, Xiaomin; Zhang, Wenfang

    2007-03-01

    In this Letter, we first construct an n-dimensional chaotic dynamic system named the feedforward feedback nonlinear filter (FFNF), and then propose a novel chaotic keyed hash algorithm using FFNF. In the hashing process, the original message is modulated into FFNF's chaotic trajectory by chaotic shift keying (CSK), and the final hash value is obtained by coarse-grained quantization of the chaotic trajectory. To expedite the avalanche effect of the hash algorithm, a cipher block chaining (CBC) mode is introduced. Theoretical analysis and numerical simulations show that the proposed hash algorithm satisfies the requirements of a keyed hash function, and that it is easy to implement via the filter structure.
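
    As a toy illustration of the general pattern (modulating a message into a chaotic trajectory, chaining blocks, and coarse-graining the trajectory into a digest), the sketch below uses a simple logistic map rather than the paper's FFNF filter. It is illustrative only and in no way cryptographically secure:

```python
def chaotic_keyed_hash(message: bytes, key: float, rounds: int = 16) -> int:
    """Toy chaotic keyed hash.  The key is the initial condition (0 < key < 1)
    of a logistic map x -> 4x(1-x); each message byte perturbs the state
    (CBC-style chaining), and the trajectory is coarse-grained into a
    64-bit digest.  Illustrative sketch only -- not secure."""
    x = key
    digest = 0
    for byte in message:
        x = (x + byte / 256.0) % 1.0 or 0.3   # inject the byte into the state
        for _ in range(rounds):               # iterate the chaotic map
            x = 4.0 * x * (1.0 - x)
        # coarse-grained quantization of the trajectory, folded into the digest
        digest = ((digest << 4) ^ int(x * (1 << 60))) & ((1 << 64) - 1)
    return digest
```

The chaining means a change in any byte perturbs the state for all subsequent blocks, mimicking the avalanche effect the CBC mode is introduced for.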

  8. Hash functions and triangular mesh reconstruction

    Science.gov (United States)

    Hrádek, Jan; Kuchař, Martin; Skala, Václav

    2003-07-01

    Some applications use data formats (e.g. the STL file format) in which a set of triangles represents the surface of a 3D object, and it is necessary to reconstruct the triangular mesh with adjacency information. This is a lengthy process for large data sets, as the time complexity of this process is O(N log N), where N is the number of triangles. Triangular mesh reconstruction is a general problem, and the relevant algorithms can be used in GIS and DTM systems as well as in CAD/CAM systems. Many algorithms rely on space subdivision techniques, while hash functions offer a more effective solution to the reconstruction problem. Hash data structures are widely used throughout the field of computer science. A hash table can be used to speed up the process of triangular mesh reconstruction, but the speed strongly depends on the hash function's properties. Nevertheless, the design or selection of the hash function for data sets with unknown properties is a serious problem. This paper describes a new hash function, presents the properties obtained for large data sets, and discusses the validity of the reconstructed surface. Experimental results confirm the theoretical considerations and the advantages of using hash functions for mesh reconstruction.
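
    The core use of a hash table for mesh reconstruction (mapping each undirected edge to the triangles that share it, which yields adjacency in expected linear time) can be sketched as:

```python
from collections import defaultdict

def mesh_adjacency(triangles):
    """Recover triangle adjacency from a triangle 'soup' (e.g. from an STL
    file).  Each undirected edge is hashed to the list of triangles that
    contain it; triangles sharing an edge are adjacent.  Expected O(N)."""
    edge_table = defaultdict(list)                 # hash table keyed by edge
    for t, (a, b, c) in enumerate(triangles):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_table[frozenset((u, v))].append(t)
    adj = defaultdict(set)
    for tris in edge_table.values():
        for t in tris:
            adj[t].update(x for x in tris if x != t)
    return adj
```

Here Python's built-in dictionary stands in for the custom hash table; the paper's point is that the quality of the hash function over vertex/edge keys dominates the observed speed.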

  9. Protein sequence classification using feature hashing.

    Science.gov (United States)

    Caragea, Cornelia; Silvescu, Adrian; Mitra, Prasenjit

    2012-06-21

    Recent advances in next-generation sequencing technologies have resulted in an exponential increase in the rate at which protein sequence data are being acquired. The k-gram feature representation, commonly used for protein sequence classification, usually results in prohibitively high dimensional input spaces, for large values of k. Applying data mining algorithms to these input spaces may be intractable due to the large number of dimensions. Hence, using dimensionality reduction techniques can be crucial for the performance and the complexity of the learning algorithms. In this paper, we study the applicability of feature hashing to protein sequence classification, where the original high-dimensional space is "reduced" by hashing the features into a low-dimensional space, using a hash function, i.e., by mapping features into hash keys, where multiple features can be mapped (at random) to the same hash key, and "aggregating" their counts. We compare feature hashing with the "bag of k-grams" approach. Our results show that feature hashing is an effective approach to reducing dimensionality on protein sequence classification tasks.
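
    The feature hashing scheme described (mapping k-grams to hash keys and aggregating counts, with a sign bit to reduce bias from collisions) can be sketched as follows; the use of CRC32 as the hash function is an illustrative choice:

```python
import zlib

def hash_features(kgrams, dim=1024):
    """Feature hashing ('hashing trick'): map each k-gram into one of `dim`
    buckets and aggregate its count there.  The low bit of the hash gives a
    +/-1 sign per feature, which keeps the inner-product estimate unbiased
    when distinct features collide in the same bucket."""
    vec = [0] * dim
    for g in kgrams:
        h = zlib.crc32(g.encode())
        sign = 1 if h & 1 else -1
        vec[(h >> 1) % dim] += sign
    return vec
```

A protein sequence's k-gram list of arbitrary vocabulary size is thus reduced to a fixed `dim`-dimensional vector suitable for standard learning algorithms.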

  10. Large-Scale Unsupervised Hashing with Shared Structure Learning.

    Science.gov (United States)

    Liu, Xianglong; Mu, Yadong; Zhang, Danchen; Lang, Bo; Li, Xuelong

    2015-09-01

    Hashing methods are effective in generating compact binary signatures for images and videos. This paper addresses an important open issue in the literature, i.e., how to learn compact hash codes by enhancing the complementarity among different hash functions. Most prior studies solve this problem either by adopting time-consuming sequential learning algorithms or by generating hash functions subject to some deliberately designed constraints (e.g., enforcing hash functions orthogonal to one another). We analyze the drawbacks of past works and propose a new solution to this problem. Our idea is to decompose the feature space into a subspace shared by all hash functions and its complementary subspace. On one hand, the shared subspace, corresponding to the common structure across different hash functions, conveys most of the relevant information for the hashing task. Similar to data de-noising, irrelevant information is explicitly suppressed during hash function generation. On the other hand, in case the complementary subspace also contains useful information for specific hash functions, the final form of our proposed hashing scheme is a compromise between these two kinds of subspaces. To make the hash functions not only preserve the local neighborhood structure but also capture the global cluster distribution of the whole data, an objective function incorporating spectral embedding loss, binary quantization loss, and shared subspace contribution is introduced to guide the hash function learning. We propose an efficient alternating optimization method to simultaneously learn both the shared structure and the hash functions. Experimental results on three well-known benchmarks, CIFAR-10, NUS-WIDE, and a-TRECVID, demonstrate that our approach significantly outperforms state-of-the-art hashing methods.

  11. Dakota – Hashing from a Combination of Modular Arithmetic and Symmetric Cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars; Thomsen, Søren S

    2008-01-01

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assumptions.
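
    The design idea (a one-way, non-compressing map f followed by squaring modulo an RSA modulus) can be sketched as below. Modeling f with a SHA-256 counter-mode stream is an assumption made here for illustration, not the paper's construction:

```python
import hashlib

def dakota_style_compress(data: bytes, n: int) -> int:
    """Sketch of the Dakota design idea: apply a one-way, *non-compressing*
    map f to the input, then square the result modulo an RSA modulus n.
    Collision resistance rests on the hardness of the squaring step.
    f is modeled by a SHA-256 counter-mode stream truncated to the input
    length (length-preserving); this stand-in is an assumption."""
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(data + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    fx = int.from_bytes(stream[:len(data)], "big")
    return pow(fx, 2, n)  # squaring modulo the RSA modulus
```

A real instantiation would use a properly generated RSA modulus of, say, 2048 bits; the toy modulus in the usage below only shows the mechanics.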

  14. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

    Full Text Available Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.

  15. Optimal hash arrangement of tentacles in jellyfish

    Science.gov (United States)

    Okabe, Takuya; Yoshimura, Jin

    2016-06-01

    At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
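
    Fibonacci hashing itself is compact: multiply the key by 2^64/φ (φ the golden ratio) and keep the top bits, so consecutive keys spread almost maximally apart, mirroring the near-uniform tentacle spacing described above:

```python
def fibonacci_hash(key: int, bits: int) -> int:
    """Fibonacci hashing: multiply by round(2**64 / phi) and keep the top
    `bits` bits.  Because phi is 'the most irrational' number, consecutive
    keys land far apart in the table."""
    GOLDEN = 11400714819323198485  # round(2**64 / phi), an odd constant
    return ((key * GOLDEN) & ((1 << 64) - 1)) >> (64 - bits)
```

With an 8-slot table, the first eight keys already occupy seven distinct slots, illustrating the near-uniform spread.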

  16. Physical cryptographic verification of nuclear warheads

    Science.gov (United States)

    Kemp, R. Scott; Danagoulian, Areg; Macdonald, Ruaridh R.; Vavrek, Jayson R.

    2016-08-01

    How does one prove a claim about a highly sensitive object such as a nuclear weapon without revealing information about the object? This paradox has challenged nuclear arms control for more than five decades. We present a mechanism in the form of an interactive proof system that can validate the structure and composition of an object, such as a nuclear warhead, to arbitrary precision without revealing either its structure or composition. We introduce a tomographic method that simultaneously resolves both the geometric and isotopic makeup of an object. We also introduce a method of protecting information using a provably secure cryptographic hash that does not rely on electronics or software. These techniques, when combined with a suitable protocol, constitute an interactive proof system that could reject hoax items and clear authentic warheads with excellent sensitivity in reasonably short measurement times.

  17. Cryptographic primitives based on cellular transformations

    Directory of Open Access Journals (Sweden)

    B.V. Izotov

    2003-11-01

    Full Text Available Design of cryptographic primitives based on the concept of cellular automata (CA) is likely to be a promising trend in cryptography. In this paper, an improved method performing data transformations by using invertible cyclic CAs (CCA) is considered. Besides, cellular operations (CO) are introduced as a novel application of CAs in block ciphers. The proposed CCAs and COs, integrated under the name of cellular transformations (CT), are well suited for use in cryptographic algorithms oriented toward fast software and cheap hardware implementations.

  18. HAMA-Based Semi-Supervised Hashing Algorithm

    Institute of Scientific and Technical Information of China (English)

    刘扬; 朱明

    2014-01-01

    In massive-data retrieval applications, hashing-based approximate nearest neighbor (ANN) search has become popular due to its computational and memory efficiency for online search. Semi-supervised hashing (SSH) is a framework that minimizes an empirical error over the labeled set together with an information-theoretic regularizer over both the labeled and unlabeled sets. However, training the hash functions of this framework is slow due to the large-scale, complex training process. HAMA is a Hadoop top-level parallel framework based on the Bulk Synchronous Parallel (BSP) model. In this paper, we analyze the calculation of the adjusted covariance matrix in the SSH training process, split it into two parts, an unsupervised data-variance part and a supervised pairwise-labeled-data part, and explore its parallelization. This allows the training to scale horizontally on a commodity computing cluster. Experiments show good performance and scalability on general commercial hardware and network environments.

  19. Appliance of Neuron Networks in Cryptographic Systems

    Directory of Open Access Journals (Sweden)

    Mohammed Al-Maitah

    2014-01-01

    Full Text Available This study examines post-quantum encryption algorithms, motivated by the potential crisis in modern cryptography caused by the appearance of quantum computers. A general problem formulation is given, along with an example of the danger quantum algorithms pose to classical cryptosystems. Existing post-quantum systems are analyzed, and the complexity of their realization and their cryptographic security are estimated. Among these, algorithms based on neural networks are chosen as a starting point. The study demonstrates a neuro-cryptographic protocol based on a three-level feedforward neural network. Its cryptographic security is evaluated and three types of attacks on the algorithm are analyzed, supporting the hypothesis that neuro-cryptography is currently one of the most promising post-quantum cryptographic approaches.

  20. New Cryptosystem Using Multiple Cryptographic Assumptions

    Directory of Open Access Journals (Sweden)

    E. S. Ismail

    2011-01-01

    Full Text Available Problem statement: A cryptosystem is a way for a sender and a receiver to communicate digitally, by which the sender can send the receiver any confidential or private message by first encrypting it using the receiver's public key. Upon receiving the encrypted message, the receiver can confirm the originality of the message's contents using his own secret key. Up to now, most existing cryptosystems were developed based on a single cryptographic assumption, such as factoring, discrete logarithms, quadratic residues, or the elliptic curve discrete logarithm. Although these schemes remain secure today, one day in the near future they may be broken if one finds a polynomial algorithm that can efficiently solve the underlying cryptographic assumption. Approach: Motivated by this, we designed a new cryptosystem based on two cryptographic assumptions: quadratic residues and discrete logarithms. We integrated these two assumptions in our encrypting and decrypting equations so that the former depends on one public key whereas the latter depends on one corresponding secret key and two secret numbers. Each of the public and secret keys in our scheme determines the assumptions we use. Results: The newly developed cryptosystem is shown to be secure against three common algebraic attacks using a heuristic security technique. The efficiency of our scheme requires 2Texp + 2Tmul + Thash time complexity for encryption and Texp + 2Tmul + Tsrt time complexity for decryption, and this magnitude of complexity is considered minimal for cryptosystems based on multiple cryptographic assumptions. Conclusion: The new cryptosystem based on multiple cryptographic assumptions offers a greater security level than schemes based on a single cryptographic assumption. The adversary has to solve the two assumptions simultaneously to recover the original message from the received encrypted message, but this is very unlikely to happen.

  1. Security Analysis of Randomize-Hash-then-Sign Digital Signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2012-01-01

    At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar … a variant of the RMX hash function mode and published this standard in the Special Publication (SP) 800-106. In this article, we first discuss a generic online birthday existential forgery attack of Dang and Perlner on the RMX-hash-then-sign schemes. We show that a variant of this attack can be applied … functions, such as for the Davies-Meyer construction used in popular hash functions such as MD5, designed by Rivest, and the SHA family of hash functions, designed by the National Security Agency (NSA), USA, and published by NIST in the Federal Information Processing Standards (FIPS). We show an online …

  2. Robust image Hash algorithm based on Harris corners and invariant centroid

    Institute of Scientific and Technical Information of China (English)

    崔得龙; 左敬龙; 彭志平

    2011-01-01

    A novel image hash algorithm using Harris corners and an invariant centroid is proposed. Starting from the mathematical model of the affine transform, and exploiting the fact that the centroid position of an image is invariant under affine transformation, the Euclidean distances between the Harris corners and the invariant centroid are computed as the feature vector; finally, the feature vector is quantized and encoded to generate the image hash. Experimental results show that the proposed scheme is robust against perceptually acceptable modifications to the image, such as JPEG compression and filtering, while sensitive to excessive changes and malicious tampering. The security of the hash is guaranteed by using secret keys.

  3. Cryptographic Protocols Based on Root Extracting

    DEFF Research Database (Denmark)

    Koprowski, Maciej

    In this thesis we design new cryptographic protocols whose security is based on the hardness of root extracting or, more specifically, the RSA problem. First we study the problem of root extraction in finite Abelian groups, where the group order is unknown. This is a natural generalization of the … complexity of root extraction, even if the algorithm can choose the "public exponent" itself. In other words, both the standard and the strong RSA assumption are provably true w.r.t. generic algorithms. The results hold for arbitrary groups, so security w.r.t. generic attacks follows for any cryptographic … construction based on root extracting. As an example of this, we modify the Cramer-Shoup signature scheme such that it becomes a generic algorithm. We then discuss implementing it in RSA groups without the original restriction that the modulus must be a product of safe primes. It can also be implemented in class …

  4. Symmetric cryptographic protocols

    CERN Document Server

    Ramkumar, Mahalingam

    2014-01-01

    This book focuses on protocols and constructions that make good use of symmetric pseudorandom functions (PRFs) like block ciphers and hash functions - the building blocks for symmetric cryptography. Readers will benefit from detailed discussion of several strategies for utilizing symmetric PRFs. Coverage includes various key distribution strategies for unicast, broadcast and multicast security, and strategies for constructing efficient digests of dynamic databases using binary hash trees. • Provides detailed coverage of symmetric key protocols • Describes various applications of symmetric building blocks • Includes strategies for constructing compact and efficient digests of dynamic databases

  5. Optimal hash functions for approximate closest pairs on the n-cube

    CERN Document Server

    Gordon, Daniel M; Ostapenko, Peter

    2008-01-01

    One way to find closest pairs in large datasets is to use hash functions. In recent years, locality-sensitive hash functions for various metrics have been given: projecting an n-cube onto k bits is a simple hash function that performs well. In this paper we investigate alternatives to projection. For various parameters, hash functions given by complete decoding algorithms for codes work better, and asymptotically random codes perform better than projection.
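
    The projection baseline mentioned above (hashing a point of the n-cube to k sampled coordinates, i.e., bit-sampling LSH) can be sketched as:

```python
import random

def make_projection(n, k, seed=1):
    """Bit-sampling LSH for the n-cube: the hash of a binary vector is the
    tuple of k randomly chosen coordinates.  Two points at Hamming
    distance d collide with probability (1 - d/n)**k, so close pairs
    share a bucket far more often than distant ones."""
    coords = random.Random(seed).sample(range(n), k)
    return lambda x: tuple(x[i] for i in coords)
```

The paper's point is that replacing this coordinate projection with the complete decoding map of an error-correcting code can yield a better collision-probability profile for the same parameters.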

  6. Cryptographic Cloud Storage

    Science.gov (United States)

    Kamara, Seny; Lauter, Kristin

    We consider the problem of building a secure cloud storage service on top of a public cloud infrastructure where the service provider is not completely trusted by the customer. We describe, at a high level, several architectures that combine recent and non-standard cryptographic primitives in order to achieve our goal. We survey the benefits such an architecture would provide to both customers and service providers and give an overview of recent advances in cryptography motivated specifically by cloud storage.

  7. An Improved Perceptual Hashing Algorithm of Image Based on Discrete Cosine Transform and Speeded Up Robust Features%一种由DCT和SURF改进的图像感知哈希算法

    Institute of Scientific and Technical Information of China (English)

    丁旭; 何建忠

    2014-01-01

    To address the poor robustness of global features and the high computational complexity of local features in image perceptual hashing, an improved perceptual hashing algorithm based on DCT and SURF is proposed. Using DCT coefficients as the global feature and SURF descriptors as the local feature, the hash encoding for each and the fusion of the two features are presented, followed by the algorithm flow for image authentication. Experimental results show that the algorithm has good robustness and real-time performance.

  8. CRYPTOGRAPHIC PROTOCOL DEPENDING ON BIOMETRIC AUTHENTICATION

    Directory of Open Access Journals (Sweden)

    SANJUKTA PAL

    2013-02-01

    Full Text Available In the modern age, security is the most challenging issue for protecting the data used by computers. The cryptographic protocol with biometric authentication proposed here combines cryptography and biometric authentication. The standard idea of cryptography applies: using a key, plain text is converted into cipher text (encryption), and cipher text is converted back into plain text (decryption). Fingerprint geometry, the most promising biometric method, is used as the key for encryption and decryption. The cryptographic protocol is an algorithm for matching the key, i.e., matching stored fingerprint images (DB Images) against a further given fingerprint image (Final Image). For matching purposes, binary conversions of the images are used. The algorithm is suitable for any type of data (text data, multimedia data, etc.).

  9. Cryptographic Applications using FPGA Technology

    Directory of Open Access Journals (Sweden)

    Alexandru Coman

    2011-03-01

    Full Text Available Cryptographic systems have become a part of our daily life through the need to secure many common activities such as communication, payments, and data transfers. The best support for the design and implementation of cryptographic applications is offered by embedded systems such as ASICs and FPGAs. In the past few years, the increase in performance of FPGAs has made them key components in implementing cryptographic systems. One of the most important parts of a cryptographic system is the random number generator it uses; combinations of PRNGs and TRNGs are commonly used. A good and efficient TRNG implementation is very important and can be achieved through FPGA technology.

  10. Conception and limits of robust perceptual hashing: towards side information assisted hash functions

    Science.gov (United States)

    Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Beekhof, Fokko; Pun, Thierry

    2009-02-01

    In this paper, we consider some basic concepts behind the design of existing robust perceptual hashing techniques for content identification. We show the limits of robust hashing from the communication perspectives as well as propose an approach that is able to overcome these shortcomings in certain setups. The consideration is based on both achievable rate and probability of error. We use the fact that most robust hashing algorithms are based on dimensionality reduction using random projections and quantization. Therefore, we demonstrate the corresponding achievable rate and probability of error based on random projections and compare with the results for the direct domain. The effect of dimensionality reduction is studied and the corresponding approximations are provided based on the Johnson-Lindenstrauss lemma. Side-information assisted robust perceptual hashing is proposed as a solution to the above shortcomings.

  11. Block-based image hashing with restricted blocking strategy for rotational robustness

    Science.gov (United States)

    Xiang, Shijun; Yang, Jianquan

    2012-12-01

    Image hashing is a potential solution for image content authentication (a desired image hashing algorithm should be robust to common image processing operations and various geometric distortions). In the literature, researchers pay more attention to block-based image hashing algorithms due to their robustness to common image processing operations (such as lossy compression, low-pass filtering, and additive noise). However, the block-based hashing strategies are sensitive to rotation processing operations. This indicates that the robustness of the block-based hashing methods against rotation operations is an important issue. Towards this direction, in this article we propose a restricted blocking strategy by investigating effect of two rotation operations on an image and its blocks in both theoretical and experimental ways. Furthermore, we apply the proposed blocking strategy for the recently reported non-negative matrix factorization (NMF) hashing. Experimental results have demonstrated the validity of the block-based hashing algorithms with restricted blocking strategy for rotation operations.

  12. New Eavesdropper Detection Method in Quantum Cryptograph

    Directory of Open Access Journals (Sweden)

    Cătălin Anghel

    2011-12-01

    Full Text Available ecurity of quantum cryptographic algorithms is one of the main research directions in quantum cryptography. Security growth of the quantum key distribution systems can be realized by detecting the eavesdropper quickly, precisely and without letting any secret information in the hands of the enemy. This paper proposes a new method, named QBTT, to detect the enemy who try to tap the communication channel. The QBTT method can be implemented in every type of quantum key distribution scheme.

  13. Accelerating read mapping with FastHASH.

    Science.gov (United States)

    Xin, Hongyi; Lee, Donghyuk; Hormozdiari, Farhad; Yedkar, Samihan; Mutlu, Onur; Alkan, Can

    2013-01-01

    With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
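
    The seed-and-extend lookup with an Adjacency-Filtering-style consistency check can be sketched as below. This simplified version admits only exact (edit distance 0) placements, so a position survives only if every k-mer of the read occurs at its expected offset, whereas FastHASH tolerates a configurable number of mismatching seeds; all names are illustrative:

```python
from collections import defaultdict

def index_kmers(ref, k):
    """Hash table mapping each k-mer of the reference to its positions."""
    table = defaultdict(list)
    for i in range(len(ref) - k + 1):
        table[ref[i:i + k]].append(i)
    return table

def candidate_positions(read, table, k):
    """Seed-and-extend candidates with an adjacency-filtering-style check:
    keep a location only if the adjacent k-mers of the read also hit the
    reference at consistent (adjacent) offsets."""
    seeds = [read[j:j + k] for j in range(0, len(read) - k + 1, k)]
    out = []
    for pos in table.get(seeds[0], []):
        if all(pos + j * k in table.get(s, ()) for j, s in enumerate(seeds)):
            out.append(pos)
    return out
```

Filtering out inconsistent locations this cheaply is what spares the expensive verification (alignment) step for most false seed hits.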

  14. GB-hash : Hash Functions Using Groebner Basis

    CERN Document Server

    Dey, Dhananjoy; Sengupta, Indranath

    2010-01-01

    In this paper we present an improved version of HF-hash, viz., GB-hash: Hash Functions Using Groebner Basis. In the case of HF-hash, the compression function consists of 32 polynomials in 64 variables, taken from the first 32 polynomials of the hidden field equations challenge-1 by forcing the last 16 variables to 0. In GB-hash we have designed the compression function in such a way that these 32 polynomials in 64 variables form a minimal Groebner basis of the ideal generated by them with respect to graded lexicographical (grlex) ordering as well as with respect to graded reverse lexicographical (grevlex) ordering. In this paper we prove that GB-hash is more secure than HF-hash as well as more secure than SHA-256. We have also compared the efficiency of our GB-hash with SHA-256 and HF-hash.

  15. Multiparty Quantum Cryptographic Protocol

    Institute of Scientific and Technical Information of China (English)

    M. Ramzan; M. K. Khan

    2008-01-01

    We propose a multiparty quantum cryptographic protocol. Unitary operators applied by Bob and Charlie on their respective qubits of a tripartite entangled state encode a classical symbol that can be decoded at Alice's end with the help of a decoding matrix. Eve's presence can be detected by the disturbance of the decoding matrix. Our protocol is secure against intercept-resend attacks. Furthermore, it is efficient and deterministic in the sense that two classical bits can be transferred per entangled pair of qubits. It is worth mentioning that in this protocol, the same symbol can be used for key distribution and Eve's detection, which enhances the efficiency of the protocol.

  16. On preserving robustness-false alarm tradeoff in media hashing

    Science.gov (United States)

    Roy, S.; Zhu, X.; Yuan, J.; Chang, E.-C.

    2007-01-01

    This paper discusses one of the important issues in generating a robust media hash. Robustness of a media hashing algorithm is primarily determined by three factors: (1) the robustness-false alarm tradeoff achieved by the chosen feature representation, (2) the accuracy of the bit extraction step and (3) the distance measure used to measure similarity (dissimilarity) between two hashes. The robustness-false alarm tradeoff in feature space is measured by a similarity (dissimilarity) measure and it defines a limit on the performance of the hashing algorithm. The distance measure used to compute the distance between the hashes determines how far this tradeoff in the feature space is preserved through the bit extraction step. Hence the bit extraction step is crucial in defining the robustness of a hashing algorithm. Although this is widely recognized as an important requirement, to our knowledge no work in the existing literature evaluates its algorithm in terms of its effectiveness in improving this tradeoff compared to other methods. This paper specifically demonstrates the kind of robustness-false alarm tradeoff achieved by existing methods and proposes a method for hashing that clearly improves this tradeoff.
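    As a small self-contained illustration of this point (our own construction, not the paper's), one can binarize features with the signs of random projections and observe how the Hamming distance between the extracted bit strings separates a content-preserving perturbation from unrelated content:

```python
# Bit extraction by random-projection signs: the Hamming distance between
# hashes should stay small under mild (content-preserving) noise and
# near half the bits for unrelated feature vectors.
import random

random.seed(0)
DIM, BITS = 32, 64
proj = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def extract_bits(feat):
    return [int(sum(p * f for p, f in zip(row, feat)) >= 0) for row in proj]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

feat = [random.gauss(0, 1) for _ in range(DIM)]
near = [f + random.gauss(0, 0.05) for f in feat]     # content-preserving noise
far = [random.gauss(0, 1) for _ in range(DIM)]       # unrelated content

h = extract_bits(feat)
print("near:", hamming(h, extract_bits(near)), "of", BITS)
print("far: ", hamming(h, extract_bits(far)), "of", BITS)
```

    The gap between the two distances is exactly the feature-space tradeoff surviving the bit extraction step; a poorly designed extractor would shrink it.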

  17. Security Requirements for Cryptographic Modules

    Science.gov (United States)

    1999-01-01

    These areas include cryptographic module specification; module interfaces; roles, services, and authentication; finite state machine model; physical security; operating system security; and cryptographic key management.

  18. Cryptographic Key Management System

    Energy Technology Data Exchange (ETDEWEB)

    No, author

    2014-02-21

    This report summarizes the outcome of U.S. Department of Energy (DOE) contract DE-OE0000543, requesting the design of a Cryptographic Key Management System (CKMS) for the secure management of cryptographic keys for the energy sector infrastructure. Prime contractor Sypris Electronics, in collaboration with Oak Ridge National Laboratories (ORNL), Electric Power Research Institute (EPRI), Valicore Technologies, and Purdue University's Center for Education and Research in Information Assurance and Security (CERIAS) and Smart Meter Integration Laboratory (SMIL), has designed, developed and evaluated the CKMS solution. We provide an overview of the project in Section 3, review the core contributions of all contractors in Section 4, and discuss benefits to the DOE in Section 5. In Section 6 we describe the technical construction of the CKMS solution, and review its key contributions in Section 6.9. Section 7 describes the evaluation and demonstration of the CKMS solution in different environments. We summarize the key project objectives in Section 8, list publications resulting from the project in Section 9, and conclude with a discussion on commercialization in Section 10 and future work in Section 11.

  19. Security analysis of robust perceptual hashing

    Science.gov (United States)

    Koval, Oleksiy; Voloshynovskiy, Sviatoslav; Beekhof, Fokko; Pun, Thierry

    2008-02-01

    In this paper we considered the problem of security analysis of robust perceptual hashing in an authentication application. The main goal of our analysis was to estimate the amount of trial effort of the attacker, who is acting within the Kerckhoffs security principle, to reveal a secret key. For this purpose, we proposed to use Shannon equivocation, which provides an estimate of the complexity of the key search performed based on all available prior information, and presented its application to the security evaluation of particular robust perceptual hashing algorithms.

  20. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX-hash-then-sign signature scheme, one has to solve a cryptanalytical task which is related to finding second preimages for the hash function. In this article, we will show how to use Dean's method of finding expandable messages for finding a second preimage in the Merkle-Damgård hash function to existentially forge...

  1. Wireless Secured Data Transmission using Cryptographic Techniques through FPGA

    Directory of Open Access Journals (Sweden)

    I.Rama Satya Nageswara Rao

    2016-02-01

    Full Text Available The need to protect data from disturbances and unauthorized access in communication has led to the development of several cryptographic algorithms. With the growing popularity of the internet, e-commerce and communication technologies, these media have become targets of security threats. Advances in cryptographic techniques have produced the DNA technique, a new crypto algorithm for encrypting and decrypting data. It consists of a two-stage encryption based on DNA sequences that enhances data security compared to conventional methods. In the first stage of the encryption process, the data (plain text) are encrypted with a random key generated by a random DNA sequence generator. In the second and final stage, the encrypted data are re-encrypted with DNA translation to generate the cipher. The cryptographic technique (a symmetric algorithm) is designed and simulated using Xilinx ISE and targeted on a Spartan-3E FPGA interfaced with ZigBee for wireless communication.

  2. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2010-01-01

    We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash...

  3. Secure fingerprint hashes using subsets of local structures

    Science.gov (United States)

    Effland, Tom; Schneggenburger, Mariel; Schuler, Jim; Zhang, Bingsheng; Hartloff, Jesse; Dobler, Jimmy; Tulyakov, Sergey; Rudra, Atri; Govindaraju, Venu

    2014-05-01

    In order to fulfill the potential of fingerprint templates as the basis for authentication schemes, one needs to design a hash function for fingerprints that achieves acceptable matching accuracy and simultaneously has provable security guarantees, especially for parameter regimes that are needed to match fingerprints in practice. While existing matching algorithms can achieve impressive matching accuracy, they have no security guarantees. On the other hand, provably secure hash functions have bad matching accuracy and/or do not guarantee security when parameters are set to practical values. In this work, we present a secure hash function that has the best known tradeoff between security guarantees and matching accuracy. At a high level, our hash function is simple: we apply an off-the-shelf hash function on certain collections of minutia points (in particular, triplets of minutia points, or "minutia triangles"). However, to realize the potential of this scheme, we have to overcome certain theoretical and practical hurdles. In addition to the novel idea of combining clustering ideas from matching algorithms with ideas from the provable security of hash functions, we also apply an intermediate translation-invariant but rotation-variant map to the minutia points before applying the hash function. This latter idea helps improve the tradeoff between matching accuracy and matching efficiency.
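    A toy sketch of the triplet idea (our own illustration, not the paper's construction): each triplet of minutia points is made translation-invariant by subtracting its centroid (which leaves it rotation-variant, as the abstract describes), quantized to a grid, and fed to an off-the-shelf hash.

```python
# Hash triplets of minutia points after a translation-invariant,
# rotation-variant normalization (centroid subtraction), so a uniformly
# shifted fingerprint produces the same set of triplet hashes.
import hashlib
from itertools import combinations

def triplet_hashes(minutiae, grid=5):
    out = set()
    for tri in combinations(minutiae, 3):
        cx = sum(x for x, _ in tri) / 3.0
        cy = sum(y for _, y in tri) / 3.0
        # centroid-relative, grid-quantized representation of the triangle
        rel = sorted((round((x - cx) / grid), round((y - cy) / grid))
                     for x, y in tri)
        out.add(hashlib.sha256(repr(rel).encode()).hexdigest()[:16])
    return out

a = triplet_hashes([(10, 10), (40, 12), (25, 50), (60, 60)])
b = triplet_hashes([(15, 17), (45, 19), (30, 57), (65, 67)])  # shifted +(5, 7)
print(len(a & b), "of", len(a), "triplet hashes match")
```

    A real scheme must also tolerate missing or spurious minutiae, which is where the clustering ideas from matching algorithms come in.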

  4. 一种针对磁盘完整性校验的增量hash算法%An Incremental Hash Algorithm for Hard Disk Integrity Check

    Institute of Scientific and Technical Information of China (English)

    宋宁楠; 谷大武; 侯方勇

    2009-01-01

    Incremental hash functions possess incrementality and parallelism that traditional iterated hash functions lack, allowing the time needed to update a data checksum to be proportional to the size of the modification. Following this idea of incremental verification, this paper designs a hash function for hard disk integrity checking, called iHash. The paper introduces the design of the algorithm, describes its concrete implementation, and proves its provable security with respect to collision resistance. It then analyzes in detail how the algorithm both retains the performance advantages of general incremental hash algorithms and offers new features not previously proposed in the field of incremental hash design. Finally, experimental results comparing the performance of iHash with existing hash functions are presented.
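    The core incremental idea can be sketched in a few lines (a toy construction, not iHash itself, and not collision-resistant enough for real use): hash each (position, block) pair independently and combine the digests with XOR, so updating one disk block costs two block hashes instead of rehashing the whole image.

```python
# Toy incremental hash: per-block digests combined with XOR. Because XOR
# is self-inverse, replacing one block only needs the old and new block
# digests -- O(1) work per modification.
import hashlib

def block_digest(index, block):
    return hashlib.sha256(index.to_bytes(8, "big") + block).digest()

def full_hash(blocks):
    acc = bytes(32)
    for i, b in enumerate(blocks):
        acc = bytes(x ^ y for x, y in zip(acc, block_digest(i, b)))
    return acc

def update_hash(old_hash, index, old_block, new_block):
    # XOR out the old block's contribution, XOR in the new one.
    out = old_hash
    for d in (block_digest(index, old_block), block_digest(index, new_block)):
        out = bytes(x ^ y for x, y in zip(out, d))
    return out

blocks = [b"sector0", b"sector1", b"sector2"]
h = full_hash(blocks)
h2 = update_hash(h, 1, b"sector1", b"patched")
blocks[1] = b"patched"
assert h2 == full_hash(blocks)   # incremental update matches recomputation
```

    Schemes with provable guarantees use combiners with better algebraic properties than plain XOR (e.g. modular addition over a large modulus), which is the kind of design question the paper addresses.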

  5. TESS: a geometric hashing algorithm for deriving 3D coordinate templates for searching structural databases. Application to enzyme active sites.

    Science.gov (United States)

    Wallace, A C; Borkakoti, N; Thornton, J M

    1997-11-01

    It is well established that sequence templates such as those in the PROSITE and PRINTS databases are powerful tools for predicting the biological function and tertiary structure for newly derived protein sequences. The number of X-ray and NMR protein structures is increasing rapidly and it is apparent that a 3D equivalent of the sequence templates is needed. Here, we describe an algorithm called TESS that automatically derives 3D templates from structures deposited in the Brookhaven Protein Data Bank. While a new sequence can be searched for sequence patterns, a new structure can be scanned against these 3D templates to identify functional sites. As examples, 3D templates are derived for enzymes with an O-His-O "catalytic triad" and for the ribonucleases and lysozymes. When these 3D templates are applied to a large data set of nonidentical proteins, several interesting hits are located. This suggests that the development of a 3D template database may help to identify the function of new protein structures, if unknown, as well as to design proteins with specific functions.

  6. Topological quantum gate construction by iterative pseudogroup hashing

    Science.gov (United States)

    Burrello, Michele; Mussardo, Giuseppe; Wan, Xin

    2011-02-01

    We describe the hashing technique for obtaining a fast approximation of a target quantum gate in the unitary group SU(2) represented by a product of the elements of a universal basis. The hashing exploits the structure of the icosahedral group (or other finite subgroups of SU(2)) and its pseudogroup approximations to reduce the search within a small number of elements. One of the main advantages of the pseudogroup hashing is the possibility of iterating to obtain more accurate representations of the targets in the spirit of the renormalization group approach. We describe the iterative pseudogroup hashing algorithm using the universal basis given by the braidings of Fibonacci anyons. An analysis of the efficiency of the iterations based on the random matrix theory indicates that the runtime and braid length scale poly-logarithmically with the final error, comparing favorably to the Solovay-Kitaev algorithm.

  7. Topological quantum gate construction by iterative pseudogroup hashing

    Energy Technology Data Exchange (ETDEWEB)

    Burrello, Michele; Mussardo, Giuseppe [International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste (Italy); Wan Xin, E-mail: burrello@sissa.it, E-mail: mussardo@sissa.it [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of)

    2011-02-15

    We describe the hashing technique for obtaining a fast approximation of a target quantum gate in the unitary group SU(2) represented by a product of the elements of a universal basis. The hashing exploits the structure of the icosahedral group (or other finite subgroups of SU(2)) and its pseudogroup approximations to reduce the search within a small number of elements. One of the main advantages of the pseudogroup hashing is the possibility of iterating to obtain more accurate representations of the targets in the spirit of the renormalization group approach. We describe the iterative pseudogroup hashing algorithm using the universal basis given by the braidings of Fibonacci anyons. An analysis of the efficiency of the iterations based on the random matrix theory indicates that the runtime and braid length scale poly-logarithmically with the final error, comparing favorably to the Solovay-Kitaev algorithm.

  8. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2008-01-01

    We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage...... attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno, and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply...... our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least $2^{26}$ and $2^{54}$, respectively....

  9. Explicit and Efficient Hash Families Suffice for Cuckoo Hashing with a Stash

    CERN Document Server

    Aumüller, Martin; Woelfel, Philipp

    2012-01-01

    It is shown that for cuckoo hashing with a stash as proposed by Kirsch, Mitzenmacher, and Wieder (2008), families of very simple hash functions can be used, maintaining the favorable performance guarantees: with stash size $s$ the probability of a rehash is $O(1/n^{s+1})$, and the evaluation time is $O(s)$. Instead of the full randomness needed for the analysis of Kirsch et al. and of Kutzelnigg (2010) (resp. $\Theta(\log n)$-wise independence for standard cuckoo hashing), the new approach even works with 2-wise independent hash families as building blocks. Both construction and analysis build upon the work of Dietzfelbinger and Woelfel (2003). The analysis, which can also be applied to the fully random case, utilizes a graph counting argument and is much simpler than previous proofs. As a byproduct, an algorithm for simulating uniform hashing is obtained. While it requires about twice as much space as the most space efficient solutions, it is attractive because of its simple and direct structure.
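    A minimal sketch of cuckoo hashing with a stash (sizes, salts and the eviction policy here are illustrative, not the paper's construction): every key has one slot in each of two tables; an insertion that keeps displacing items beyond a bound spills into a small stash instead of immediately forcing a full rehash.

```python
# Cuckoo hashing with a stash: two tables, two salted hash functions,
# bounded displacement chain, and a small overflow stash.
import random

class CuckooWithStash:
    def __init__(self, size=16, stash_size=4, max_kicks=32):
        self.tables = [[None] * size, [None] * size]
        self.salts = [random.randrange(1 << 30) for _ in range(2)]
        self.stash, self.stash_size, self.max_kicks = [], stash_size, max_kicks

    def _slot(self, i, key):
        return hash((self.salts[i], key)) % len(self.tables[i])

    def insert(self, key):
        for _ in range(self.max_kicks):
            for i in (0, 1):
                j = self._slot(i, key)
                if self.tables[i][j] is None:
                    self.tables[i][j] = key
                    return True
            # Both slots occupied: evict from table 0 and retry with the
            # evicted key (the classic cuckoo displacement step).
            j = self._slot(0, key)
            self.tables[0][j], key = key, self.tables[0][j]
        if len(self.stash) < self.stash_size:
            self.stash.append(key)   # overflow goes to the stash
            return True
        return False  # a full implementation would rehash here

    def __contains__(self, key):
        return any(self.tables[i][self._slot(i, key)] == key
                   for i in (0, 1)) or key in self.stash

t = CuckooWithStash()
inserted = [k for k in range(10) if t.insert(k)]
print(len(inserted), "keys inserted;", len(t.stash), "in the stash")
```

    The paper's point is that the two hash functions here need only be 2-wise independent (simple salted families suffice) while keeping the $O(1/n^{s+1})$ rehash probability.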

  10. Dynamic External Hashing: The Limit of Buffering

    CERN Document Server

    Wei, Zhewei; Zhang, Qin

    2008-01-01

    Hash tables are one of the most fundamental data structures in computer science, in both theory and practice. They are especially useful in external memory, where their query performance approaches the ideal cost of just one disk access. Knuth gave an elegant analysis showing that with some simple collision resolution strategies such as linear probing or chaining, the expected average number of disk I/Os of a lookup is merely $1+1/2^{\\Omega(b)}$, where each I/O can read a disk block containing $b$ items. Inserting a new item into the hash table also costs $1+1/2^{\\Omega(b)}$ I/Os, which is again almost the best one can do if the hash table is entirely stored on disk. However, this assumption is unrealistic since any algorithm operating on an external hash table must have some internal memory (at least $\\Omega(1)$ blocks) to work with. The availability of a small internal memory buffer can dramatically reduce the amortized insertion cost to $o(1)$ I/Os for many external memory data structures. In this paper we...

  11. One-way hash function construction based on the spatiotemporal chaotic system

    Institute of Scientific and Technical Information of China (English)

    Luo Yu-Ling; Du Ming-Hui

    2012-01-01

    Based on the spatiotemporal chaotic system, a novel algorithm for constructing a one-way hash function is proposed and analysed. The message is divided into fixed length blocks. Each message block is processed by the hash compression function in parallel. The hash compression is constructed based on the spatiotemporal chaos. In each message block, the ASCII code and its position in the whole message block chain constitute the initial conditions and the key of the hash compression function. The final hash value is generated by further compressing the mixed result of all the hash compression values. Theoretic analyses and numerical simulations show that the proposed algorithm presents high sensitivity to the message and key, good statistical properties, and strong collision resistance.
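    For illustration only, a toy scalar analogue (our sketch, not the paper's spatiotemporal construction, and not cryptographically vetted) shows the general recipe behind chaos-based hashing: message bytes perturb the state of a chaotic map, and the final orbit is quantized into digest bytes.

```python
# Toy chaos-based hash: each byte nudges the state of a logistic map,
# the map is iterated to amplify the perturbation, and the final orbit
# is quantized into a 16-byte digest. Parameters are illustrative.
def chaotic_hash(message: bytes, rounds: int = 8) -> bytes:
    x, r = 0.5, 3.99                      # logistic map: x -> r*x*(1-x)
    for byte in message:
        x = (x + (byte + 1) / 257.0) % 1.0 or 0.3   # avoid the fixed point 0
        for _ in range(rounds):
            x = r * x * (1.0 - x)
    digest = bytearray()
    for _ in range(16):                   # squeeze 16 bytes from the orbit
        x = r * x * (1.0 - x)
        digest.append(int(x * 256) % 256)
    return bytes(digest)

a = chaotic_hash(b"hello world")
b = chaotic_hash(b"hello worle")          # one-byte change
print(a.hex())
print(b.hex())
```

    The sensitivity to a one-byte change comes from the exponential divergence of nearby chaotic orbits; real proposals, like the one above, use coupled (spatiotemporal) lattices and keyed initial conditions rather than a single scalar map.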

  12. Automated Techniques for Hash Function and Block Cipher Cryptanalysis (Automatische technieken voor hashfunctie- en blokcijfercryptanalyse)

    OpenAIRE

    2012-01-01

    Cryptography is the study of mathematical techniques that ensure the confidentiality and integrity of information. This relatively new field started out as classified military technology, but has now become commonplace in our daily lives. Cryptography is not only used in banking cards, secure websites and electronic signatures, but also in public transport cards, car keys and garage door openers. Two building blocks in the domain of cryptography are block ciphers and (cryptographic) hash funct...

  13. An Ad Hoc Adaptive Hashing Technique for Non-Uniformly Distributed IP Address Lookup in Computer Networks

    Directory of Open Access Journals (Sweden)

    Christopher Martinez

    2007-02-01

    Full Text Available Hashing algorithms have long been widely adopted to design a fast address look-up process which involves a search through a large database to find a record associated with a given key. Hashing algorithms involve transforming a key inside each target data to a hash value hoping that the hashing would render the database a uniform distribution with respect to this new hash value. The closer the final distribution is to uniform, the less search time is required when a query is made. When the database is already key-wise uniformly distributed, any regular hashing algorithm, such as bit-extraction, bit-group XOR, etc., would easily lead to a statistically perfect uniform distribution after the hashing. On the other hand, if records in the database are instead not uniformly distributed, as in almost all known practical applications, then even different regular hash functions would lead to very different performance. When the target database has a key with a highly skewed value distribution, the performance delivered by regular hashing algorithms usually becomes far from desirable. This paper aims at designing a hashing algorithm to achieve the highest probability of leading to a uniformly distributed hash result from a non-uniformly distributed database. An analytical pre-process on the original database is first performed to extract critical information that would significantly benefit the design of a better hashing algorithm. This process includes sorting on the bits of the key to prioritize their use in the XOR hashing sequence, or in simple bit extraction, or even a combination of both. Such an ad hoc hash design is critical to adapting to all real-time situations when there exists a changing (and/or expanding) database with an irregular non-uniform distribution. Simulation results show significant improvement on randomly generated data as well as real data.
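    The bit-sorting pre-process can be illustrated with a small sketch (our own toy version of the idea; the paper's actual ranking and XOR-folding rules are more elaborate): measure how evenly each key bit splits the database, then build the hash from the most balanced bits, so a skewed key set still hashes near-uniformly.

```python
# Adaptive bit-extraction hashing: rank key bits by how evenly they
# split the key set, then form the hash index from the most balanced bits.
def bit_balance(keys, bit):
    ones = sum((k >> bit) & 1 for k in keys)
    return abs(ones - len(keys) / 2)      # 0 = perfectly balanced bit

def pick_hash_bits(keys, n_bits, width=32):
    ranked = sorted(range(width), key=lambda b: bit_balance(keys, b))
    return ranked[:n_bits]                # most balanced bits first

def adaptive_hash(key, chosen_bits):
    h = 0
    for i, b in enumerate(chosen_bits):
        h |= ((key >> b) & 1) << i
    return h

# Skewed IP-like keys: the high bits are identical, only low bits vary.
keys = [0xC0A80000 | i for i in range(64)]
bits = pick_hash_bits(keys, 6)
buckets = {adaptive_hash(k, bits) for k in keys}
print(len(buckets), "distinct buckets for", len(keys), "keys")
```

    A naive extractor that took the top 6 bits of these keys would map all 64 entries to one bucket; ranking by balance recovers the varying bits automatically.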

  14. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

    Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, albeit advanced, atomic operations have been proposed. These data structures allow for correct and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase...... misses respectively, leading to 21% fewer memory stall cycles. Our experiments show that our hash table scales close to linearly with the number of threads and outperforms, in throughput, other lock-free hash tables by 19%...

  15. Superposition Attacks on Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus

    2011-01-01

    Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as computing a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security...

  16. One-way hash function based on hyper-chaotic cellular neural network

    Institute of Scientific and Technical Information of China (English)

    Yang Qun-Ting; Gao Tie-Gang

    2008-01-01

    The design of an efficient one-way hash function with good performance is a hot spot in modern cryptography research. In this paper, a hash function construction method based on a cellular neural network with hyper-chaos characteristics is proposed. First, the chaos sequence is obtained by iterating the cellular neural network with the Runge-Kutta algorithm, and then the chaos sequence is iterated with the message. The hash code is obtained through the corresponding transform of the latter chaos sequence. Simulation and analysis demonstrate that the new method has the merits of convenience, high sensitivity to initial values, and good hash performance, especially strong stability.

  17. Adaptive Image Encryption Algorithm Based on Chaos Theory and Hash Function%基于混沌理论和Hash函数的自适应图像加密算法

    Institute of Scientific and Technical Information of China (English)

    赵希奇; 柏逢明; 吕贵花

    2014-01-01

    In this paper, an image encryption algorithm based on chaos theory and Hash functions is proposed to achieve digital image encryption. The algorithm derives a pixel scrambling matrix from an extracted Lorenz chaotic signal and a Hash function and scrambles the pixels of the image accordingly; it then performs adaptive diffusion of the image gray levels using a piecewise Logistic map. Theoretical analysis and simulation results show that the algorithm offers a large key space, strong resistance to statistical attacks, effective resistance to entropy attacks, and strong key sensitivity, and can therefore reach the corresponding security level.

  18. Pythagorean Triples and Cryptographic Coding

    CERN Document Server

    Kak, Subhash

    2010-01-01

    This paper summarizes basic properties of primitive Pythagorean triples (PPTs) and shows that each PPT belongs to one of six different classes. Mapping an ordered sequence of PPTs into a corresponding sequence of these six classes makes it possible to use them in cryptography. We pose problems whose solution would facilitate such cryptographic application.

  19. Cryptographically enforced search pattern hiding

    NARCIS (Netherlands)

    Bösch, Christoph Tobias

    2015-01-01

    Searchable encryption is a cryptographic primitive that allows a client to out- source encrypted data to an untrusted storage provider, while still being able to query the data without decrypting. To allow the server to perform the search on the encrypted data, a so-called trapdoor is generated by t

  20. Probability Distributions over Cryptographic Protocols

    Science.gov (United States)

    2009-06-01

    exception. Cryptyc integrates use of pattern-matching in the spi calculus framework, which in turn allows the specification of nested cryptographic ...

  1. Cryptographic Protocols Based on Root Extracting

    DEFF Research Database (Denmark)

    Koprowski, Maciej

    In this thesis we design new cryptographic protocols, whose security is based on the hardness of root extracting or, more specifically, the RSA problem. First we study the problem of root extraction in finite Abelian groups, where the group order is unknown. This is a natural generalization of the...... construction based on root extracting. As an example of this, we modify the Cramer-Shoup signature scheme such that it becomes a generic algorithm. We discuss then implementing it in RSA groups without the original restriction that the modulus must be a product of safe primes. It can also be implemented in class......, providing a currently acceptable level of security. This allows us to propose the first practical blind signature scheme provably secure without relying on heuristics called the random oracle model (ROM). We obtain the protocol for issuing blind signatures by implementing our modified Fischlin's signing algorithm...

  2. Geometric hashing and object recognition

    Science.gov (United States)

    Stiller, Peter F.; Huber, Birkett

    1999-09-01

    We discuss a new geometric hashing method for searching large databases of 2D images (or 3D objects) to match a query built from geometric information presented by a single 3D object (or single 2D image). The goal is to rapidly determine a small subset of the images that potentially contain a view of the given object (or a small set of objects that potentially match the item in the image). Since this must be accomplished independent of the pose of the object, the objects and images, which are characterized by configurations of geometric features such as points, lines and/or conics, must be treated using a viewpoint invariant formulation. We are therefore forced to characterize these configurations in terms of their 3D and 2D geometric invariants. The crucial relationship between the 3D geometry and its 'residual' in 2D is expressible as a correspondence (in the sense of algebraic geometry). Computing a set of generating equations for the ideal of this correspondence gives a complete characterization of the viewpoint-independent relationships between an object and all of its possible images. Once a set of generators is in hand, it can be used to devise efficient recognition algorithms and to give an efficient geometric hashing scheme. This requires exploiting the form and symmetry of the equations. The result is a multidimensional access scheme whose efficiency we examine. Several potential directions for improving this scheme are also discussed. Finally, in a brief appendix, we discuss an alternative approach to invariants for generalized perspective that replaces the standard invariants by a subvariety of a Grassmannian. The advantage of this is that one can circumvent many annoying general position assumptions and arrive at invariant equations (in the Plücker coordinates) that are more numerically robust in applications.

  3. Robust image hashing based on random Gabor filtering and dithered lattice vector quantization.

    Science.gov (United States)

    Li, Yuenan; Lu, Zheming; Zhu, Ce; Niu, Xiamu

    2012-04-01

    In this paper, we propose a robust-hash function based on random Gabor filtering and dithered lattice vector quantization (LVQ). In order to enhance the robustness against rotation manipulations, the conventional Gabor filter is adapted to be rotation invariant, and the rotation-invariant filter is randomized to facilitate secure feature extraction. Particularly, a novel dithered-LVQ-based quantization scheme is proposed for robust hashing. The dithered-LVQ-based quantization scheme is well suited for robust hashing with several desirable features, including better tradeoff between robustness and discrimination, higher randomness, and secrecy, which are validated by analytical and experimental results. The performance of the proposed hashing algorithm is evaluated over a test image database under various content-preserving manipulations. The proposed hashing algorithm shows superior robustness and discrimination performance compared with other state-of-the-art algorithms, particularly in the robustness against rotations (of large degrees).

  4. A novel method for one-way hash function construction based on spatiotemporal chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren Haijun [College of Software Engineering, Chongqing University, Chongqing 400044 (China); State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China)], E-mail: jhren@cqu.edu.cn; Wang Yong; Xie Qing [Key Laboratory of Electronic Commerce and Logistics of Chongqing, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Yang Huaqian [Department of Computer and Modern Education Technology, Chongqing Education of College, Chongqing 400067 (China)

    2009-11-30

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretic analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required by practical keyed hash functions.

  5. One-Time Password System with Infinite Nested Hash Chains

    Science.gov (United States)

    Eldefrawy, Mohamed Hamdy; Khan, Muhammad Khurram; Alghathbar, Khaled

    Hash chains have been used as OTP generators. Lamport hashes have an intensive computation cost and a chain-length restriction. A solution based on signature chains addressed this by involving public-key techniques, which increased the average computation cost. Although a later idea reduced the user's computation by sharing it with the host, it could not overcome the length limitation. The scheme proposed by Chefranov to eliminate the length restriction suffered from communication cost overhead. Here we present an algorithm that overcomes all of these shortcomings by involving two different nested hash chains: one dedicated to seed updating and the other used for OTP production. Our algorithm provides forward, non-restricted OTP generation, and we propose a random challenge-response operation mode. We analyze our proposal from the viewpoints of security and performance in comparison with the other algorithms.
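    A minimal sketch of the two-nested-chain idea: one SHA-256 chain advances the seed while a second derives each OTP from the current seed, so generation is forward-only and no fixed-length chain is ever exhausted. The domain-separation tags and the class interface are assumptions for illustration, not the authors' exact construction.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class NestedChainOTP:
    # Sketch of the two nested chains: H(b"seed" + .) evolves the seed
    # (seed-updating chain) and H(b"otp" + .) derives each password from
    # the current seed (OTP-production chain).
    def __init__(self, shared_seed: bytes):
        self.seed = shared_seed

    def next_otp(self) -> bytes:
        otp = H(b"otp" + self.seed)         # OTP-production chain step
        self.seed = H(b"seed" + self.seed)  # seed-updating chain step
        return otp
```

    A client and host instantiated with the same shared seed stay in lockstep, which is what lets the host verify each OTP without storing a precomputed chain.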

  6. Message Encryption Using Deceptive Text and Randomized Hashing

    Directory of Open Access Journals (Sweden)

    VAMSIKRISHNA YENIKAPATI,

    2011-02-01

    Full Text Available In this paper a new approach to message encryption using the concept of deceptive text is proposed. In this scheme we do not need to send the encrypted plain text to the receiver; instead, we send a meaningful deceptive text and an encrypted special index file. The original message is embedded in the meaningful deceptive text, and the positions of the characters of the plain text within the deceptive text are stored in the index file. The receiver decrypts the index file and recovers the original message from the received deceptive text. Authentication is achieved by verifying the hash value of the plaintext, created by a message digest algorithm, at the receiver side. In order to prevent collision attacks on hashing algorithms that are intended for use with standard digital signature algorithms, we provide an extra layer of security using a randomized hashing method.
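    The embedding step can be illustrated with a toy sketch: each plaintext character is mapped to a position where it occurs in the deceptive cover text, and the resulting position list plays the role of the index file. The real scheme would additionally encrypt this index file and authenticate the plaintext with a randomized hash; both steps are omitted here.

```python
def embed(plain: str, deceptive: str) -> list:
    # Map each plaintext character to a position where it occurs in the
    # cover text; the position list is the (toy) index file.
    positions, cursor = [], {}
    for ch in plain:
        idx = deceptive.find(ch, cursor.get(ch, 0))
        if idx == -1:
            idx = deceptive.find(ch)  # wrap around and reuse positions
        if idx == -1:
            raise ValueError("character %r missing from cover text" % ch)
        positions.append(idx)
        cursor[ch] = idx + 1
    return positions

def extract(positions, deceptive: str) -> str:
    # The receiver reads the characters back out of the cover text.
    return "".join(deceptive[i] for i in positions)
```

    Note that the cover text must contain every character of the message; a real deployment would pick or generate the deceptive text accordingly.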

  7. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to hash-then-sign digital signature schemes such as DSS and RSA, in order to free them from reliance on the collision resistance of the hash functions. They have shown that to forge an RMX...

  8. Cryptographic Combinatorial Clock-Proxy Auctions

    Science.gov (United States)

    Parkes, David C.; Rabin, Michael O.; Thorpe, Christopher

    We present a cryptographic protocol for conducting efficient, provably correct and secrecy-preserving combinatorial clock-proxy auctions. The “clock phase” functions as a trusted auction despite price discovery: bidders submit encrypted bids, prove for themselves that they meet activity rules, and can compute total demand and thus verify price increases without revealing any information about individual demands. In the sealed-bid “proxy phase”, all bids are revealed to the auctioneer via time-lapse cryptography and a branch-and-bound algorithm is used to solve the winner-determination problem. Homomorphic encryption is used to prove the correctness of the solution and establishes that correctness for any interested party. Winner determination remains an NP-hard optimization problem, but the use of homomorphic encryption imposes additional computational time that is only linear in the size of the branch-and-bound search tree, and thus roughly linear in the original (search-based) computational time. The result is a solution that avoids, in the usual case, the exponential complexity of previous cryptographically secure combinatorial auctions.

  9. An improved hash-based RFID security authentication algorithm

    Institute of Scientific and Technical Information of China (English)

    王旭宇; 景凤宣; 王雨晴

    2014-01-01

    To address the security problems that arise when radio-frequency identification (RFID) is used for authentication, an authentication protocol combining a Hash function with time stamps is proposed. The tag's identification and the time-stamp data are encrypted with the Hash function for transmission and are used in authentication. A BAN-logic proof and simulation experiments on a Petri-net model of the protocol show that it has good forward security and can effectively prevent replay, location tracking, illegal access and other attacks.

  10. Topological Quantum Hashing with the Icosahedral Group

    Science.gov (United States)

    Burrello, Michele; Xu, Haitan; Mussardo, Giuseppe; Wan, Xin

    2010-04-01

    We study an efficient algorithm to hash any single-qubit gate into a braid of Fibonacci anyons represented by a product of icosahedral group elements. By representing the group elements by braid segments of different lengths, we introduce a series of pseudogroups. Joining these braid segments in a renormalization group fashion, we obtain a Gaussian unitary ensemble of random-matrix representations of braids. With braids of length O(log⁡2(1/ɛ)), we can approximate all SU(2) matrices to an average error ɛ with a cost of O(log⁡(1/ɛ)) in time. The algorithm is applicable to generic quantum compiling.

  11. Graph classification algorithm based on divide-and-conquer strategy and Hash linked list

    Institute of Scientific and Technical Information of China (English)

    孙伟; 朱正礼

    2013-01-01

    The mainstream graph data classification algorithms are based on a frequent-substructure mining strategy, which inevitably leads to repeated searches of the global data space; hence the related algorithms have low efficiency and cannot meet specific requirements. Aiming at these disadvantages, a "divide and conquer" strategy is used to design a modular data space and an algorithm that uses a Hash linked list to store addresses and support degrees. First, the original database is partitioned into a limited number of sub-modules according to fixed rules, and the gSpan algorithm is applied to each module to obtain the locally frequent sub-patterns. Then, Hash functions map the mining result of each module to a unique storage address, while the corresponding support degree is recorded to form the Hash linked list. Finally, the globally frequent sub-patterns are obtained and the graph data classifier is built. The algorithm avoids repeated searches of the global space, which substantially improves execution efficiency, and the modularized data can be loaded into memory in one pass, saving memory overhead. Experiments show that the new algorithm improves the efficiency of the classifier-construction phase by a factor of 1.2 to 3.2 over mainstream graph classification algorithms, with no loss of classification accuracy.

  12. Rationality in the Cryptographic Model

    DEFF Research Database (Denmark)

    Hubacek, Pavel

    This thesis presents results in the field of rational cryptography. In the first part we study the use of cryptographic protocols to avoid mediation and binding commitment when implementing game-theoretic equilibrium concepts. First, we concentrate on the limits of cryptographic cheap talk… The second part presents a study of the problem of verifiable delegation of computation in the rational setting. We define rational arguments, an extension of the recent concept of rational proofs into the computational setting, and give a single-round delegation scheme for the class NC1, of search problems computable by log-space uniform circuits of logarithmic depth, with a sub-linear time verifier. While our approach provides a weaker (yet arguably meaningful) guarantee of soundness, it compares favorably with each of the known delegation schemes in at least one aspect. Our protocols are simple, rely…

  13. Efficient nearest neighbors via robust sparse hashing.

    Science.gov (United States)

    Cherian, Anoop; Sra, Suvrit; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2014-08-01

    This paper presents a new nearest neighbor (NN) retrieval framework: robust sparse hashing (RSH). Our approach is inspired by the success of dictionary learning for sparse coding. Our key idea is to sparse code the data using a learned dictionary, and then to generate hash codes out of these sparse codes for accurate and fast NN retrieval. But, direct application of sparse coding to NN retrieval poses a technical difficulty: when data are noisy or uncertain (which is the case with most real-world data sets), for a query point, an exact match of the hash code generated from the sparse code seldom happens, thereby breaking the NN retrieval. Borrowing ideas from robust optimization theory, we circumvent this difficulty via our novel robust dictionary learning and sparse coding framework called RSH, by learning dictionaries on the robustified counterparts of the perturbed data points. The algorithm is applied to NN retrieval on both simulated and real-world data. Our results demonstrate that RSH holds significant promise for efficient NN retrieval against the state of the art.

  14. Raptor Codes and Cryptographic Issues

    CERN Document Server

    Malinen, Mikko

    2008-01-01

    In this paper two cryptographic methods are introduced. In the first method, the presence of a certain-size subgroup of persons can be checked before an action takes place. For this we use fragments of Raptor codes delivered to the group members. In the other method, a selection of a subset of objects can be kept secret, and it can be proven afterwards what the original selection was.

  15. Researching and implementation of reconfigurable Hash chip based on FPGA

    Institute of Scientific and Technical Information of China (English)

    Yang Xiaohui; Dai Zibin; Liu Yuanfeng; Wang Ting

    2007-01-01

    The reconfigurable cryptographic chip is an integrated circuit that is designed using the methods of reconfigurable architecture and is used for encryption and decryption. Many different cipher algorithms can be implemented flexibly with the aid of a reconfigurable cryptographic chip, which can be used in many fields. Taking the SHA-1/224/256 algorithms as an example, this article designs a reconfigurable cryptographic chip based on the ideas and methods of reconfigurable architecture. Finally, the paper gives implementation results on an FPGA of the Stratix II family of Altera Corporation and presents a promising research direction for resolving storage issues in FPGA-based hardware implementations.

  16. Quantum Communication Attacks on Classical Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre

    In the literature on cryptographic protocols, it has been studied several times what happens if a classical protocol is attacked by a quantum adversary. Usually, this is taken to mean that the adversary runs a quantum algorithm, but communicates classically with the honest players. In several cases, one can show that the protocol remains secure even under such an attack. However, there are also cases where the honest players are quantum as well, even if the protocol uses classical communication. For instance, this is the case when classical multiparty computation is used as a “subroutine” in quantum multiparty computation. Furthermore, in the future, players in a protocol may employ quantum computing simply to improve efficiency of their local computation, even if the communication is supposed to be classical. In such cases, it no longer seems clear that a quantum adversary must be limited…

  17. Internet traffic load balancing using dynamic hashing with flow volume

    Science.gov (United States)

    Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.

    2002-07-01

    Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
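    A sketch of the two ingredients, under assumed data structures: static hashing pins each flow to one link, which preserves packet order within a flow, and the DHFV-style rebalancing step moves only the single flow whose volume best fills half the load gap, rather than rehashing many small flows. The flow table layout and the selection rule are illustrative, not the paper's exact algorithm.

```python
import hashlib

def pick_link(flow_id: str, n_links: int) -> int:
    # Static hashing: a flow always maps to the same link, which preserves
    # packet order within the flow.
    digest = hashlib.md5(flow_id.encode()).hexdigest()
    return int(digest, 16) % n_links

def rebalance(loads, flows, overloaded, underloaded):
    # DHFV-style step (sketch): among flows currently on the overloaded
    # link, move the single flow whose volume is closest to half the load
    # gap. flows maps flow_id -> [link, volume]; loads is per-link load.
    gap = (loads[overloaded] - loads[underloaded]) / 2.0
    candidates = [f for f, (link, _) in flows.items() if link == overloaded]
    if not candidates:
        return None
    best = min(candidates, key=lambda f: abs(flows[f][1] - gap))
    vol = flows[best][1]
    flows[best][0] = underloaded
    loads[overloaded] -= vol
    loads[underloaded] += vol
    return best
```

    Moving one well-chosen flow, instead of rehashing, is what limits the packet reordering that plain dynamic hashing suffers from.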

  18. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  19. Sequential Computation-based Hash Nearest Neighbor Algorithm and Its Application in Spectrum Classification

    Institute of Scientific and Technical Information of China (English)

    李乡儒

    2012-01-01

    The nearest neighbor (NN) method is one of the most typical methods in spectral retrieval, automatic processing and data mining. The main problem with NN is its low efficiency. We therefore focus on the efficient-implementation problem and introduce a novel, efficient algorithm, SHNN (sequential computation-based hash nearest neighbor). In SHNN, the spectrum flux components are first decomposed and organized according to their hashing power by an orthogonal PCA transform of the spectral data; the nearest neighbor is then computed in PCA space following a sequential computation scheme, in which the set of putative nearest spectra is reduced by hashing and non-nearest spectra are rejected as early as possible. The contributions of this work are: 1) a novel algorithm, SHNN, which significantly improves the efficiency of the nearest neighbor method, the most popular spectra-mining method; 2) an investigation of its application to the classification of star, normal-galaxy and QSO spectra. The efficiency of the proposed algorithm is evaluated experimentally on spectra released by the SDSS (Sloan Digital Sky Survey). The experimental results show that SHNN improves the efficiency of the nearest neighbor method by more than 96%. Since the nearest neighbor method is one of the most popular and typical methods in spectra mining, this work is useful in a wide range of automatic spectra-analysis scenarios, for example spectra classification, spectral parameter estimation, and redshift estimation from spectra.

  20. Homomorphic Hashing for Sparse Coefficient Extraction

    CERN Document Server

    Kaski, Petteri; Nederlof, Jesper

    2012-01-01

    We study classes of Dynamic Programming (DP) algorithms which, due to their algebraic definitions, are closely related to coefficient extraction methods. DP algorithms can easily be modified to exploit sparseness in the DP table through memorization. Coefficient extraction techniques on the other hand are both space-efficient and parallelisable, but no tools have been available to exploit sparseness. We investigate the systematic use of homomorphic hash functions to combine the best of these methods and obtain improved space-efficient algorithms for problems including LINEAR SAT, SET PARTITION, and SUBSET SUM. Our algorithms run in time proportional to the number of nonzero entries of the last segment of the DP table, which presents a strict improvement over sparse DP. The last property also gives an improved algorithm for CNF SAT with sparse projections.

  1. Connected Bit Minwise Hashing

    Institute of Scientific and Technical Information of China (English)

    袁鑫攀; 龙军; 张祖平; 罗跃逸; 张昊; 桂卫华

    2013-01-01

    Minwise hashing has become a standard technique for estimating the similarity of sets (e.g., resemblance), with applications in information retrieval. While traditional minwise hashing methods store each hashed value using 64 bits, b-bit minwise hashing stores only the lowest b bits of each (minwise) hashed value (e.g., b=1 or 2). The b-bit minwise hashing algorithm gains substantial advantages in terms of computational efficiency and storage space. Based on the b-bit minwise hashing theory, a connected bit minwise hashing algorithm is proposed. An unbiased estimator of the resemblance and the storage factor of connected bit minwise hashing are derived theoretically. It is proved that connected bit minwise hashing greatly reduces the number of comparisons, and thus improves the efficiency of similarity estimation, without significant loss of accuracy. Several key parameters (e.g., precision, recall and efficiency) are analyzed, together with the availability of several estimators for connected bit minwise hashing. Theoretical analysis and experimental results demonstrate the effectiveness of this method.
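    The b-bit idea underlying the proposal can be sketched as follows: store only the lowest b bits of each minwise hash value and correct the observed match rate for the probability that two unrelated b-bit values agree by chance. Hashing (x, m) with Python's tuple hash is a cheap stand-in for the random permutations the theory assumes.

```python
import random

def bbit_minhash(s, num_perm=200, b=2, seed=0):
    # For each of num_perm hash functions, keep only the lowest b bits of
    # the minimum hash value over the set (b-bit minwise hashing).
    rng = random.Random(seed)
    masks = [rng.getrandbits(32) for _ in range(num_perm)]
    return [min(hash((x, m)) & 0xFFFFFFFF for x in s) & ((1 << b) - 1)
            for m in masks]

def estimate_resemblance(sig1, sig2, b=2):
    # Unbiased estimator: subtract the chance-collision probability 1/2**b
    # that two unrelated b-bit values happen to agree.
    match = sum(u == v for u, v in zip(sig1, sig2)) / len(sig1)
    c = 1.0 / (1 << b)
    return max(0.0, (match - c) / (1.0 - c))
```

    With b=2, each signature entry costs 2 bits instead of 64, at the price of more permutations for the same estimator variance; the connected-bit variant of the paper further reduces the number of comparisons.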

  2. Algebraic Construction and Cryptographic Properties of Rijndael Substitution Box

    Directory of Open Access Journals (Sweden)

    Shristi Deva Sinha

    2012-01-01

    Full Text Available The Rijndael algorithm was selected as the Advanced Encryption Standard in 2001 after a five-year-long security evaluation; it is well proven in terms of its strength and efficiency. The substitution box is the backbone of the cipher, and its strength lies in the simplicity of its algebraic construction. The present paper is a study of the construction of the Rijndael substitution box and of the effect of varying the design components on its cryptographic properties.

  3. On the Cell Probe Complexity of Membership and Perfect Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2001-01-01

    We study two fundamental static data structure problems, membership and perfect hashing, in Yao's cell probe model. The first space and bit probe optimal worst case upper bound is given for the membership problem. We also give a new efficient membership scheme where the query algorithm makes just...

  4. A perceptual hashing method based on luminance features

    Science.gov (United States)

    Luo, Siqing

    2011-02-01

    With the rapid development of multimedia technology, content-based searching and image authentication have become strong requirements, and image hashing techniques have been proposed to meet them. In this paper, an RST (Rotation, Scaling, and Translation) resistant image hash algorithm is presented. In this method, the geometric distortions are extracted and adjusted by normalization. The features of the image are generated from the high-rank moments of the luminance distribution. With the help of the efficient image-representation capability of high-rank moments, the robustness and discrimination of the proposed method are improved. The experimental results show that the proposed method is better than some existing methods in robustness under rotation attack.

  5. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs which ignore the relationship in a higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  6. b-Bit Minwise Hashing for Large-Scale Linear SVM

    CERN Document Server

    Li, Ping; Konig, Christian

    2011-01-01

    In this paper, we propose to (seamlessly) integrate b-bit minwise hashing with linear SVM to substantially improve the training (and testing) efficiency using much smaller memory, with essentially no loss of accuracy. Theoretically, we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit minwise hashing matrix are all positive definite matrices (kernels). Interestingly, our proof for the positive definiteness of the b-bit minwise hashing kernel naturally suggests a simple strategy to integrate b-bit hashing with linear SVM. Our technique is particularly useful when the data can not fit in memory, which is an increasingly critical issue in large-scale machine learning. Our preliminary experimental results on a publicly available webspam dataset (350K samples and 16 million dimensions) verified the effectiveness of our algorithm. For example, the training time was reduced to merely a few seconds. In addition, our technique can be easily extended to many other linear and nonlinear machine...

  7. Two Methods of Building Hash-tree for Mining Frequent Itemsets

    Institute of Scientific and Technical Information of China (English)

    杜孝平; 罗宪; 唐世渭

    2002-01-01

    Hash-tree is an important data structure used in Apriori-like algorithms for mining frequent itemsets. However, no study so far guarantees that the hash-tree can be built successfully every time. In this paper, we propose a static method and a dynamic one for building the hash-tree. In the two methods, it is easy to decide the size of the hash table, the hash function and the number of itemsets stored in each leaf node of the hash-tree, and the methods ensure that the hash-tree is built successfully in all cases.
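    A sketch of a dynamic construction in this spirit: interior nodes hash on the next item of an itemset, and a leaf splits when it overflows, but only while unhashed items remain, which is what keeps the build from failing on pathological inputs. The table size and leaf capacity are illustrative choices, not the paper's parameters.

```python
class HashTreeNode:
    # Hash-tree as used by Apriori-like algorithms: interior nodes hash on
    # the item at the current depth; leaves hold whole itemsets.
    def __init__(self, table_size=7, leaf_cap=3):
        self.table_size, self.leaf_cap = table_size, leaf_cap
        self.children, self.itemsets = {}, []
        self.is_leaf = True

    def insert(self, itemset, depth=0):
        if self.is_leaf:
            self.itemsets.append(itemset)
            # Split only while there are still items left to hash on,
            # guaranteeing the build always terminates.
            if len(self.itemsets) > self.leaf_cap and depth < len(itemset):
                self.is_leaf = False
                stored, self.itemsets = self.itemsets, []
                for it in stored:
                    self._child(it, depth).insert(it, depth + 1)
        else:
            self._child(itemset, depth).insert(itemset, depth + 1)

    def _child(self, itemset, depth):
        bucket = hash(itemset[depth]) % self.table_size
        if bucket not in self.children:
            self.children[bucket] = HashTreeNode(self.table_size, self.leaf_cap)
        return self.children[bucket]

    def contains(self, itemset, depth=0):
        if self.is_leaf:
            return itemset in self.itemsets
        bucket = hash(itemset[depth]) % self.table_size
        child = self.children.get(bucket)
        return child is not None and child.contains(itemset, depth + 1)
```

    During candidate counting, a transaction is matched against the tree by following every bucket its items hash to, so only a small fraction of the candidate itemsets is ever inspected.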

  8. On the Security of Multivariate Hash Functions

    Institute of Scientific and Technical Information of China (English)

    LUO Yi-yuan; LAI Xue-jia

    2009-01-01

    Multivariate hash functions are a type of hash function whose compression function is explicitly defined as a sequence of multivariate equations. Billet et al designed the hash function MQ-HASH and Ding et al proposed a similar construction. In this paper, we analyze the security of multivariate hash functions and conclude that low-degree multivariate functions such as MQ-HASH are neither pseudo-random nor unpredictable. There may be trivial collisions and fixed-point attacks if the parameters of the compression function are not carefully chosen. They are also not computation-resistant, which makes MAC forgery easy.

  9. Summary Report on Rational Cryptographic Protocols

    DEFF Research Database (Denmark)

    Alwen, Joël; Cachin, Christian; Pereira, Olivier

    This report gives an overview of some of the models and techniques in rational cryptography, an emerging research area which in particular uses the methodologies and techniques of game theory to analyze cryptographic protocols and which uses cryptographic protocol theory to implement game theoretic...

  10. Cryptographer

    Science.gov (United States)

    Sullivan, Megan

    2005-01-01

    For the general public, the field of cryptography has recently become famous as the method used to uncover secrets in Dan Brown's fictional bestseller, The Da Vinci Code. But the science of cryptography has been popular for centuries--secret hieroglyphics discovered in Egypt suggest that code-making dates back almost 4,000 years. In today's…

  11. The Laws of Physics and Cryptographic Security

    CERN Document Server

    Rudolph, T

    2002-01-01

    This paper consists of musings that originate mainly from conversations with other physicists, as together we've tried to learn some cryptography, but also from conversations with a couple of classical cryptographers. The main thrust of the paper is an attempt to explore the ramifications for cryptographic security of incorporating physics into our thinking at every level. I begin by discussing two fundamental cryptographic principles, namely that security must not rely on secrecy of the protocol and that our local environment must be secure, from a physical perspective. I go on to explain why by definition a particular cryptographic task, oblivious transfer, is inconsistent with a belief in the validity of quantum mechanics. More precisely, oblivious transfer defines states and operations that do not exist in any (complex) Hilbert space. I go on to argue the fallaciousness of a "black box" approach to quantum cryptography, in which classical cryptographers just trust physicists to provide them with secure qu...

  12. AN INTERACTIVE VISUALIZATION TOOL FOR ANIMATING BEHAVIOR OF CRYPTOGRAPHIC PROTOCOLS

    Directory of Open Access Journals (Sweden)

    Mabroka Maeref

    2015-03-01

    Full Text Available Cryptography and network security is a difficult subject to understand, mainly because of the complexity of security protocols and the mathematical rigour required to understand encryption algorithms. Realizing the need for an interactive visualization tool to facilitate the understanding of cryptographic concepts and protocols, several tools have been developed. However, these tools cannot be easily adapted to animate different protocols. The aim of this paper is to propose an interactive visualization tool, called the Cryptographic Protocol Animator (CPAnim). The tool enables a student to specify a protocol and gain knowledge about the impact of its behavior. The protocol is specified using a scenario-based approach and is demonstrated as a number of scenes displaying a complete scenario. The effectiveness of this tool was tested using an empirical evaluation method. The results show that the tool was effective in meeting its learning objectives.

  13. The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases

    Directory of Open Access Journals (Sweden)

    A.T. Akinwale

    2009-05-01

    Full Text Available In this work, an attempt is made to select three good hash functions which uniformly distribute hash values, permute their internal states and allow the input bits to generate different output bits. These functions are used at different levels of hash tables that are coded in the Java programming language, and a sizeable number of data records serve as primary data for testing the performance. The results show that two-level hash tables with three different hash functions give superior performance over a one-level hash table with two hash functions or a zero-level hash table with one function, in terms of reducing key conflicts and quickly looking up a particular element. The result helps reduce the complexity of the join operation in a query language from O(n²) to O(1) by placing larger query results, if any, in multilevel hash tables with multiple hash functions, generating shorter query results.
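    A two-level table along the lines described, under assumed sizes and hash functions: one hash function selects the first-level slot and a second selects the bucket inside it, so two keys conflict only when both functions collide, which is what shrinks the chains that a single-function table would accumulate.

```python
def h1(key):
    # First-level hash; tuple tagging yields two independent-looking
    # functions from Python's built-in hash (an illustrative choice).
    return hash(("level-1", key))

def h2(key):
    return hash(("level-2", key))

class TwoLevelHashTable:
    def __init__(self, size1=97, size2=11):
        self.size1, self.size2 = size1, size2
        # Each first-level slot holds a small second-level table of buckets.
        self.slots = [[[] for _ in range(size2)] for _ in range(size1)]

    def _bucket(self, key):
        return self.slots[h1(key) % self.size1][h2(key) % self.size2]

    def put(self, key, value):
        bucket = self._bucket(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value  # overwrite existing key
                return
        bucket.append([key, value])

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

    For the join-acceleration use case in the abstract, the larger relation would be loaded into such a table once, so each probe from the other relation is a near-constant-time lookup.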

  14. Cross-Modality Hashing with Partial Correspondence

    OpenAIRE

    Gu, Yun; Xue, Haoyang; Yang, Jie

    2015-01-01

    Learning a hashing function for cross-media search is very desirable due to its low storage cost and fast query speed. However, data crawled from the Internet cannot always guarantee good correspondence among different modalities, which affects the learning of the hashing function. In this paper, we focus on cross-modal hashing with partially corresponded data. The data without full correspondence are made use of to enhance the hashing performance. The experiments on Wiki and NUS-WIDE datasets de...

  15. Weighted Hashing with Multiple Cues for Cell-Level Analysis of Histopathological Images.

    Science.gov (United States)

    Zhang, Xiaofan; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Recently, content-based image retrieval has been investigated for histopathological image analysis, focusing on improving the accuracy and scalability. The main motivation is to interpret a new image (i.e., query image) by searching among a potentially large-scale database of training images in real-time. Hashing methods have been employed because of their promising performance. However, most previous works apply hashing algorithms on the whole images, while the important information of histopathological images usually lies in individual cells. In addition, they usually only hash one type of features, even though it is often necessary to inspect multiple cues of cells. Therefore, we propose a probabilistic-based hashing framework to model multiple cues of cells for accurate analysis of histopathological images. Specifically, each cue of a cell is compressed as binary codes by kernelized and supervised hashing, and the importance of each hash entry is determined adaptively according to its discriminativity, which can be represented as probability scores. Given these scores, we also propose several feature fusion and selection schemes to integrate their strengths. The classification of the whole image is conducted by aggregating the results from multiple cues of all cells. We apply our algorithm on differentiating adenocarcinoma and squamous carcinoma, i.e., two types of lung cancers, using a large dataset containing thousands of lung microscopic tissue images. It achieves 90.3% accuracy by hashing and retrieving multiple cues of half-million cells.

  16. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    Science.gov (United States)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  19. Authenticated hash tables

    DEFF Research Database (Denmark)

    Triandopoulos, Nikolaos; Papamanthou, Charalampos; Tamassia, Roberto

    2008-01-01

    Hash tables are fundamental data structures that optimally answer membership queries. Suppose a client stores n elements in a hash table that is outsourced at a remote server so that the client can save space or achieve load balancing. Authenticating the hash table functionality, i.e., verifying ...

  20. Cryptographic Protocols under Quantum Attacks

    CERN Document Server

    Lunemann, Carolin

    2011-01-01

    The realm of this thesis is cryptographic protocol theory in the quantum world. We study the security of quantum and classical protocols against adversaries that are assumed to exploit quantum effects to their advantage. Security in the quantum world means that quantum computation does not jeopardize the assumptions underlying the protocol construction. Moreover, we encounter additional setbacks in the security proofs, mostly due to the fact that some well-known classical proof techniques are forbidden by certain properties of a quantum environment. Interestingly, we can exploit some of those very same properties to the benefit of quantum cryptography. Thus, this work lies right at the heart of the conflict between the highly powerful effects and the rather demanding conditions of the quantum world.

  1. Quicksort, largest bucket, and min-wise hashing with limited independence

    DEFF Research Database (Denmark)

    Knudsen, Mathias Bæk Tejs; Stöckel, Morten

    2015-01-01

    Randomized algorithms and data structures are often analyzed under the assumption of access to a perfect source of randomness. The most fundamental metric used to measure how “random” a hash function or a random number generator is, is its independence: a sequence of random variables is said...... being more practical. We provide new bounds for randomized quicksort, min-wise hashing and largest bucket size under limited independence. Our results can be summarized as follows. Randomized Quicksort. When pivot elements are computed using a 5-independent hash function, Karloff and Raghavan, J.ACM’93...

  2. Preventing Real-Time Packet Classification Using Cryptographic Primitives

    Directory of Open Access Journals (Sweden)

    N.Vasumathi

    2014-03-01

    Full Text Available Jamming attacks are especially harmful when ensuring the dependability of wireless communication. Typically, jamming has been addressed under an external threat model. Adversaries with internal knowledge of protocol specifications and network secrets, however, can launch low-effort jamming attacks that are difficult to detect and counter. This work addresses the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, specifically targeting messages of high importance. The advantages of selective jamming in terms of network performance degradation and adversary effort are illustrated by two case studies: a selective attack on TCP and one on routing. Selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To avoid these attacks, four schemes are developed: the All-Or-Nothing-Transformation Hiding Scheme (AONT-HS), in which a pseudo-message is added to the message before transformation and encryption; the Strong Hiding Commitment Scheme (SHCS), which uses off-the-shelf symmetric encryption; the Puzzle-Based Hiding Scheme (PBHS), which uses time-lock and hash puzzles; and the Nonce-based Authenticated Encryption Scheme (N-AES), which uses a nonce for encryption. These schemes prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes.

  3. Spongent: A lightweight hash function

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knežević, Miroslav; Leander, Gregor

    2011-01-01

    This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations...... of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches...
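    The sponge mode underlying spongent is easy to state: XOR message blocks into the r-bit rate part of a b = r + c bit state, apply the permutation, then squeeze output blocks from the rate part. The sketch below uses a toy mixing permutation of our own in place of spongent's present-type permutation; the state width, padding, and output length are illustrative assumptions.

    ```python
    def sponge_hash(message: bytes, rate: int = 8, capacity: int = 24,
                    out_len: int = 16) -> bytes:
        """Generic sponge: absorb message bytes into the rate part of the
        state, then squeeze digest bytes out of it.  The permutation here
        is a toy mixer, NOT spongent's present-type permutation."""
        width = rate + capacity            # b = r + c bits of state
        mask = (1 << width) - 1

        def permute(s: int) -> int:
            for rnd in range(8):           # a few multiply/rotate/xor rounds
                s = (s * 0x9E3779B1 + rnd) & mask
                s ^= ((s >> 7) | (s << (width - 7))) & mask
            return s

        state = 0
        for b in message + b"\x80":        # absorb, with minimal 10* padding
            state = permute(state ^ b)
        out = bytearray()
        while len(out) < out_len:          # squeeze one rate-sized byte at a time
            out.append(state & 0xFF)
            state = permute(state)
        return bytes(out)
    ```

    The "hermetic sponge strategy" mentioned above amounts to claiming no structural distinguishers for the permutation, so that the generic sponge security bounds apply.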

  4. Robust audio hashing for audio authentication watermarking

    Science.gov (United States)

    Zmudzinski, Sascha; Steinebach, Martin

    2008-02-01

    Current systems and protocols based on cryptographic methods for integrity and authenticity verification of media data do not distinguish between legitimate signal transformation and malicious tampering that manipulates the content. Furthermore, they usually provide no localization or assessment of the relevance of such manipulations with respect to human perception or semantics. We present an algorithm for a robust message authentication code in the context of content-fragile authentication watermarking to verify the integrity of audio recordings by means of robust audio fingerprinting. Experimental results show that the proposed algorithm provides both a high level of distinction between perceptually different audio data and a high robustness against signal transformations that do not change the perceived information. Furthermore, it is well suited for integration in a content-based authentication watermarking system.

  5. K-Medoids-Based Random Biometric Pattern for Cryptographic Key Generation

    Science.gov (United States)

    Garcia-Baleon, H. A.; Alarcon-Aquino, V.; Starostenko, O.

    In this paper we report an approach for cryptographic key generation based on keystroke dynamics and the k-medoids algorithm. The stages that comprise the approach are training-enrollment and user verification. The proposed approach is able to verify the identity of individuals off-line avoiding the use of a centralized database. The performance of the proposed approach is assessed using 20 samples of keystroke dynamics from 20 different users. Simulation results show a false acceptance rate (FAR) of 5.26% and a false rejection rate (FRR) of 10%. The cryptographic key released by the proposed approach may be used in several encryption algorithms.

  6. A Rational Approach to Cryptographic Protocols

    CERN Document Server

    Caballero-Gil, P; Bruno-Castañeda, C; 10.1016/j.mcm.2006.12.013

    2010-01-01

    This work initiates an analysis of several cryptographic protocols from a rational point of view using a game-theoretical approach, which allows us to represent not only the protocols but also possible misbehaviours of parties. Concretely, several concepts of two-person games and of two-party cryptographic protocols are combined here in order to model the latter as the former. One of the main advantages of analysing a cryptographic protocol in the game-theory setting is the possibility of describing improved and stronger cryptographic solutions, because possible adversarial behaviours may be taken into account directly. With those tools, protocols can be studied in a malicious model in order to find equilibrium conditions that make it possible to protect honest parties against all possible strategies of adversaries.

  7. Quantum walks public key cryptographic system

    OpenAIRE

    Vlachou, C; Rodrigues, J.; Mateus, P.; Paunković, N.; Souto, A.

    2016-01-01

    Quantum Cryptography is a rapidly developing field of research that benefits from the properties of Quantum Mechanics in performing cryptographic tasks. Quantum walks are a powerful model for quantum computation and very promising for quantum information processing. In this paper, we present a quantum public-key cryptographic system based on quantum walks. In particular, in the proposed protocol the public key is given by a quantum state generated by performing a quantum walk. We show that th...

  8. Construction and analysis of cryptographic functions

    CERN Document Server

    Budaghyan, Lilya

    2015-01-01

    This book covers novel research on construction and analysis of optimal cryptographic functions such as almost perfect nonlinear (APN), almost bent (AB), planar and bent functions. These functions have optimal resistance to linear and/or differential attacks, which are the two most powerful attacks on symmetric cryptosystems. Besides cryptographic applications, these functions are significant in many branches of mathematics and information theory including coding theory, combinatorics, commutative algebra, finite geometry, sequence design and quantum information theory. The author analyzes equ

  9. Supervised Discrete Hashing With Relaxation.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Sun, Zhenan; Tao, Dacheng; Tan, Tieniu

    2016-12-29

    Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called "supervised discrete hashing with relaxation" (SDHR), based on "supervised discrete hashing" (SDH). SDH uses ordinary least squares regression and traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and, therefore, more accurately measures the classification error of the regression model and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR.

  10. Spongent: A lightweight hash function

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knežević, Miroslav; Leander, Gregor

    2011-01-01

    in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms...

  11. Advances in Hash Function Cryptanalysis

    NARCIS (Netherlands)

    Stevens, M.M.J.

    2012-01-01

    When significant weaknesses are found in cryptographic primitives on which the everyday security of the Internet relies, it is important that they are replaced by more secure alternatives, even if the weaknesses are only theoretical. This is clearly emphasized by our construction of a (purposely cri

  12. Fast and accurate hashing via iterative nearest neighbors expansion.

    Science.gov (United States)

    Jin, Zhongming; Zhang, Debing; Hu, Yao; Lin, Shiding; Cai, Deng; He, Xiaofei

    2014-11-01

    Recently, hashing techniques have been widely applied to the approximate nearest neighbor search problem in many real applications. The basic idea of these approaches is to generate binary codes for data points that preserve the similarity between any two of them. Given a query, instead of performing a linear scan of the entire database, the hashing method can perform a linear scan of the points whose Hamming distance to the query is not greater than r_h, where r_h is a constant. However, in order to find the true nearest neighbors, both the locating time and the linear scan time are proportional to O(∑_{i=0}^{r_h} C(c, i)) (c is the code length), which increases exponentially as r_h increases. To address this limitation, we propose a novel algorithm named iterative expanding hashing in this paper, which builds an auxiliary index based on an offline-constructed nearest neighbor table to avoid large r_h. This auxiliary index can easily be combined with all the traditional hashing methods. Extensive experimental results over various real large-scale datasets demonstrate the superiority of the proposed approach.
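    The probe cost quoted above, the sum of binomial coefficients C(c, i) for i up to the radius r_h, is easy to tabulate, and shows why even a small increase in r_h blows up the number of buckets to visit (the code length 32 below is an arbitrary choice for illustration):

    ```python
    from math import comb

    def hamming_ball_size(code_len: int, radius: int) -> int:
        """Number of binary codes within Hamming distance `radius` of a
        query code: sum_{i=0}^{radius} C(code_len, i), i.e. the buckets a
        hash lookup must probe when scanning out to that radius."""
        return sum(comb(code_len, i) for i in range(radius + 1))

    # For 32-bit codes the probe count explodes as the radius grows:
    for r in range(5):
        print(r, hamming_ball_size(32, r))   # 1, 33, 529, 5489, 41449
    ```

    Keeping r_h small via an auxiliary index, as the abstract describes, is therefore the decisive optimisation.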

  13. Interframe hierarchical vector quantization using hashing-based reorganized codebook

    Science.gov (United States)

    Choo, Chang Y.; Cheng, Che H.; Nasrabadi, Nasser M.

    1995-12-01

    Real-time multimedia communication over the PSTN (Public Switched Telephone Network) or a wireless channel requires video signals to be encoded at bit rates well below 64 kbit/s. Most current work on such very low bit rate video coding is based on the H.261 or H.263 scheme. The H.263 encoding scheme, for example, consists mainly of motion estimation and compensation, the discrete cosine transform, and run-length and variable/fixed-length coding. Vector quantization (VQ) is an efficient alternative scheme for coding at very low bit rates. One such VQ scheme applied to video coding is interframe hierarchical vector quantization (IHVQ). One problem of IHVQ, and of VQ in general, is the computational complexity of the codebook search. A number of techniques have been proposed to reduce the search time, including tree-structured VQ, finite-state VQ, cache VQ, and hashing-based codebook reorganization. In this paper, we present an IHVQ code with a hashing-based scheme to reorganize the codebook so that codebook search time, and thus encoding time, can be significantly reduced. We applied the algorithm in the same test environment as H.263 and evaluated its coding performance. The performance of the proposed scheme turned out to be significantly better than that of IHVQ without a hashed codebook. It was also comparable to, and often better than, that of H.263, due mainly to the hashing-based reorganized codebook.

  14. Security analysis of a one-way hash function based on spatiotemporal chaos

    Institute of Scientific and Technical Information of China (English)

    Wang Shi-Hong; Shan Peng-Yang

    2011-01-01

    The collision and statistical properties of a one-way hash function based on spatiotemporal chaos are investigated. Analysis and simulation results indicate that collisions exist in the original algorithm and that, therefore, the original algorithm is insecure and vulnerable. An improved algorithm is proposed to avoid the collisions.

  15. William Friedman, Geneticist Turned Cryptographer.

    Science.gov (United States)

    Goldman, Irwin L

    2017-05-01

    William Friedman (1891-1969), trained as a plant geneticist at Cornell University, was employed at Riverbank Laboratories by the eccentric millionaire George Fabyan to work on wheat breeding. Friedman, however, soon became intrigued by and started working on a pet project of Fabyan's involving the conjecture that Francis Bacon, a polymath known for the study of ciphers, was the real author of Shakespeare's plays. Thus, beginning in ∼1916, Friedman turned his attention to the so called "Baconian cipher," and developed decryption techniques that bore similarity to approaches for solving problems in population genetics. His most significant, indeed pathbreaking, work used ideas from genetics and statistics, focusing on analysis of the frequencies of letters in language use. Although he had transitioned from being a geneticist to a cryptographer, his earlier work had resonance in his later pursuits. He soon began working directly for the United States government and produced solutions used to solve complex military ciphers, in particular to break the Japanese Purple code during World War II. Another important legacy of his work was the establishment of the Signal Intelligence Service and eventually the National Security Agency. Copyright © 2017 by the Genetics Society of America.

  16. Algebraic Construction and Cryptographic Properties of Rijndael Substitution Box

    Directory of Open Access Journals (Sweden)

    Shristi Deva Sinha

    2012-01-01

    Full Text Available The Rijndael algorithm was selected as the Advanced Encryption Standard in 2001 after a five-year-long security evaluation; it is well proven in terms of its strength and efficiency. The substitution box is the backbone of the cipher and its strength lies in the simplicity of its algebraic construction. The present paper is a study of the construction of the Rijndael substitution box and the effect of varying the design components on its cryptographic properties. Defence Science Journal, 2012, 62(1), pp. 32-37. DOI: http://dx.doi.org/10.14429/dsj.62.1439
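    The algebraic construction referred to here, multiplicative inversion in GF(2^8) followed by an affine transformation over GF(2), can be reproduced in a few lines. The sketch below uses the standard AES parameters (reduction polynomial 0x11B, affine constant 0x63); it is meant as an illustration of the construction, not production code.

    ```python
    def gf_mul(a: int, b: int) -> int:
        """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
        return p

    def sbox(x: int) -> int:
        """Rijndael S-box entry: GF(2^8) inverse (0 maps to 0), then affine map."""
        inv = 1
        for _ in range(254):              # x^254 = x^(-1) in GF(2^8)*; 0 stays 0
            inv = gf_mul(inv, x)
        out = 0
        for i in range(8):                # b'_i = b_i ^ b_{i+4} ^ b_{i+5} ^ b_{i+6} ^ b_{i+7} ^ c_i
            bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8)) ^
                   (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
            out |= bit << i
        return out
    ```

    Varying the design components as the paper studies amounts to swapping the reduction polynomial, the affine matrix, or the constant, and observing the effect on nonlinearity and differential uniformity.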

  17. Handwriting: Feature Correlation Analysis for Biometric Hashes

    Directory of Open Access Journals (Sweden)

    Ralf Steinmetz

    2004-04-01

    Full Text Available In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different sizes. The method presented allows determination of the effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

  18. Fast image search with locality-sensitive hashing and homogeneous kernels map.

    Science.gov (United States)

    Li, Jun-yi; Li, Jian-hua

    2015-01-01

    We propose fast image search with efficient additive kernels and kernel locality-sensitive hashing. To retain the kernel functions, recent work has explored methods to create locality-sensitive hashing that guarantee linear query time; however, existing methods still do not overcome the limitations of the locality-sensitive hashing (LSH) algorithm and indirectly sacrifice search accuracy in order to allow fast queries. To improve search accuracy, we show how to apply the explicit feature maps of homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate.

  19. Fast Image Search with Locality-Sensitive Hashing and Homogeneous Kernels Map

    Directory of Open Access Journals (Sweden)

    Jun-yi Li

    2015-01-01

    Full Text Available We propose fast image search with efficient additive kernels and kernel locality-sensitive hashing. To retain the kernel functions, recent work has explored methods to create locality-sensitive hashing that guarantee linear query time; however, existing methods still do not overcome the limitations of the locality-sensitive hashing (LSH) algorithm and indirectly sacrifice search accuracy in order to allow fast queries. To improve search accuracy, we show how to apply the explicit feature maps of homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods, making object classification and content-based retrieval faster and more accurate.

  20. Locality-Sensitive Hashing for Chi2 distance.

    Science.gov (United States)

    Gorisse, David; Cord, Matthieu; Precioso, Frederic

    2012-02-01

    In the past 10 years, new powerful algorithms based on efficient data structures have been proposed to solve the problem of nearest neighbors search (or approximate nearest neighbors search). While the Euclidean Locality-Sensitive Hashing algorithm, which provides approximate nearest neighbors in a Euclidean space with sublinear complexity, is probably the most popular, the Euclidean metric does not always provide results as accurate and as relevant as similarity measures such as the Earth-Mover's Distance and the χ² distance. In this paper, we present a new LSH scheme adapted to the χ² distance for approximate nearest neighbors search in high-dimensional spaces. We define the specific hashing functions, prove their locality-sensitivity, and compare, through experiments, our method with the Euclidean Locality-Sensitive Hashing algorithm in the context of image retrieval on real image databases. The results prove the relevance of such a new LSH scheme, either providing far better accuracy than the Euclidean scheme in the context of image retrieval at an equivalent speed, or providing equivalent accuracy with a large gain in processing speed.
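    For contrast with the χ² scheme, the Euclidean (p-stable) LSH family it is compared against hashes a vector v to floor((a·v + b) / w), with a drawn from a Gaussian and b uniform in [0, w). A minimal sketch (the bucket width w and the dimensionality below are arbitrary choices of ours):

    ```python
    import random

    def make_e2lsh(dim: int, w: float = 4.0, seed: int = 0):
        """One hash function from the Gaussian p-stable LSH family:
        h(v) = floor((a.v + b) / w).  Points close in Euclidean distance
        collide with higher probability than distant ones."""
        rng = random.Random(seed)
        a = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # p-stable projection
        b = rng.uniform(0.0, w)                         # random offset
        def h(v):
            proj = sum(ai * vi for ai, vi in zip(a, v))
            return int((proj + b) // w)                 # bucket index
        return h

    h = make_e2lsh(dim=3)
    bucket = h([1.0, 2.0, 3.0])
    ```

    The χ² variant of the paper keeps this bucketing idea but replaces the projection with hash functions proven sensitive to the χ² distance.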

  1. An update on the side channel cryptanalysis of MACs based on cryptographic hash functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2007-01-01

    into consideration. Next, we propose new hybrid NMAC/HMAC schemes for security against side channel attacks assuming that their underlying block cipher is ideal. We then show that M-NMAC, MDx-MAC and a variant of the envelope MAC scheme based on DM with an ideal block cipher are secure against DPA attacks....

  2. Compressing Neural Networks with the Hashing Trick

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to ...

  3. Type-Based Automated Verification of Authenticity in Asymmetric Cryptographic Protocols

    DEFF Research Database (Denmark)

    Dahl, Morten; Kobayashi, Naoki; Sun, Yunde

    2011-01-01

    Gordon and Jeffrey developed a type system for verification of asymmetric and symmetric cryptographic protocols. We propose a modified version of Gordon and Jeffrey's type system and develop a type inference algorithm for it, so that protocols can be verified automatically as they are, without any...

  4. Hash function based secret sharing scheme designs

    CERN Document Server

    Chum, Chi Sing

    2011-01-01

    Secret sharing schemes create an effective method to safeguard a secret by dividing it among several participants. By using hash functions and the herding hashes technique, we first set up a (t+1, n) threshold scheme which is perfect and ideal, and then extend it to schemes for any general access structure. The schemes can be further set up as proactive or verifiable if necessary. The setup and recovery of the secret is efficient due to the fast calculation of the hash function. The proposed scheme is flexible because of the use of existing hash functions.

  5. On the Insertion Time of Cuckoo Hashing

    CERN Document Server

    Fountoulakis, Nikolaos; Steger, Angelika

    2010-01-01

    Cuckoo hashing is an efficient technique for creating large hash tables with high space utilization and guaranteed constant access times. There, each item can be placed in a location given by any one of k different hash functions. In this paper we further investigate the random walk heuristic for inserting new items into the hash table in an online fashion. Provided that k > 2 and that the number of items in the table is below (but arbitrarily close to) the theoretically achievable load threshold, we show a polylogarithmic bound on the maximum insertion time that holds with high probability.
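    The random-walk insertion heuristic analysed here is simple to state: if all k candidate slots for an item are occupied, evict a uniformly random occupant among them and continue inserting the evicted item the same way. A sketch of the idea, where the hash functions, table size, and step bound are illustrative choices of ours:

    ```python
    import random

    def cuckoo_insert(table, item, hash_fns, rng, max_steps=500):
        """Random-walk cuckoo insertion with k = len(hash_fns) choices per
        item: place the item in any free candidate slot, otherwise kick out
        a random occupant and carry on with the evicted item."""
        for _ in range(max_steps):
            slots = [h(item) for h in hash_fns]
            for s in slots:
                if table[s] is None:
                    table[s] = item
                    return True
            s = rng.choice(slots)              # the "random walk" step
            table[s], item = item, table[s]    # evict and continue
        return False                           # a real table would now rehash

    # Demo: k = 3 toy hash functions over a table of 31 slots.
    hash_fns = [lambda x, a=a: (a * x + 7) % 31 for a in (3, 5, 11)]
    table = [None] * 31
    rng = random.Random(1)
    ok = sum(cuckoo_insert(table, x, hash_fns, rng) for x in range(12))
    ```

    The paper's result is that, below the load threshold, the number of eviction steps such a walk needs is polylogarithmic with high probability.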

  6. Biometric hashing for handwriting: entropy-based feature selection and semantic fusion

    Science.gov (United States)

    Scheidat, Tobias; Vielhauer, Claus

    2008-02-01

    Some biometric algorithms suffer from the problem of using a great number of features extracted from the raw data. This often results in feature vectors of high dimensionality and thus high computational complexity. However, in many cases subsets of features contribute little or nothing to the correct classification of biometric algorithms. The process of choosing more discriminative features from a given set is commonly referred to as feature selection. In this paper we present a study on feature selection for an existing biometric hash generation algorithm for the handwriting modality, based on the strategy of entropy analysis of single components of biometric hash vectors, in order to identify and suppress elements carrying little information. To evaluate the impact of our feature selection scheme on the authentication performance of our biometric algorithm, we present an experimental study based on data of 86 users. Besides discussing common biometric error rates such as equal error rates, we suggest a novel measurement to determine the reproduction rate probability for biometric hashes. Our experiments show that, while the feature set size may be significantly reduced by 45% using our scheme, there are only marginal changes both in the results of the verification process and in the reproducibility of biometric hashes. Since multi-biometrics is a recent topic, we additionally carry out a first study on pairwise multi-semantic fusion based on reduced hashes and analyze it with the introduced reproducibility measure.
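    The entropy analysis of single hash components can be sketched directly: estimate the Shannon entropy of each vector position over the user population and flag positions whose entropy is (near) zero for suppression. The tiny data set below is made up purely for illustration.

    ```python
    import math
    from collections import Counter

    def hash_entropy(hash_vectors):
        """Per-component Shannon entropy (bits) of a set of hash vectors:
        components with low entropy across users carry little identifying
        information and are candidates for removal."""
        n_comp = len(hash_vectors[0])
        total = len(hash_vectors)
        ents = []
        for j in range(n_comp):
            counts = Counter(v[j] for v in hash_vectors)
            ents.append(-sum((c / total) * math.log2(c / total)
                             for c in counts.values()))
        return ents

    # A constant component (entropy 0) would be suppressed, a balanced one kept:
    vectors = [(0, 1), (0, 0), (0, 1), (0, 0)]
    ent = hash_entropy(vectors)   # [0.0, 1.0]
    ```

    The paper's scheme additionally weighs this against the intrapersonal scatter, so that a component is kept only if it is both stable per user and informative across users.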

  7. Improved locality-sensitive hashing method for the approximate nearest neighbor problem

    Science.gov (United States)

    Lu, Ying-Hua; Ma, Ting-Huai; Zhong, Shui-Ming; Cao, Jie; Wang, Xin; Abdullah, Al-Dhelaan

    2014-08-01

    In recent years, the nearest neighbor search (NNS) problem has been widely used in various interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for the approximate nearest neighbor problem, is proven to be an efficient method for solving the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces a novel improved algorithm called randomness-based locality-sensitive hashing (RLSH). Our proposed algorithm modifies the query strategy so that it randomly selects a single hash table into which to project the query point, instead of mapping the query point into all hash tables during the nearest neighbor query, and reconstructs the candidate points for finding the nearest neighbors. This improved strategy ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall. Moreover, this strategy is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments are executed on a synthetic dataset and an open dataset. The results show that our method requires less time and space than p-stable LSH while achieving the same recall.
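    The query-side modification is small: instead of probing all L tables with their concatenated hashes g_t, pick one table at random and probe only it. A minimal sketch, where the parameters and the data layout are our assumptions rather than the paper's code:

    ```python
    import random

    def make_g(dim, k, w, rng):
        """A concatenated hash g = (h_1, ..., h_k) from the p-stable family."""
        params = [([rng.gauss(0.0, 1.0) for _ in range(dim)], rng.uniform(0.0, w))
                  for _ in range(k)]
        def g(v):
            return tuple(int((sum(a_i * v_i for a_i, v_i in zip(a, v)) + b) // w)
                         for a, b in params)
        return g

    def build_index(points, L=6, k=4, w=4.0, seed=0):
        """Index every point in all L tables, keyed by its concatenated hash."""
        rng = random.Random(seed)
        dim = len(points[0])
        gs = [make_g(dim, k, w, rng) for _ in range(L)]
        tables = [{} for _ in range(L)]
        for idx, p in enumerate(points):
            for t in range(L):
                tables[t].setdefault(gs[t](p), []).append(idx)
        return gs, tables

    def rlsh_query(gs, tables, q, rng):
        """RLSH-style lookup: probe ONE randomly chosen table, not all L."""
        t = rng.randrange(len(tables))
        return tables[t].get(gs[t](q), [])
    ```

    Probing one table does 1/L of the lookup work per query; the paper's claim is that, with the candidate reconstruction step, recall stays comparable to probing all tables.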

  8. Derandomization, Hashing and Expanders

    DEFF Research Database (Denmark)

    Ruzic, Milan

    that independent and unbiased random bits are accessible. However, truly random bits are scarce in reality. In practice, pseudorandom generators are used in place of random numbers; usually, even the seed of the generator does not come from a source of true randomness. While things mostly work well in practice...... of work in this direction. There has been a lot of work in designing general tools for simulating randomness and making deterministic versions of randomized algorithms, with some loss in time and space performance. These methods are not tied to particular algorithms, but work on large classes of problems....... The central question in this area of computational complexity is "P = BPP?". Instead of derandomizing whole complexity classes, one may work on derandomizing concrete problems. This approach trades generality for the possibility of having much better performance bounds. There are a few common techniques

  9. Quantum walk public-key cryptographic system

    Science.gov (United States)

    Vlachou, C.; Rodrigues, J.; Mateus, P.; Paunković, N.; Souto, A.

    2015-12-01

    Quantum Cryptography is a rapidly developing field of research that benefits from the properties of Quantum Mechanics in performing cryptographic tasks. Quantum walks are a powerful model for quantum computation and very promising for quantum information processing. In this paper, we present a quantum public-key cryptographic system based on quantum walks. In particular, in the proposed protocol the public-key is given by a quantum state generated by performing a quantum walk. We show that the protocol is secure and analyze the complexity of public key generation and encryption/decryption procedures.

  10. Chaotic cryptographic scheme and its randomness evaluation

    Science.gov (United States)

    Stoyanov, B. P.

    2012-10-01

    We propose a new cryptographic scheme based on the Lorenz chaos attractor and a 32-bit bent Boolean function. We evaluated the keystream generated by the scheme with the batteries of the NIST statistical tests. We also applied a number of statistical analysis techniques, such as calculating histograms, correlations between two adjacent pixels, information entropy, and differential resistance, all applied to images encrypted by the proposed system. The results of the analysis show that the new cryptographic scheme ensures a secure way of sending digital data, with potential applications in real-time image encryption.

  11. R-Hash: Hash Function Using Random Quadratic Polynomials Over GF(2)

    OpenAIRE

    Dhananjoy Dey; Noopur Shrotriya; Indranath Sengupta

    2013-01-01

    In this paper we describe an improved version of HF-hash [7], viz. R-hash: Hash Function Using Random Quadratic Polynomials Over GF(2). The compression function of HF-hash consists of 32 polynomials with 64 variables over GF(2), which were taken from the first 32 polynomials of HFE challenge-1 by forcing the last 16 variables to 0. The mode of operation used in computing HF-hash was Merkle-Damgard. We have randomly selected 32 quadratic non-homogeneous polynomials having 64 variables over GF(2) in case o...
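    As a hedged illustration of the ingredient only (not the R-hash construction itself; the monomial counts and seed are arbitrary assumptions), evaluating 32 random non-homogeneous quadratic polynomials in 64 variables over GF(2) as a 64-bit-to-32-bit compression step might look like:

```python
import random

random.seed(42)
N_VARS, N_POLYS = 64, 32

def random_quadratic():
    # a random non-homogeneous quadratic over GF(2): some x_i*x_j monomials,
    # some linear x_i terms, and a constant bit (counts chosen arbitrarily)
    quad = [(random.randrange(N_VARS), random.randrange(N_VARS)) for _ in range(20)]
    lin = random.sample(range(N_VARS), 10)
    return quad, lin, random.randrange(2)

polys = [random_quadratic() for _ in range(N_POLYS)]

def compress(bits):
    # bits: a 64-bit input block; output: 32 bits, one per polynomial,
    # where addition and multiplication over GF(2) are XOR and AND
    out = []
    for quad, lin, c in polys:
        v = c
        for i, j in quad:
            v ^= bits[i] & bits[j]
        for i in lin:
            v ^= bits[i]
        out.append(v)
    return out

block = [random.randrange(2) for _ in range(N_VARS)]
digest = compress(block)
print(len(digest))   # → 32
```

    A full hash would iterate such a compression function over message blocks in a Merkle-Damgard chain, which is the mode the abstract mentions.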

  12. Spectral Multimodal Hashing and Its Application to Multimedia Retrieval.

    Science.gov (United States)

    Zhen, Yi; Gao, Yue; Yeung, Dit-Yan; Zha, Hongyuan; Li, Xuelong

    2016-01-01

    In recent years, multimedia retrieval has sparked much research interest in the multimedia, pattern recognition, and data mining communities. Although some attempts have been made along this direction, performing fast multimodal search at very large scale still remains a major challenge in the area. While hashing-based methods have recently achieved promising successes in speeding up large-scale similarity search, most existing methods are only designed for uni-modal data, making them unsuitable for multimodal multimedia retrieval. In this paper, we propose a new hashing-based method for fast multimodal multimedia retrieval. The method is based on spectral analysis of the correlation matrix of different modalities. We also develop an efficient algorithm that learns some parameters from the data distribution for obtaining the binary codes. We empirically compare our method with some state-of-the-art methods on two real-world multimedia data sets.

  13. Perceptual image hashing via feature points: performance evaluation and tradeoffs.

    Science.gov (United States)

    Monga, Vishal; Evans, Brian L

    2006-11-01

    We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.

  14. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant-centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image-processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in classification performance, balancing perceptual robustness and discrimination.

  15. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX......-hash-then-sign signature scheme, one has to solve a cryptanalytical task which is related to finding second preimages for the hash function. In this article, we will show how to use Dean's method of finding expandable messages for finding a second preimage in the Merkle-Damgård hash function to existentially forge...... a signature scheme based on a t-bit RMX-hash function which uses the Davies-Meyer compression functions (e.g., MD4, MD5, SHA family) in 2^{t/2} chosen messages plus 2^{t/2+1} off-line operations of the compression function and a similar amount of memory. This forgery attack also works on the signature...
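    A minimal sketch of the randomized-hashing idea that RMX embodies (this is NOT the exact RMX/SP 800-106 transform; the salting layout and the SHA-256 choice here are assumptions for illustration):

```python
import hashlib, os

def randomized_digest(message: bytes, r: bytes) -> bytes:
    # Simplified randomized hashing in the spirit of RMX: mix fresh per-signature
    # randomness r into the hashed input, so collisions cannot be precomputed
    # offline before the signer commits to r.
    return hashlib.sha256(r + message + r).digest()

def sign(message: bytes):
    r = os.urandom(16)                         # fresh randomness per signature
    return r, randomized_digest(message, r)    # stand-in for a real signature op

def verify(message: bytes, r: bytes, tag: bytes) -> bool:
    return randomized_digest(message, r) == tag

r, tag = sign(b"hash-then-sign me")
print(verify(b"hash-then-sign me", r, tag))   # → True
print(verify(b"tampered message", r, tag))    # → False
```

    The signer transmits (r, signature) alongside the message; the attack discussed above shows that such front-ends must still be analyzed against second-preimage-style forgeries.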

  16. Scalable partitioning and exploration of chemical spaces using geometric hashing.

    Science.gov (United States)

    Dutta, Debojyoti; Guha, Rajarshi; Jurs, Peter C; Chen, Ting

    2006-01-01

    Virtual screening (VS) has become a preferred tool to augment high-throughput screening(1) and determine new leads in the drug discovery process. The core of a VS informatics pipeline includes several data mining algorithms that work on huge databases of chemical compounds containing millions of molecular structures and their associated data. Thus, scaling traditional applications such as classification, partitioning, and outlier detection for huge chemical data sets without a significant loss in accuracy is very important. In this paper, we introduce a data mining framework built on top of a recently developed fast approximate nearest-neighbor-finding algorithm(2) called locality-sensitive hashing (LSH) that can be used to mine huge chemical spaces in a scalable fashion using very modest computational resources. The core LSH algorithm hashes chemical descriptors so that points close to each other in the descriptor space are also close to each other in the hashed space. Using this data structure, one can perform approximate nearest-neighbor searches very quickly, in sublinear time. We validate the accuracy and performance of our framework on three real data sets of sizes ranging from 4337 to 249 071 molecules. Results indicate that the identification of nearest neighbors using the LSH algorithm is at least 2 orders of magnitude faster than the traditional k-nearest-neighbor method and is over 94% accurate for most query parameters. Furthermore, when viewed as a data-partitioning procedure, the LSH algorithm lends itself to easy parallelization of nearest-neighbor classification or regression. We also apply our framework to detect outlying (diverse) compounds in a given chemical space; this algorithm is extremely rapid in determining whether a compound is located in a sparse region of chemical space or not, and it is quite accurate when compared to results obtained using principal-component-analysis-based heuristics.

  17. Feature Hashing for Large Scale Multitask Learning

    CERN Document Server

    Weinberger, Kilian; Attenberg, Josh; Langford, John; Smola, Alex

    2009-01-01

    Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
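    The hashing trick the abstract refers to can be sketched in a few lines (a hedged illustration, not the paper's implementation; the MD5-derived index/sign split and the 16-dimension target are assumptions):

```python
import hashlib

def feature_hash(tokens, dim=16):
    # the hashing trick: map each token to an index in [0, dim) and a sign,
    # both derived from a hash of the token, then accumulate counts
    vec = [0.0] * dim
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        idx = h % dim
        sign = 1.0 if (h >> 64) & 1 else -1.0  # signed hashing keeps inner products unbiased
        vec[idx] += sign
    return vec

v1 = feature_hash("the quick brown fox".split())
v2 = feature_hash("the quick brown fox".split())
print(v1 == v2)   # → True: same text, same vector, with no dictionary kept in memory
```

    Because the mapping is a fixed hash, hundreds of thousands of tasks can share one low-dimensional parameter space without ever materializing a vocabulary, which is what makes the multitask setting feasible.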

  18. Hash3: Proofs, Analysis and Implementation

    DEFF Research Database (Denmark)

    Gauravaram, Praveen

    2009-01-01

    This report outlines the talks presented at the winter school on Hash3: Proofs, Analysis, and Implementation, an ECRYPT II Event on Hash Functions. In general, speakers may not write down everything they say on their slides, so this report also outlines such findings, following the understanding...

  19. A Verifiable Language for Cryptographic Protocols

    DEFF Research Database (Denmark)

    Nielsen, Christoffer Rosenkilde

    We develop a formal language for specifying cryptographic protocols in a structured and clear manner, which allows verification of many interesting properties; in particular confidentiality and integrity. The study sheds new light on the problem of creating intuitive and human readable languages...

  20. On the construction of cryptographically strong Boolean functions with desirable trade-off

    Institute of Scientific and Technical Information of China (English)

    REN Kui; PARK Jaemin; KIM Kwangjo

    2005-01-01

    This paper proposes a practical algorithm for systematically generating strong Boolean functions (f: GF(2)^n → GF(2)) with cryptographic meaning. This algorithm takes a bent function as input and directly outputs the resulting Boolean function in terms of its truth-table sequence. The algorithm was used to develop two classes of balanced Boolean functions, one of which has very good cryptographic properties: nl(f) = 2^{2k-1} - 2^k + 2^{k-2} (n = 2k), with the sum-of-squares avalanche characteristic of f satisfying σ_f = 2^{4k} + 2^{3k+2} + 2^{3k} + 2^{3k-2} and the absolute avalanche characteristic Δ_f satisfying Δ_f = 2^{k+1}. This is the best result up to now compared to existing ones. Instead of bent sequences, starting from random Boolean functions was also tested in the algorithm. Experimental results showed that starting from bent sequences is highly superior to starting from random Boolean functions.
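    To make the nonlinearity notation nl(f) concrete, here is a hedged toy computation using the standard Walsh-spectrum definition (this is not the paper's generation algorithm; the example bent function f = x1·x2 XOR x3·x4 for n = 4 is a textbook choice):

```python
def walsh_spectrum(truth_table, n):
    # W_f(w) = sum over x of (-1)^(f(x) XOR w.x), with w.x the GF(2) inner product
    spec = []
    for w in range(2 ** n):
        s = 0
        for x in range(2 ** n):
            dot = bin(w & x).count("1") & 1
            s += (-1) ** (truth_table[x] ^ dot)
        spec.append(s)
    return spec

def nonlinearity(truth_table, n):
    # nl(f) = 2^(n-1) - max_w |W_f(w)| / 2
    return 2 ** (n - 1) - max(abs(v) for v in walsh_spectrum(truth_table, n)) // 2

# the bent function f(x1..x4) = x1*x2 XOR x3*x4 (n = 4, i.e. k = 2)
n = 4
tt = [((x >> 3) & (x >> 2) ^ (x >> 1) & x) & 1 for x in range(2 ** n)]
print(nonlinearity(tt, n))  # → 6, the bent bound 2^(n-1) - 2^(n/2 - 1) for n = 4
```

    Bent functions achieve the maximum possible nonlinearity but are unbalanced, which is why constructions like the paper's trade a little nonlinearity for balancedness.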

  1. Constructing a one-way hash function based on the unified chaotic system

    Institute of Scientific and Technical Information of China (English)

    Long Min; Peng Fei; Chen Guan-Rong

    2008-01-01

    A new one-way hash function based on the unified chaotic system is constructed. With different values of a key parameter, the unified chaotic system represents different chaotic systems, based on which the one-way hash function algorithm is constructed with three round operations and an initial vector on an input message. In each round operation, the parameters are processed by three different chaotic systems generated from the unified chaotic system. Feed-forwards are used at the end of each round operation and at the end of each element of the message processing. Meanwhile, in each round operation, parameter-exchanging operations are implemented. Then, the hash value of length 160 bits is obtained from the last six parameters. Simulation and analysis both demonstrate that the algorithm has great flexibility, satisfactory hash performance, weak collision property, and high security.

  2. Symmetric cryptographic protocols for extended millionaires' problem

    Institute of Scientific and Technical Information of China (English)

    LI ShunDong; WANG DaoShun; DAI YiQi

    2009-01-01

    Yao's millionaires' problem is a fundamental problem in secure multiparty computation, and its solutions have become building blocks of many secure multiparty computation solutions. Unfortunately, most protocols for the millionaires' problem are constructed based on public-key cryptography, and thus are inefficient. Furthermore, all protocols are designed to solve the basic millionaires' problem, that is, to privately determine which of two natural numbers is greater. If the numbers are real, existing solutions do not directly work. These features limit the extensive application of the existing protocols. This study introduces and refines the first symmetric cryptographic protocol for the basic millionaires' problem, then extends the symmetric cryptographic protocol to privately determining which of two real numbers is greater, called the extended millionaires' problem, and proposes corresponding protocols. Constructed from symmetric cryptography, these protocols are very efficient.

  3. Adaptive Steganographic Algorithm using Cryptographic Encryption RSA Algorithms

    OpenAIRE

    Sharma, Manoj Kumar; Upadhyaya, Dr. Amit; Agarwal, Shalini

    2013-01-01

    Cryptography is the art of securing information by applying encryption and decryption to transmitted data, ensuring that the secret can be understood only by the right person [1]. Steganography is the process of sharing information in an undetectable way by making sure that nobody else can even detect the presence of a secret. If these two methods could be combined, it would provide fool-proof security for information being communicated over a network. This paper proposes two different ste...

  4. Algebra model and security analysis for cryptographic protocols

    Institute of Scientific and Technical Information of China (English)

    HUAI Jinpeng; LI Xianxian

    2004-01-01

    More and more cryptographic protocols have been used to achieve various security requirements of distributed systems in the open network environment. However, cryptographic protocols are very difficult to design and analyze due to the complexity of cryptographic protocol execution, and a large number of problems remain unsolved, ranging from the theoretical framework to concrete analysis techniques. In this paper, we build a new algebra called cryptographic protocol algebra (CPA) for describing message operations with many cryptographic primitives, and propose a new algebra model for cryptographic protocols based on the CPA. In the model, the expanding processes of a participant's knowledge over the protocol runs are characterized with algebraic notions such as subalgebra, free generator, and polynomial algebra, and attack processes are modeled with a new notion similar to that of the exact sequence used in homological algebra. We then develop a mathematical approach to cryptographic protocol security analysis. Using algebraic techniques, we show that for cryptographic protocols with certain symmetric properties, the execution space generated by an arbitrary number of participants may boil down to a smaller space generated by several honest participants and attackers. Furthermore, we discuss the composability problem of cryptographic protocols, give a sufficient condition under which the protocol composed of two correct cryptographic protocols is still correct, and finally offer a counterexample showing that the statement may not hold when the condition is not met.

  5. MapReduce Parallel Cuckoo Hashing and Oblivious RAM Simulations

    CERN Document Server

    Goodrich, Michael T

    2010-01-01

    We present an efficient algorithm for performing cuckoo hashing in the MapReduce parallel model of computation, and we show how this result in turn leads to improved methods for performing data-oblivious RAM simulations. Our contributions involve a number of seemingly unrelated new results, including: a parallel MapReduce cuckoo hashing algorithm that runs in O(log n) time and uses O(n) total work, with very high probability; a reduction of data-oblivious simulation of sparse-streaming MapReduce algorithms to oblivious sorting; an external-memory data-oblivious sorting algorithm using O((N/B) log^2_{M/B} (N/B)) I/Os; constant-memory data-oblivious RAM simulation with O(log^2 n) amortized time overhead, with very high probability, or with expected O(log^2 n) amortized time overhead and better constant factors; and sublinear-memory data-oblivious RAM simulation with O(n^ν) private memory and O(log n) amortized time overhead, with very high probability, for constant ν > 0. This last result is, in fact, the main result o...
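    For reference, the sequential procedure that the paper parallelizes can be sketched as follows (a hedged textbook cuckoo hash table, not the MapReduce algorithm; the table size, hash salts, and eviction limit are illustrative assumptions):

```python
import random
random.seed(1)

class Cuckoo:
    # textbook sequential cuckoo hashing: two tables, two hash functions,
    # and an eviction chain on insertion; lookups probe at most two cells
    def __init__(self, size=128):
        self.size = size
        self.t1, self.t2 = [None] * size, [None] * size
        self.s1, self.s2 = random.random(), random.random()  # hash salts

    def h1(self, k): return hash((k, self.s1)) % self.size
    def h2(self, k): return hash((k, self.s2)) % self.size

    def insert(self, key, max_kicks=64):
        for _ in range(max_kicks):
            i = self.h1(key)
            key, self.t1[i] = self.t1[i], key   # place key, pick up the evictee
            if key is None:
                return True
            j = self.h2(key)
            key, self.t2[j] = self.t2[j], key   # evictee tries its table-2 slot
            if key is None:
                return True
        return False  # cycle: a full implementation would choose new hashes and rehash

    def lookup(self, k):
        return self.t1[self.h1(k)] == k or self.t2[self.h2(k)] == k

c = Cuckoo()
ok = all(c.insert(k) for k in range(1, 33))
print(ok, sum(v is not None for v in c.t1 + c.t2))
```

    The worst-case two-probe lookup is what makes cuckoo hashing attractive for oblivious simulations: the memory access pattern of a query is short and predictable.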

  6. PACE: Proactively Secure Accumulo with Cryptographic Enforcement

    Science.gov (United States)

    2017-05-27

    modify data using digital signatures. The contributions of our work include cryptographic enforcement of access control: the PACE library allows...the impact of encryption and signatures on operation throughput. Over the last several years, many companies have moved their...second). This evaluation demonstrates that while encryption and signatures have an impact on throughput, the impact is small enough to be

  7. SMART AS A CRYPTOGRAPHIC PROCESSOR

    Directory of Open Access Journals (Sweden)

    Saroja Kanchi

    2016-05-01

    Full Text Available SMaRT is a 16-bit 2.5-address RISC-type single-cycle processor, which was recently designed and successfully mapped into an FPGA chip in our ECE department. In this paper, we use SMaRT to run the well-known encryption algorithm, the Data Encryption Standard. For information-security purposes, encryption is a must in today's sophisticated and ever-increasing computer communications such as ATM machines and SIM cards. For comparison and evaluation purposes, we also map the same algorithm onto the HC12, a same-size but CISC-type off-the-shelf microcontroller. Our results show that compared to the HC12, SMaRT code is only 14% longer in terms of the static number of instructions but about 10 times faster in terms of the number of clock cycles, and 7% smaller in terms of code size. Our results also show that 2.5-address instructions, a SMaRT selling point, amount to 45% of all R-type instructions, resulting in a significant improvement in the static number of instructions, and hence in code size as well as performance. Additionally, we see that the SMaRT short-branch range is sufficiently wide in 90% of cases in the SMaRT code. Our results also reveal that SMaRT's novel concept of locality of reference in using the MSBs of the registers in non-subroutine branch instructions stays valid, with a remarkable hit rate of 95%!

  8. TH*: Scalable Distributed Trie Hashing

    Directory of Open Access Journals (Sweden)

    Aridj Mohamed

    2010-11-01

    Full Text Available In today's world of computers, dealing with huge amounts of data is not unusual. The need to distribute this data in order to increase its availability and increase the performance of accessing it is more urgent than ever. For these reasons it is necessary to develop scalable distributed data structures. In this paper we propose TH*, a distributed variant of the Trie Hashing data structure. First we propose Thsw, a new version of TH without the Nil node in the digital tree (trie); then this version is adapted to a multicomputer environment. The simulation results reveal that TH* is scalable in the sense that it grows gracefully, one bucket at a time, to a large number of servers; TH* also offers good storage-space utilization and high query efficiency, especially for ordering operations.

  9. A hashing technique using separate binary tree

    Directory of Open Access Journals (Sweden)

    Md Mehedi Masud

    2006-11-01

    Full Text Available It is always a major demand to provide efficient retrieval and storage of data and information in a large database system. For this purpose, many file-organization techniques have already been developed, and much additional research is still going on. Hashing is one such technique. In this paper we propose an enhanced hashing technique that uses a hash table combined with a binary tree, searching on the binary representation of a portion of the primary key of records that is associated with each index of the hash table. The paper contains numerous examples to describe the technique. The technique shows significant improvements in searching, insertion, and deletion for systems with huge amounts of data. The paper also presents a mathematical analysis of the proposed technique and comparative results.
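    A hedged sketch of the general idea (not the paper's exact scheme, which keys the trees on a binary portion of the primary key; here each bucket simply holds a binary search tree over the full key):

```python
class Node:
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

class HashBST:
    # hash table whose collision buckets are binary search trees, so a bucket
    # with m records costs O(log m) on average instead of O(m) for a list
    def __init__(self, nbuckets=8):
        self.buckets = [None] * nbuckets
        self.n = nbuckets

    def _insert(self, node, key, value):
        if node is None:
            return Node(key, value)
        if key < node.key:
            node.left = self._insert(node.left, key, value)
        elif key > node.key:
            node.right = self._insert(node.right, key, value)
        else:
            node.value = value      # overwrite on duplicate key
        return node

    def put(self, key, value):
        i = hash(key) % self.n
        self.buckets[i] = self._insert(self.buckets[i], key, value)

    def get(self, key):
        node = self.buckets[hash(key) % self.n]
        while node is not None:
            if key == node.key:
                return node.value
            node = node.left if key < node.key else node.right
        return None

t = HashBST()
for k in range(100):
    t.put(k, k * k)
print(t.get(7), t.get(1000))   # → 49 None
```

    The hash index narrows the search to one bucket, and the tree then resolves collisions by key comparison, which is the combination the abstract describes.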

  10. On Randomizing Hash Functions to Strengthen the Security of Digital Signatures

    DEFF Research Database (Denmark)

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX...... that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack....

  11. SD-REE: A Cryptographic Method to Exclude Repetition from a Message

    CERN Document Server

    Dey, Somdip

    2012-01-01

    In this paper, the author presents a new cryptographic technique, SD-REE, to exclude the repetitive terms in a message when it is to be encrypted, so that it becomes almost impossible for a person to retrieve or predict the original message from the encrypted message. In the modern world, cryptography hackers try to break a code or cryptographic algorithm [1,2], or retrieve the key used for encryption, by inserting repetitive bytes/characters in the message before it is encrypted, or by analyzing repetitions in the encrypted message, to find out the encryption algorithm or retrieve the key used for the encryption. In the SD-REE method, however, the repetitive bytes/characters are removed and there is no trace of any repetition in the message that was encrypted.

  12. CRYPTOGRAPHIC PROTOCOLS SPECIFICATION AND VERIFICATION TOOLS - A SURVEY

    Directory of Open Access Journals (Sweden)

    Amol H Shinde

    2017-06-01

    Full Text Available Cryptographic protocols cannot guarantee secure operation merely by using state-of-the-art cryptographic mechanisms. Validation of such protocols is done using formal methods. Various specialized tools have been developed for this purpose and are being used to validate real-life cryptographic protocols. These tools give feedback to the designers of protocols in terms of loopholes and attacks in protocols, so that security can be improved. In this paper, we discuss the brief history of formal methods and the tools that are useful for the formal verification of cryptographic protocols.

  13. Instance-Aware Hashing for Multi-Label Image Retrieval.

    Science.gov (United States)

    Lai, Hanjiang; Yan, Pan; Shu, Xiangbo; Wei, Yunchao; Yan, Shuicheng

    2016-06-01

    Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods.

  14. Comparison Based Analysis of Different Cryptographic and Encryption Techniques Using Message Authentication Code (MAC) in Wireless Sensor Networks (WSN)

    CERN Document Server

    Rehman, Sadaqat Ur; Ahmad, Basharat; Yahya, Khawaja Muhammad; Ullah, Anees; Rehman, Obaid Ur

    2012-01-01

    Wireless Sensor Networks (WSN) are becoming popular day by day; however, one of the main issues in WSN is their limited resources. When creating a Message Authentication Code (MAC), these resources must be considered, keeping in mind the feasibility of the technique used for the sensor network at hand. This research work investigates different cryptographic techniques, such as symmetric-key cryptography and asymmetric-key cryptography. Furthermore, it compares different encryption techniques, such as stream ciphers (RC4), block ciphers (RC2, RC5, RC6, etc.) and hashing techniques (MD2, MD4, MD5, SHA, SHA1, etc.). The result of our work provides efficient techniques for the communicating devices, selected via different comparison metrics, i.e. energy consumption, processing time, memory, and expense, that satisfy both the security and the restricted resources of a WSN environment when creating a MAC.
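    The kind of MAC-cost comparison the paper performs can be sketched with the standard library (a hedged illustration, not the paper's benchmark; the key, message, and timing method are assumptions, and real sensor-node numbers would differ greatly from desktop timings):

```python
import hmac, hashlib, time

key, msg = b"sensor-node-key", b"temperature=21.5C" * 64

# compare MAC computation cost across hash primitives, the kind of
# time/energy trade-off measured when choosing a MAC for a sensor node
for algo in ("md5", "sha1", "sha256"):
    t0 = time.perf_counter()
    tag_hex = hmac.new(key, msg, getattr(hashlib, algo)).hexdigest()
    dt = time.perf_counter() - t0
    print(f"{algo:7s} {len(tag_hex) * 4:3d}-bit tag in {dt * 1e6:.0f} us")

# verification must use a constant-time comparison to avoid timing leaks
tag = hmac.new(key, msg, hashlib.sha256).digest()
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))  # → True
```

    Shorter digests (MD5, SHA1) cost less to compute and transmit but are cryptographically weaker, which is exactly the security-versus-resources tension the abstract highlights.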

  15. Comparison Based Analysis of Different Cryptographic and Encryption Techniques Using Message Authentication Code (MAC in Wireless Sensor Networks (WSN

    Directory of Open Access Journals (Sweden)

    Sadaqat Ur Rehman

    2012-01-01

    Full Text Available Wireless Sensor Networks (WSN) are becoming popular day by day; however, one of the main issues in WSN is their limited resources. When creating a Message Authentication Code (MAC), these resources must be considered and a technique chosen that is feasible for sensor networks. This research work investigates different cryptographic techniques, such as symmetric-key cryptography and asymmetric-key cryptography; furthermore, it compares different encryption techniques, such as stream ciphers (RC4), block ciphers (RC2, RC5, RC6, etc.) and hashing techniques (MD2, MD4, MD5, SHA, SHA1, etc.). The result of our work provides efficient techniques for the communicator, selected via different comparison metrics, i.e. energy consumption, processing time, memory, and expense, that satisfy both the security and the restricted resources of a WSN environment when creating a MAC.

  16. Multiple structural alignment and core detection by geometric hashing.

    Science.gov (United States)

    Leibowitz, N; Fligelman, Z Y; Nussinov, R; Wolfson, H J

    1999-01-01

    A multiple structural alignment algorithm is presented. The algorithm accepts an ensemble of protein structures and finds the largest substructure (core) of C-alpha atoms whose geometric configuration appears in all the molecules of the ensemble. Both the detection of this core and the resulting structural alignment are done simultaneously. Other sufficiently large multistructural superimpositions are detected as well. Our method is based on the Geometric Hashing paradigm and a superimposition clustering technique which represents superimpositions by sets of matching atoms. The algorithm proved to be efficient on real data in a series of experiments. The same method can be applied to any ensemble of molecules (not necessarily proteins), since our basic technique is sequence-order independent.

  17. b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning and Using GPUs for Fast Preprocessing with Simple Hash Functions

    CERN Document Server

    Li, Ping; Konig, Arnd Christian

    2012-01-01

    In this paper, we study several critical issues which must be tackled before one can apply b-bit minwise hashing to the volumes of data often used in industrial applications, especially in the context of search. 1. (b-bit) Minwise hashing requires an expensive preprocessing step that computes k (e.g., 500) minimal values after applying the corresponding permutations to each data vector. We developed a parallelization scheme using GPUs and observed that the preprocessing time can be reduced by a factor of 20-80, becoming substantially smaller than the data-loading time. 2. One major advantage of b-bit minwise hashing is that it can substantially reduce the amount of memory required for batch learning. However, as online algorithms become increasingly popular for large-scale learning in the context of search, it is not clear whether b-bit minwise hashing yields significant improvements for them. This paper demonstrates that b-bit minwise hashing provides an effective data size/dimension reduction scheme and hence it can d...
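    The core estimator can be sketched as follows (a hedged toy version, not the paper's GPU code; the linear "permutations", the parameter K = 128, and the simplified collision correction are assumptions — the paper's estimator is more refined):

```python
import random
random.seed(7)

K, B = 128, 2   # number of permutations, and bits kept per min-hash value

# random linear hash functions standing in for permutations, a common practical choice
params = [(random.randrange(1, 2**31), random.randrange(2**31)) for _ in range(K)]

def minhash(s):
    # for each "permutation", keep only the lowest B bits of the minimum hash:
    # this is the b-bit truncation that shrinks each signature entry to B bits
    return [min((a * x + b) % (2**31 - 1) for x in s) & ((1 << B) - 1)
            for a, b in params]

def estimate_jaccard(sig1, sig2):
    # b-bit values also match by chance with probability about 2^-B,
    # so subtract that collision floor (a simplified correction)
    match = sum(u == v for u, v in zip(sig1, sig2)) / len(sig1)
    c = 1.0 / (1 << B)
    return max(0.0, (match - c) / (1 - c))

A = set(range(0, 100))
Bset = set(range(20, 120))          # true Jaccard = 80 / 120 ≈ 0.667
est = estimate_jaccard(minhash(A), minhash(Bset))
print(round(est, 2))
```

    Storing B = 2 bits instead of, say, 64 bits per entry is the memory reduction that makes batch and online learning over massive data feasible.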

  18. ANALISA FUNGSI HASH DALAM ENKRIPSI IDEA UNTUK KEAMANAN RECORD INFORMASI

    Directory of Open Access Journals (Sweden)

    Ramen Antonov Purba

    2014-02-01

    Full Text Available Issues of security and confidentiality of data are very important to organizations and individuals, especially if the data reside in a network of computers connected to a public network such as the Internet. Important data viewed or hijacked by unauthorized persons may be corrupted or even lost, causing huge material losses. This research discusses a security system for sending messages/data using encryption, which aims to protect a message from people who are not authorized to access it. Because the scope of such delivery-security systems is very broad, this section is limited to parsing the IDEA algorithm with hash functions, covering encryption and decryption. By combining IDEA (International Data Encryption Algorithm) encryption of the contents of the messages/data with a hash function to detect changes to the content of the messages/data, the security level is expected to improve. The result of this study is software that can encrypt and decrypt messages/data and generate a security key based on the encrypted message/data.

  19. A Novel Visual Cryptographic Method for Color Images

    Directory of Open Access Journals (Sweden)

    Amarjot Singh

    2013-05-01

    Full Text Available Visual cryptography is considered to be a vital technique for hiding visual data from intruders. Because of its importance, it finds applications in various sectors, such as e-voting systems, financial documents, and copyright protection. A number of methods have been proposed in the past for encrypting color images, such as color decomposition, contrast manipulation, the polynomial method, using the difference in color intensity values in a color image, etc. The major flaws of most of the earlier proposed methods are the complexity encountered when implementing the methods on a wide-scale basis, the problem of random pixelation, and the insertion of noise into the encrypted images. This paper presents a simple and highly resistant algorithm for visual cryptography performed on color images. The main advantages of the proposed cryptographic algorithm are its robustness and low computational cost combined with structural simplicity. The proposed algorithm outperformed the conventional methods when tested over sample images, proven using key analysis, SSIM, and histogram-analysis tests. In addition, the proposed method overshadows the standard method in terms of the signal-to-noise ratio (SNR) obtained for the encrypted image. The paper also presents a worst-case analysis of the SNR values for both methods.
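    For background, the classical (2,2) visual secret-sharing scheme that such methods build on can be sketched for a binary image (a hedged textbook example, not the paper's color algorithm; the subpixel patterns are the standard horizontal pairs):

```python
import random
random.seed(3)

# (2,2) visual secret sharing for a binary image: each secret pixel becomes a
# pair of subpixels in each share; stacking (OR-ing) the shares reveals the secret
PATTERNS = [(0, 1), (1, 0)]

def split(image):
    s1, s2 = [], []
    for row in image:
        r1, r2 = [], []
        for pixel in row:                  # 0 = white, 1 = black
            p = random.choice(PATTERNS)
            r1.extend(p)
            # white pixel: shares carry the SAME pattern (stack = half black);
            # black pixel: shares carry COMPLEMENTARY patterns (stack = all black)
            r2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    # physically overlaying transparencies acts as a pixelwise OR
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0], [0, 1]]
sh1, sh2 = split(secret)
overlay = stack(sh1, sh2)
# black secret pixels stack to two black subpixels; white ones to one black, one white
print(overlay)
```

    Each share alone is a uniformly random pattern (every subpixel pair has exactly one black cell), so a single share leaks nothing: that perfect-secrecy property is what the color-image schemes generalize.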

  20. Exploiting the HASH Planetary Nebula Research Platform

    CERN Document Server

    Parker, Quentin A; Frew, David J

    2016-01-01

    The HASH (Hong Kong/ AAO/ Strasbourg/ H{\alpha}) planetary nebula research platform is a unique data repository with a graphical interface and SQL capability that offers the community powerful, new ways to undertake Galactic PN studies. HASH currently contains multi-wavelength images, spectra, positions, sizes, morphologies and other data whenever available for 2401 true, 447 likely, and 692 possible Galactic PNe, for a total of 3540 objects. An additional 620 Galactic post-AGB stars, pre-PNe, and PPN candidates are included. All objects were classified and evaluated following the precepts and procedures established and developed by our group over the last 15 years. The complete database contains over 6,700 Galactic objects including the many mimics and related phenomena previously mistaken or confused with PNe. Curation and updating currently occur on a weekly basis to keep the repository as up to date as possible until the official release of HASH v1 planned in the near future.

  1. Ant-Crypto, a Cryptographer for Data Encryption Standard

    Directory of Open Access Journals (Sweden)

    Salabat Khan

    2013-01-01

    Full Text Available Swarm intelligence and evolutionary techniques are attracting cryptanalysts in the field of cryptography. This paper presents a novel swarm-based attack called Ant-Crypto (Ant-Cryptographer) for the cryptanalysis of the Data Encryption Standard (DES). Ant-Crypto is based on Binary Ant Colony Optimization (BACO): a directed graph over a binary search space is modeled for efficiently searching for the optimum result (in our case, the original encryption key). Evolutionary techniques are becoming attractive because traditional techniques and brute-force attacks are inapplicable against Feistel ciphers, whose structure is built on high nonlinearity and low autocorrelation. Ant-Crypto uses a known-plaintext attack to recover the secret key of DES, which is required to break/decipher the secret messages. Ant-Crypto iteratively searches for the secret key, generating several candidate optimum keys that are guessed across different runs on the basis of the routes completed by the ants. These optimum keys are then used to determine each individual bit of the 56-bit secret key used by DES during encryption. Ant-Crypto is compared with other state-of-the-art evolutionary attacks, i.e., Genetic Algorithm and Comprehensive Binary Particle Swarm Optimization. The experimental results show that Ant-Crypto is an effective evolutionary attack against DES and can deduce a larger number of valuable bits than the other evolutionary algorithms, in terms of both time and space complexity.

  2. Cryptographic Key Management and Critical Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL

    2014-05-01

    The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) CyberSecurity for Energy Delivery Systems (CSEDS) industry-led program (DE-FOA-0000359) entitled "Innovation for Increasing CyberSecurity for Energy Delivery Systems (12CSEDS)" awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). As a result of that award, Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC entered into a CRADA (NFE-11-03562). ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated its analysis on the AMI domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat causing a component failure and the probability of a component failure causing a security requirement violation. We applied this model to estimate the security of the AMI by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 62351, Part 9 to identify the life cycle for cryptographic key management, resulting in a vector that assigns each stakeholder an estimate of their average loss in terms of dollars per day of system

  3. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    Science.gov (United States)

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUIDs and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of the PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registration system of study subjects. Results There are 127,700 error-planted subjects, of whom 114,464 (89.64%) can still be identified as the previously registered subject, while the remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and the type of incorrect PII fields. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII fields. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can
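The core mechanism — hashing every combination of PII fields so that entries can be matched even when one field is mistyped — can be sketched as below. The field names and the minimum combination size are illustrative assumptions, not the paper's exact GUID matching rules.

```python
import hashlib
from itertools import combinations

def pii_hash_codes(pii: dict, min_fields: int = 2) -> set:
    """Hash every combination of PII fields (field names are illustrative)."""
    fields = sorted(pii.items())          # canonical field order
    codes = set()
    for r in range(min_fields, len(fields) + 1):
        for combo in combinations(fields, r):
            text = "|".join(f"{k}={v}" for k, v in combo)
            codes.add(hashlib.sha256(text.encode()).hexdigest())
    return codes

stored = pii_hash_codes({"first": "ann", "last": "lee", "dob": "1980-01-02"})
entry  = pii_hash_codes({"first": "ann", "last": "lee", "dob": "1980-01-20"})

# Combinations that avoid the mistyped field still match, so a partial
# overlap flags a questionable entry instead of registering a new subject.
overlap = stored & entry
assert 0 < len(overlap) < len(stored)
```

Which combinations fail to match narrows down the set of questionable fields, mirroring the paper's set-theoretic error-checking step, all without any PII leaving the client.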

  4. Scalable prediction of compound-protein interactions using minwise hashing.

    Science.gov (United States)

    Tabei, Yasuo; Yamanishi, Yoshihiro

    2013-01-01

    The identification of compound-protein interactions plays a key role in drug development toward the discovery of new drug leads and new therapeutic protein targets. There is therefore a strong incentive to develop new efficient methods for predicting compound-protein interactions on a genome-wide scale. In this paper we develop a novel chemogenomic method to make a scalable prediction of compound-protein interactions from heterogeneous biological data using minwise hashing. The proposed method mainly consists of two steps: 1) construction of new compact fingerprints for compound-protein pairs by an improved minwise hashing algorithm, and 2) application of a sparsity-induced classifier to the compact fingerprints. We test the proposed method on its ability to make a large-scale prediction of compound-protein interactions from compound substructure fingerprints and protein domain fingerprints, and show superior performance of the proposed method compared with previous chemogenomic methods in terms of prediction accuracy, computational efficiency, and interpretability of the predictive model. None of the previously developed methods is computationally feasible for the full dataset consisting of about 200 million compound-protein pairs. The proposed method is expected to be useful for virtual screening of a huge number of compounds against many protein targets.
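The first step — compressing a large sparse fingerprint into a short minwise-hash signature whose agreement rate estimates set similarity — can be sketched as follows. Simulating random permutations with salted built-in hashing and the toy feature names are assumptions; the paper uses an improved (b-bit style) scheme rather than this plain MinHash.

```python
import random

def minhash(items: frozenset, num_hashes: int = 64, seed: int = 7) -> list:
    """MinHash signature of a fingerprint set; random permutations are
    simulated with salted hashing (an assumption, not the paper's scheme)."""
    salts = [random.Random(seed + i).getrandbits(32) for i in range(num_hashes)]
    return [min(hash((s, x)) for x in items) for s in salts]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    # Fraction of agreeing signature slots estimates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# A compound-protein pair is fingerprinted by compound substructures plus
# protein domains (toy feature names, for illustration).
pair_a = frozenset({"ring", "OH", "C=O", "PF00069"})
pair_b = frozenset({"ring", "OH", "C=O", "PF00069", "NH2"})

est = estimated_jaccard(minhash(pair_a), minhash(pair_b))
true = len(pair_a & pair_b) / len(pair_a | pair_b)   # 4/5
assert abs(est - true) < 0.3
```

The signatures are a fixed, small size regardless of fingerprint dimensionality, which is what makes the downstream sparse classifier feasible at the scale of hundreds of millions of pairs.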

  5. Similarity Search and Locality Sensitive Hashing using TCAMs

    CERN Document Server

    Shinde, Rajendra; Gupta, Pankaj; Dutta, Debojyoti

    2010-01-01

    Similarity search methods are widely used as kernels in various machine learning applications. Nearest neighbor search (NNS) algorithms are often used to retrieve similar entries, given a query. While there exist efficient techniques for exact query lookup using hashing, similarity search using exact nearest neighbors is known to be a hard problem and in high dimensions, best known solutions offer little improvement over a linear scan. Fast solutions to the approximate NNS problem include Locality Sensitive Hashing (LSH) based techniques, which need storage polynomial in $n$ with exponent greater than $1$, and query time sublinear, but still polynomial in $n$, where $n$ is the size of the database. In this work we present a new technique of solving the approximate NNS problem in Euclidean space using a Ternary Content Addressable Memory (TCAM), which needs near linear space and has O(1) query time. In fact, this method also works around the best known lower bounds in the cell probe model for the query time us...

  6. Sequential Compact Code Learning for Unsupervised Image Hashing.

    Science.gov (United States)

    Liu, Li; Shao, Ling

    2016-12-01

    Effective hashing for large-scale image databases is a popular research area, attracting much attention in computer vision and visual information retrieval. Several recent methods attempt to learn either graph embedding or semantic coding for fast and accurate applications. In this paper, a novel unsupervised framework, termed evolutionary compact embedding (ECE), is introduced to automatically learn the task-specific binary hash codes. It can be regarded as an optimization algorithm that combines the genetic programming (GP) and a boosting trick. In our architecture, each bit of ECE is iteratively computed using a weak binary classification function, which is generated through GP evolving by jointly minimizing its empirical risk with the AdaBoost strategy on a training set. We address this as greedy optimization by embedding high-dimensional data points into a similarity-preserved Hamming space with a low dimension. We systematically evaluate ECE on two data sets, SIFT 1M and GIST 1M, showing the effectiveness and the accuracy of our method for a large-scale similarity search.

  7. Evaluating Locality Sensitive Hashing for Matching Partial Image Patches in a Social Media Setting

    Directory of Open Access Journals (Sweden)

    Shaun Bangay

    2014-01-01

    Full Text Available Images posted to a social media site can employ image completion techniques to efficiently and seamlessly remove sensitive content and safeguard privacy. Image completion algorithms typically employ a time-consuming patch-matching stage derived from nearest-neighbour search algorithms. Typical patch-matching processes perform poorly in the social media context, which involves once-off edits on a range of high-resolution images with plentiful exemplar material. We make use of hash tables to accelerate the matching stage. Our refinement is the development of a set of perceptually inspired hash functions that can exploit locality and provide a categorization consistent across any exemplar image. Descriptors derived from principal component analysis (PCA), after training on the exemplar database, are used for comparison. Aggregation of descriptors improves accuracy, and we adapt a probabilistic approach using randomly oriented hyperplanes to employ multiple descriptors in a single hash table. Hash table strategies demonstrate a substantial improvement in performance over a brute-force strategy, and perceptually inspired features provide levels of accuracy comparable with those trained on the data using PCA descriptors. The aggregation strategies further improve accuracy, although measurement of this is confounded by the non-uniform distribution of the aggregated keys. Evaluation with increasing levels of missing data demonstrates that the use of hashing continues to perform well relative to the Euclidean metric benchmark. The patch-matching process using aggregated perceptually inspired descriptors produces comparable results with a substantial reduction in matching time when used for image completion in photographic images. While sensitivity to structural elements is identified as an issue, the complexity of the resulting process is well suited to bulk manipulation of high-resolution images for use in social media.
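The randomly-oriented-hyperplane trick mentioned above can be sketched as below: each hyperplane contributes one sign bit, and the resulting bit pattern is the hash-table key, so nearby descriptors tend to share a bucket. The dimensions, plane count, and patch values are illustrative assumptions.

```python
import random

def hyperplane_hash(vec: list, planes: list) -> int:
    """Bucket key: the sign pattern of vec against random hyperplanes."""
    bits = 0
    for plane in planes:
        dot = sum(v * w for v, w in zip(vec, plane))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

rng = random.Random(0)
dim, n_planes = 16, 8
planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

# Index an exemplar patch descriptor, then probe with a nearby descriptor.
table = {}
patch = [rng.random() for _ in range(dim)]
table.setdefault(hyperplane_hash(patch, planes), []).append("exemplar-patch")
near = [x + 0.01 * rng.gauss(0, 1) for x in patch]
candidates = table.get(hyperplane_hash(near, planes), [])
# Nearby descriptors usually share a bucket; in practice several tables with
# independent hyperplanes are combined to boost recall.
```

A probe therefore touches one bucket per table instead of scanning the whole exemplar set, which is where the speedup over brute force comes from.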

  8. Research on an Improved Digital Signcryption Algorithm Based on Directed Elliptic Curves

    Institute of Scientific and Technical Information of China (English)

    陈画

    2013-01-01

    This article first expounds the basics of digital signatures and digital signcryption. Based on a comprehensive analysis of existing digital signature and signcryption schemes, and exploiting the hardness of the elliptic curve discrete logarithm problem, this paper proposes an improved digital signcryption algorithm based on directed elliptic curves, with the optimization goal of reducing the number of inversion operations in the signcryption verification process. "Directed" here means that only the designated recipient can de-signcrypt. Experiments show that the improved algorithm significantly accelerates the computation.

  9. Code Specialization for Memory Efficient Hash Tries

    NARCIS (Netherlands)

    Steindorfer, M.; Vinju, J.J.

    2014-01-01

    The hash trie data structure is a common part in standard collection libraries of JVM programming languages such as Clojure and Scala. It enables fast immutable implementations of maps, sets, and vectors, but it requires considerably more memory than an equivalent array-based data structure. This hi

  10. Fortification of Transport Layer Security Protocol with Hashed Fingerprint Identity Parameter

    Directory of Open Access Journals (Sweden)

    Kuljeet Kaur

    2012-03-01

    Full Text Available Establishing identity over public links becomes quite complex, as client and server need proper access rights with authentication. To determine a client's identity by password, the Secure Shell protocol or a Public Key Infrastructure is deployed by various organizations. For end-to-end transport security, SSL (Secure Socket Layer) is the de facto standard, with its Record and Handshake protocols dealing with data integrity and data security respectively. It seems secure, but many risks lurk in its use, so the focus of this paper is formulating steps for the enhancement of SSL. This research paper adds one more tier of security to the transport layer security protocol by using fingerprints for identity authentication along with passwords. Bio-hashing, performed with the help of minutiae points of the fingerprints, is used for mutual authentication. A new hash algorithm, RNA-FINNT, is proposed in this research paper for converting minutiae points into a hashed code. The value of the hashed code is stored in the database in the multi-server environment of an organization. The paper performs mutual authentication in the multi-server environment of an organization using both fingerprint and password as identity authentication parameters. This strengthens the Record and Handshake protocols, which enhances SSL, and further enhancement of SSL results in the fortification of the Transport Layer Security protocol.
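The general shape of turning minutiae points into a storable hashed code can be sketched as below. This is a generic stand-in for the paper's RNA-FINNT algorithm, whose internals are not reproduced here; the tuple layout, canonicalization, and salting are all assumptions.

```python
import hashlib

def minutiae_digest(minutiae: list, salt: bytes) -> str:
    """Hash a list of (x, y, angle) minutiae points into a fixed code.
    Generic stand-in for RNA-FINNT, for illustration only."""
    # Canonical ordering so the extraction order of points does not matter.
    canonical = ",".join(f"{x}:{y}:{a}" for x, y, a in sorted(minutiae))
    return hashlib.sha256(salt + canonical.encode()).hexdigest()

enrolled  = minutiae_digest([(10, 22, 45), (31, 7, 130)], salt=b"server-salt")
presented = minutiae_digest([(31, 7, 130), (10, 22, 45)], salt=b"server-salt")
assert enrolled == presented      # same fingerprint, same stored code
```

Only the digest is stored server-side, so the database never holds raw biometric data; the digest plus the password form the two authentication parameters used during the handshake.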

  11. Broadcast authentication for wireless sensor networks using nested hashing and the Chinese remainder theorem.

    Science.gov (United States)

    Eldefrawy, Mohamed Hamdy; Khan, Muhammad Khurram; Alghathbar, Khaled; Cho, Eun-Suk

    2010-01-01

    Secure broadcasting is an essential feature for critical operations in wireless sensor networks (WSNs). However, due to the limited resources of sensor networks, verifying the authenticity of broadcast messages is a very difficult issue. μTESLA is a broadcast authentication protocol which uses network-wide loose time synchronization with one-way hashed keys to provide authenticity verification. However, it suffers from several flaws concerning delay tolerance and chain length restriction. In this paper, we propose a protocol which provides broadcast authentication for wireless sensor networks. This protocol uses a nested hash chain of two different hash functions and the Chinese Remainder Theorem (CRT). The two different nested hash functions are employed for seed updating and key generation. Each sensor node is challenged independently with a common broadcast message using the CRT. Our algorithm provides forward and non-restricted key generation and, in addition, requires no time synchronization. Furthermore, receivers can instantly authenticate packets in real time. Moreover, a comprehensive analysis shows that this scheme is efficient and practical, and can achieve better performance than the μTESLA system.
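The one-way key chain underlying such schemes can be sketched as follows. For simplicity a single SHA-256 chain is shown, whereas the paper nests two different hash functions (one for seed updating, one for key generation); that simplification and the seed value are assumptions.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def make_chain(seed: bytes, length: int) -> list:
    """Build a one-way chain: K[0] is the public anchor, K[i] = H(K[i+1])."""
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    chain.reverse()
    return chain

chain = make_chain(b"base-station-seed", 5)
anchor = chain[0]           # distributed to sensor nodes beforehand

# Keys are disclosed in order K[1], K[2], ...; a receiver authenticates each
# one by hashing it back to the most recently trusted key.
trusted = anchor
for key in chain[1:]:
    assert H(key) == trusted
    trusted = key
```

Because H is one-way, an attacker who sees disclosed keys still cannot compute the next undisclosed key, while any receiver holding the anchor can verify disclosures instantly.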

  12. Broadcast Authentication for Wireless Sensor Networks Using Nested Hashing and the Chinese Remainder Theorem

    Directory of Open Access Journals (Sweden)

    Eun-Suk Cho

    2010-09-01

    Full Text Available Secure broadcasting is an essential feature for critical operations in wireless sensor networks (WSNs). However, due to the limited resources of sensor networks, verifying the authenticity of broadcast messages is a very difficult issue. μTESLA is a broadcast authentication protocol which uses network-wide loose time synchronization with one-way hashed keys to provide authenticity verification. However, it suffers from several flaws concerning delay tolerance and chain length restriction. In this paper, we propose a protocol which provides broadcast authentication for wireless sensor networks. This protocol uses a nested hash chain of two different hash functions and the Chinese Remainder Theorem (CRT). The two different nested hash functions are employed for seed updating and key generation. Each sensor node is challenged independently with a common broadcast message using the CRT. Our algorithm provides forward and non-restricted key generation and, in addition, requires no time synchronization. Furthermore, receivers can instantly authenticate packets in real time. Moreover, a comprehensive analysis shows that this scheme is efficient and practical, and can achieve better performance than the μTESLA system.

  13. Cryptographic Trust Management Requirements Specification: Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Edgar, Thomas W.

    2009-09-30

    The Cryptographic Trust Management (CTM) Project is being developed for Department of Energy, OE-10 by the Pacific Northwest National Laboratory (PNNL). It is a component project of the NSTB Control Systems Security R&D Program.

  14. Cryptographic Technique Used Lower and Upper Triangular Decomposition Method

    Directory of Open Access Journals (Sweden)

    B. KumaraswamyAchary,

    2016-02-01

    Full Text Available In this paper, the main cryptographic technique we use is the affine cipher for encryption and decryption, combined with a technique from linear algebra: lower and upper (LU) triangular decomposition.
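The affine-cipher half of the scheme can be sketched as below; the LU-decomposition step is omitted, and restricting the alphabet to uppercase A-Z with modulus 26 is an assumption. Decryption needs the modular inverse of the multiplier, computed here with the extended Euclidean algorithm.

```python
def egcd(a: int, b: int):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a: int, m: int) -> int:
    g, x, _ = egcd(a % m, m)
    if g != 1:
        raise ValueError("multiplier must be coprime with the modulus")
    return x % m

def affine_encrypt(text: str, a: int, b: int, m: int = 26) -> str:
    # E(x) = (a*x + b) mod m over uppercase A-Z, for illustration.
    return "".join(chr((a * (ord(c) - 65) + b) % m + 65) for c in text)

def affine_decrypt(text: str, a: int, b: int, m: int = 26) -> str:
    # D(y) = a^{-1} * (y - b) mod m
    inv = modinv(a, m)
    return "".join(chr(inv * (ord(c) - 65 - b) % m + 65) for c in text)

ct = affine_encrypt("HELLO", a=5, b=8)
assert ct != "HELLO" and affine_decrypt(ct, a=5, b=8) == "HELLO"
```

The multiplier `a` must be coprime with 26 (e.g. 5, 7, 11), otherwise no inverse exists and decryption is ambiguous.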

  15. Frame Interpolation Based on Visual Correspondence and Coherency Sensitive Hashing

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2013-01-01

    Full Text Available The technology of frame interpolation can be applied in intelligent monitoring systems to improve the quality of surveillance video. In this paper, a region-guided frame interpolation algorithm is proposed that introduces two innovative improvements. On the one hand, a detection approach based on visual correspondence is presented for detecting the motion regions that correspond to objects of interest in video sequences, which narrows the prediction range of the interpolated frames. On the other hand, spatial and temporal mapping rules using coherency sensitive hashing are proposed, which obtain more accurate predicted values for the interpolated pixels. Experiments show that the proposed method achieves encouraging performance in terms of visual quality and quantitative measures.

  16. Research on a Hash-Based Algorithm for Recovering Different Versions of YAFFS2 Files

    Institute of Scientific and Technical Information of China (English)

    李亚萌; 何泾沙

    2016-01-01

    In digital forensics, Android forensic technology is currently a hot research topic, with research interests including data extraction and data recovery; among these, data recovery is one of the most important steps, since only by recovering as much terminal data as possible, especially data that has been modified or deleted, can subsequent analysis proceed. YAFFS2 is a new, fast flash file system designed for mobile devices that use NAND flash, and it is widely used in Android devices. This paper therefore takes YAFFS2 as its research object and proposes a hash-based algorithm for recovering different versions of YAFFS2 files. First, the file system is scanned in reverse to obtain its data; then, chunks sharing the same object header are extracted and their information is stored in a hash linked list; finally, the files are reconstructed to recover multiple versions. Experiments performed in a YAFFS2 environment built on Linux show that the algorithm can effectively recover different types of files, in particular SQLite3 files, as well as different versions of files of different types, laying a foundation for follow-up research on Android forensics.

  17. A Class of Hash Functions Based on Quasigroups

    Institute of Scientific and Technical Information of China (English)

    池相会; 徐允庆

    2012-01-01

    Hash functions are cryptographic algorithms used in information security. In this paper, using the theory of finite fields and residue class rings, a class of hash functions based on quasigroup operations with good collision resistance is given, and an analysis of its security is also presented.

  18. Fair Micropayment System Based on Hash Chains

    Institute of Scientific and Technical Information of China (English)

    YANG Zongkai; LANG Weimin; TAN Yunmeng

    2005-01-01

    Micropayment schemes usually do not provide fairness, which means that either the customer or the merchant, or both, can cheat the other and gain a financial advantage by misusing the protocols. This paper proposes an efficient hash chain-based micropayment scheme: an offline, prepaid scheme that supports simple divisibility of digital coins. In the execution of the payment protocol, the customer's disbursement and the merchant's submittal proceed step by step, so that neither party can gain additional profit even by breaking off the transaction. The hash chain can also be used for transactions with different merchants. Unlike other micropayment schemes, e.g., PayWord, no public-key operation is required, which improves efficiency. The scheme also provides restricted anonymity.
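The hash-chain payment mechanism common to this family of schemes (PayWord-style) can be sketched as below; the seed value and chain length are illustrative, and this omits the scheme's fairness and multi-merchant machinery.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def mint_chain(secret: bytes, n: int) -> list:
    """w[n] = secret, w[i] = H(w[i+1]); w[0] is the root the customer
    commits to (e.g. inside a signed certificate) before spending."""
    words = [secret]
    for _ in range(n):
        words.append(H(words[-1]))
    words.reverse()
    return words

words = mint_chain(b"customer-secret", 10)
root = words[0]

def verify_payment(root: bytes, w_i: bytes, i: int) -> bool:
    """Spending i coins reveals w[i]; the merchant checks it with i hashes."""
    for _ in range(i):
        w_i = H(w_i)
    return w_i == root

assert verify_payment(root, words[3], 3)        # honest payment of 3 coins
assert not verify_payment(root, words[3], 2)    # understating the amount fails
```

Each incremental payment reveals the next preimage, so the merchant always holds proof of exactly how much was spent, and no public-key operation is needed per payment.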

  19. Locality-sensitive Hashing without False Negatives

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2016-01-01

    We consider a new construction of locality-sensitive hash functions for Hamming space that is covering in the sense that it is guaranteed to produce a collision for every pair of vectors within a given radius r. The construction is efficient in the sense that the expected number of hash collisions essentially matches the bound of the best data-independent LSH when cr = log(n)/k, where n is the number of points in the data set and k ∊ N, and differs from it by at most a factor ln(4) in the exponent for general values of cr. As a consequence, LSH-based similarity search in Hamming space can avoid the problem of false negatives at little or no cost in efficiency.

  20. Hash Functions and Information Theoretic Security

    Science.gov (United States)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  1. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Knudsen, Lars Ramkilde; Naderi, Majid

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  2. Hashing in computer science fifty years of slicing and dicing

    CERN Document Server

    Konheim, Alan G

    2009-01-01

    Written by one of the developers of the technology, Hashing is both a historical document on the development of hashing and an analysis of the applications of hashing in a society increasingly concerned with security. The material in this book is based on courses taught by the author, and key points are reinforced in sample problems and an accompanying instructor's manual. Graduate students and researchers in mathematics, cryptography, and security will benefit from this overview of hashing and the complicated mathematics that it requires.

  3. A MICROPAYMENT SCHEME BASED ON WEIGHTED MULTI DIMENSIONAL HASH CHAIN

    Institute of Scientific and Technical Information of China (English)

    Liu Yining; Hu Lei; Liu Heguo

    2006-01-01

    Hash chains and their generalization, the Multi-Dimensional Hash Chain (MDHC), have been widely used in the design of micropayment schemes due to their simplicity and efficiency. In this letter, a more efficient variant of MDHC, called WMDHC, is proposed, which endows each hash value in the MDHC structure with a weight through a well-defined mapping. The average number of hash operations in WMDHC is log(2m/t), which is better than the log(m) of MDHC for the typically suggested parameter value t = 7.

  4. Novel Duplicate Address Detection with Hash Function.

    Science.gov (United States)

    Song, GuangJia; Ji, ZhenZhou

    2016-01-01

    Duplicate address detection (DAD) is an important component of the address resolution protocol (ARP) and the neighbor discovery protocol (NDP). DAD determines whether an IP address is in conflict with other nodes. In traditional DAD, the target address to be detected is broadcast through the network, which provides convenience for malicious nodes to attack. A malicious node can send a spoofed reply to prevent the address configuration of a normal node, and thus a denial-of-service attack is launched. This study proposes a hash method to hide the target address in DAD, which prevents an attacking node from launching targeted attacks. If the address of a normal node is identical to the detected address, then its hash value should be the same as the "Hash_64" field in the neighbor solicitation message, and DAD can be successfully completed. This process is called DAD-h. Simulation results indicate that address configuration using DAD-h has a considerably higher success rate under attack compared with traditional DAD. Comparative analysis shows that DAD-h requires neither third-party devices nor considerable computing resources; it also provides a lightweight security solution.
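The hiding mechanism can be sketched as below: the solicitation carries only a 64-bit digest of the tentative address, and a node replies only when its own address hashes to the same value. The paper fixes the field name "Hash_64" but not the hash function, so truncated SHA-256 and the sample addresses are assumptions.

```python
import hashlib

def hash_64(address: str) -> str:
    """64-bit digest of an address; truncated SHA-256 is an assumption."""
    return hashlib.sha256(address.encode()).hexdigest()[:16]

# The detecting node broadcasts only the digest, never the tentative address.
probe = hash_64("fe80::0202:b3ff:fe1e:8329")

# A node replies only when its own address hashes to the same "Hash_64"
# value, so an attacker cannot learn the target address from the probe.
my_address = "fe80::0202:b3ff:fe1e:8329"
assert hash_64(my_address) == probe              # genuine conflict detected
assert hash_64("fe80::1") != probe               # other nodes stay silent
```

An attacker observing the solicitation sees only the digest and cannot feasibly invert it to forge a conflicting reply for an arbitrary tentative address.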

  5. Universally composable anonymous Hash certification model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; MA JianFeng; SangJae MOON

    2007-01-01

    Ideal function is the fundamental component in the universally composable security model. However, the certification ideal function defined in the universally composable security model realizes identity authentication by binding an identity to messages and the signature, which fails to characterize the special security requirements of anonymous authentication with other kinds of certificates. Therefore, inspired by the work of Marten, an anonymous hash certification ideal function and a more universal certificate CA model are proposed in this paper. We define the security requirements and security notions for this model in the framework of universally composable security and prove in the plain model (not in the random-oracle model) that these security notions can be achieved using combinations of a secure digital signature scheme, a symmetric encryption mechanism, a family of pseudorandom functions, and a family of one-way collision-free hash functions. Considering the limitations of the wireless environment and the computational ability of wireless devices, this anonymous hash certification ideal function is realized using symmetric primitives.

  6. Security Processing for High End Embedded System with Cryptographic Algorithms

    Directory of Open Access Journals (Sweden)

    M.Shankar

    2012-01-01

    Full Text Available This paper is intended to introduce embedded system designers and design tool developers to the challenges involved in designing secure embedded systems. The challenges unique to embedded systems require new approaches to security covering all aspects of embedded system design, from architecture to implementation. Security processing, which refers to the computations that must be performed in a system for the purpose of security, can easily overwhelm the computational capabilities of processors in both low- and high-end embedded systems. The paper also briefs on the security enforced in a device by the use of proprietary security technology, and discusses the security measures taken during the production of the device. We also survey solution techniques to address these challenges, drawing from both current practice and emerging research, and identify open research problems that will require innovations in embedded system architecture and design methodologies.

  7. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    Science.gov (United States)

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
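The substring-table idea can be sketched as below in a simplified form: with codes split into `parts` disjoint substrings, any two codes within Hamming distance r < parts must agree exactly on at least one substring (pigeonhole), so per-substring hash lookups find every true neighbor. The full algorithm also enumerates substring neighborhoods to handle larger r; the 64-bit width, 4-way split, and sample codes here are assumptions.

```python
from collections import defaultdict

def substrings(code: int, bits: int = 64, parts: int = 4) -> list:
    """Split a binary code into equal-width disjoint substrings."""
    width = bits // parts
    mask = (1 << width) - 1
    return [(code >> (i * width)) & mask for i in range(parts)]

class MultiIndexHash:
    """Simplified multi-index hashing for exact r-neighbor search, valid
    when r < parts so the pigeonhole guarantee applies."""
    def __init__(self, codes: list, parts: int = 4):
        self.codes, self.parts = codes, parts
        self.tables = [defaultdict(list) for _ in range(parts)]
        for idx, code in enumerate(codes):
            for table, sub in zip(self.tables, substrings(code, parts=parts)):
                table[sub].append(idx)

    def query(self, q: int, r: int) -> list:
        candidates = set()
        for table, sub in zip(self.tables, substrings(q, parts=self.parts)):
            candidates.update(table[sub])
        # Final exact Hamming check removes false positives.
        return [i for i in candidates
                if bin(self.codes[i] ^ q).count("1") <= r]

db = [0xA000000000000007, 0xFFFFFFFFFFFFFFFF, 0x0123456789ABCDEF]
index = MultiIndexHash(db)
assert index.query(db[2] ^ 0b101, r=3) == [2]   # two flipped bits: found
assert index.query(0, r=1) == []                # nothing within distance 1
```

Each query touches one bucket per table plus a cheap verification pass, which is what yields the sub-linear behavior for well-distributed codes.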

  8. Classifying sets of attributed scattering centers using a hash coded database

    Science.gov (United States)

    Dungan, Kerry E.; Potter, Lee C.

    2010-04-01

    We present a fast, scalable method to simultaneously register and classify vehicles in circular synthetic aperture radar imagery. The method is robust to clutter, occlusions, and partial matches. Images are represented as a set of attributed scattering centers that are mapped to local sets, which are invariant to rigid transformations. Similarity between local sets is measured using a method called pyramid match hashing, which applies a pyramid match kernel to compare sets and a Hamming distance to compare hash codes generated from those sets. By preprocessing a database into a Hamming space, we are able to quickly find the nearest neighbor of a query among a large number of records. To demonstrate the algorithm, we simulated X-band scattering from ten civilian vehicles placed throughout a large scene, varying elevation angles in the 35 to 59 degree range. We achieved better than 98 percent classification performance. We also classified seven vehicles in a 2006 public release data collection with 100% success.

  9. Side channel analysis of some hash based MACs:A response to SHA-3 requirements

    DEFF Research Database (Denmark)

…function which is more resistant to known side channel attacks (SCA) when plugged into HMAC, or that has an alternative MAC mode which is more resistant to known SCA than the other submitted alternatives. In response to this, we perform differential power analysis (DPA) on the possible smart card implementations of some of the recently proposed MAC alternatives to NMAC (a fully analyzed variant of HMAC) and HMAC algorithms, and NMAC/HMAC versions of some recently proposed hash and compression function modes. We show that the recently proposed BNMAC and KMDP MAC schemes are even weaker than NMAC/HMAC against the DPA attacks, whereas multi-lane NMAC, EMD MAC and the keyed wide-pipe hash have similar security to NMAC against the DPA attacks. Our DPA attacks do not work on the NMAC setting of MDC-2, Grindahl and MAME compression functions. This talk outlines our results.

  10. Improving Seek Time for Column Store Using MMH Algorithm

    CERN Document Server

    Apte, Tejaswini; Goyal, Dr A K

    2012-01-01

Hash-based search has proven excellent on large data warehouses stored in column stores. Data distribution has a significant impact on hash-based search. To reduce the impact of data distribution, we have proposed the Memory Managed Hash (MMH) algorithm, which uses a shift-XOR group for queries and transactions in a column store. Our experiments show that MMH improves read and write throughput by 22% for the TPC-H distribution.
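The shift-XOR idea the record mentions can be illustrated with a toy hash function (the constants and mixing steps below are illustrative assumptions, not the authors' MMH):

```python
def shift_xor_hash(key, table_size, shift=5):
    """Toy shift-XOR hash in the spirit of the shift-XOR family the record
    mentions (the shift amounts and masking here are illustrative, not the
    authors' MMH): each byte is folded in with shifts and XORs, so buckets
    stay well spread even for skewed key distributions."""
    h = 0
    for b in key.encode("utf-8"):
        h = ((h << shift) ^ (h >> 2) ^ b) & 0xFFFFFFFF  # mix byte into state
    return h % table_size

print(shift_xor_hash("customer_42", 1024))
```

Because every byte is XOR-folded through shifted copies of the running state, nearby keys land in different buckets, which is the property that matters for hash-based search over clustered column-store data.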

  11. Improved Collision Attack on Hash Function MD5

    Institute of Scientific and Technical Information of China (English)

    Jie Liang; Xue-Jia Lai

    2007-01-01

In this paper, we present a fast attack algorithm to find two-block collisions of the hash function MD5. The algorithm is based on the two-block collision differential path of MD5 that was presented by Wang et al. at EUROCRYPT 2005. We found that the derived conditions for the desired collision differential path were not sufficient to guarantee the path to hold and that some conditions could be modified to enlarge the collision set. By using the technique of small-range searching and omitting the computing steps that check the characteristics in the attack algorithm, we can speed up the attack on MD5 efficiently. Compared with the Advanced Message Modification technique presented by Wang et al., the small-range searching technique can correct 4 more conditions for the first iteration differential and 3 more conditions for the second iteration differential, thus improving the probability and reducing the complexity of finding collisions. The whole attack on MD5 can be accomplished within 5 hours using a PC with a Pentium 4 1.70 GHz CPU.
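The digest-and-compare integrity check that such collision attacks undermine looks roughly like this (hashlib's MD5 is shown only because it is the record's subject; given attacks of this kind, a modern hash such as SHA-256 is the appropriate drop-in replacement in practice):

```python
import hashlib

def digest(msg: bytes) -> str:
    """Message digest for the integrity check described in the record.
    MD5 is used here purely to illustrate the workflow the attack targets;
    collision attacks make it unsuitable for real integrity protection --
    hashlib.sha256 would be the modern replacement."""
    return hashlib.md5(msg).hexdigest()

sent = b"wire transfer: 100 EUR to account A"
received = sent  # in reality, received over a channel that may tamper
assert digest(received) == digest(sent)  # integrity check passes
print(digest(sent))
```

A two-block collision attack produces two *different* messages with equal digests, so the equality check above can pass even though the received message was substituted.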

  12. Matching of structural motifs using hashing on residue labels and geometric filtering for protein function prediction.

    Science.gov (United States)

    Moll, Mark; Kavraki, Lydia E

    2008-01-01

    There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Our focus is on methods that determine binding site similarity. Although several such methods exist, it still remains a challenging problem to quickly find all functionally-related matches for structural motifs in large data sets with high specificity. In this context, a structural motif is a set of 3D points annotated with physicochemical information that characterize a molecular function. We propose a new method called LabelHash that creates hash tables of n-tuples of residues for a set of targets. Using these hash tables, we can quickly look up partial matches to a motif and expand those matches to complete matches. We show that by applying only very mild geometric constraints we can find statistically significant matches with extremely high specificity in very large data sets and for very general structural motifs. We demonstrate that our method requires a reasonable amount of storage when employing a simple geometric filter and further improves on the specificity of our previous work while maintaining very high sensitivity. Our algorithm is evaluated on 20 homolog classes and a non-redundant version of the Protein Data Bank as our background data set. We use cluster analysis to analyze why certain classes of homologs are more difficult to classify than others. The LabelHash algorithm is implemented on a web server at http://kavrakilab.org/labelhash/.
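The core lookup structure of LabelHash, hash tables keyed by n-tuples of residue labels, can be sketched as follows (an illustrative simplification with made-up residue labels and target names; the real method also expands partial matches to complete matches and applies geometric filtering):

```python
from itertools import combinations

def build_label_table(targets, n=3):
    """Illustrative sketch of the LabelHash indexing idea (not the authors'
    code): index every n-tuple of residue labels occurring in each target,
    so partial matches to a motif reduce to a constant-time table lookup."""
    table = {}
    for name, residues in targets.items():
        for tup in combinations(sorted(residues), n):
            table.setdefault(tup, set()).add(name)
    return table

# Hypothetical targets with residue labels (names invented for the example).
targets = {
    "1abc": ["HIS57", "ASP102", "SER195", "GLY193"],
    "2xyz": ["HIS57", "ASP102", "LYS33", "SER195"],
}
table = build_label_table(targets, n=3)

motif = ("ASP102", "HIS57", "SER195")  # catalytic-triad-like motif
print(sorted(table.get(tuple(sorted(motif)), set())))  # → ['1abc', '2xyz']
```

Canonically sorting each tuple makes the lookup order-independent, which is what lets a single hash probe retrieve every target containing the motif's residues.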

  13. ANALYSIS AND ESTIMATION OF THE TRIE MINIMUM LEVEL IN NON-HASH DEDUPLICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    M. A. Zhukov

    2015-05-01

Full Text Available Subject of research. The paper deals with a method of restricting the minimum level of the trie in a non-hash data deduplication system. Method. The essence of the method lies in forcibly completing the trie to a specified minimum level. The proposed method makes it possible to increase the performance of the process by reducing the number of collisions at the lower levels of the trie. The maximum theoretical performance growth corresponds to the share of collisions in the total number of data read operations from the storage medium. Applying the proposed method increases the metadata size by the amount of the new structures containing one element. Main results. The results of the work have been confirmed by a computational experiment with non-hash deduplication on a 528 GB data set. Analysis of the process has shown that 99% of the execution time is taken by head positioning of the hard drives, the reason being the random distribution of the blocks on the storage medium. On the experimental data set, applying the minimum-level restriction for the trie in a non-hash data deduplication system increases performance by at most 16%, while the metadata size grows by 49%. The total amount of metadata is 34% less than with hash-based deduplication using the MD5 algorithm, and 17% less than with the Tiger192 algorithm. These results confirm the effectiveness of the proposed method. Practical relevance. The proposed method increases the performance of the deduplication process by reducing the number of collisions during trie construction. The results are of practical importance for professionals involved in the development of non-hash data deduplication methods.

  14. A New Approach in Cryptographic Systems Using Fractal Image Coding

    Directory of Open Access Journals (Sweden)

    Nadia M.G. Al-Saidi

    2009-01-01

Full Text Available Problem statement: With the rapid development of communications and information transmission there is a growing demand for new approaches that increase the security of cryptographic systems. Approach: Emerging theories, such as fractals, can be adopted to contribute toward this goal. In this study we proposed a new cryptographic system utilizing fractal theories; this approach exploits the main feature of fractals generated by IFS techniques. Results: Double enciphering and double deciphering methods were performed to enhance the security of the system. The encrypted data represent the attractor generated by the IFS transformation; the collage theorem was used to find the IFSM for decrypting the data. Conclusion/Recommendations: The proposed method makes it possible to hide a maximum amount of data in an image that represents the attractor of the IFS without degrading its quality, and to make the hidden data robust enough to withstand known cryptographic attacks and image-processing techniques that do not change the appearance of the image.

  15. Limits on the Power of Cryptographic Cheap Talk

    DEFF Research Database (Denmark)

    Hubacek, Pavel; Nielsen, Jesper Buus; Rosen, Alon

    2013-01-01

We revisit the question of whether cryptographic protocols can replace correlated equilibria mediators in two-player strategic games. This problem was first addressed by Dodis, Halevi and Rabin (CRYPTO 2000), who suggested replacing the mediator with a secure protocol and proved that their solution is stable in the Nash equilibrium (NE) sense, provided that the players are computationally bounded. We show that there exist two-player games for which no cryptographic protocol can implement the mediator in a sequentially rational way; that is, without introducing empty threats. This explains why all… and sufficient cryptographic assumptions for implementing a mediator that allows to achieve a given utility profile of a correlated equilibrium. The picture that emerges is somewhat different than the one arising in semi-honest secure two-party computation. Specifically, while in the latter case every…

  16. 9 CFR 319.303 - Corned beef hash.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corned beef hash. 319.303 Section 319... Products § 319.303 Corned beef hash. (a) “Corned Beef Hash” is the semi-solid food product in the form of a compact mass which is prepared with beef, potatoes, curing agents, seasonings, and any of the...

  17. Design and Analysis of Multivariate Hash Function

    Institute of Scientific and Technical Information of China (English)

    王后珍; 张焕国; 杨飚

    2011-01-01

A novel hash algorithm whose security is based on the difficulty of solving systems of nonlinear multivariate polynomial equations over a finite field is designed and implemented. We propose the idea of building a secure hash using higher-degree multivariate polynomials as the compression function. Compared with the hash algorithms currently in widespread use, the new algorithm has the following advantages: its security rests on a recognized hard mathematical problem; the length of the output hash value can be varied freely according to the needs of the user; randomness is introduced globally, by selecting the hash function at random from a family of hash functions rather than by randomizing the message itself; and the design is automated, so users can construct hash functions that meet their specific requirements. We analyze the security, efficiency and performance of the new algorithm in detail, where the compression functions are randomly chosen polynomials of degree 3, and describe its concrete construction through simulation experiments. The experimental results show that the new algorithm is comparable with other hash functions in efficiency and performance.

  18. Reverse Hash Chain Traversal Based on Binary Tree

    Institute of Scientific and Technical Information of China (English)

    傅建庆; 吴春明; 吴吉义; 平玲娣

    2012-01-01

An algorithm that improves the time and space complexity of reverse hash chain traversal is proposed. It implements efficient reverse traversal using stack operations and maps the traversal of a reverse hash chain onto the postorder traversal of a binary tree, reducing the product of the total number of hash operations and the required storage space to O(n(lb n)²), where n is the length of the reverse hash chain. Analysis and proof using the properties of the binary tree show that the proposed algorithm needs to store only ⌊lb n⌋ + 1 node values at a time and performs no more than (⌊lb n⌋/2 + 1)n hash operations in total. Unlike comparable algorithms, it can be applied to hash chains of any length, eliminating the limitation that the chain length must be an integer power of 2. An advanced algorithm is then proposed by mapping the traversal onto the postorder traversal of a k-ary tree, where k is an integer no less than 3; the required storage is reduced to ⌈log_k[(k-1)n + 1]⌉, but the number of hash operations rises to [(⌈log_k[(k-1)n + 1]⌉ - 1)k/2 + 1]n. Finally, another variant first splits the hash chain into p equal segments (p ≥ 2), which lowers the total number of hash operations to (⌊lb(n/p)⌋/2 + 1)n but raises the required storage to (⌊lb(n/p)⌋ + 1)p.
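The storage/recomputation trade-off behind reverse hash chain traversal can be demonstrated with a much simpler checkpointing scheme (a naive sketch, not the paper's binary-tree pebbling algorithm; the segment parameter p mirrors the chain-splitting idea mentioned at the end of the abstract):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Forward chain v[0] = seed, v[i] = H(v[i-1]); values are later
    released in reverse order v[n], v[n-1], ..., v[0]."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def reverse_traverse(seed: bytes, n: int, p: int):
    """Naive checkpointing: store about p evenly spaced chain values,
    then recompute each released value from its nearest checkpoint.
    Storing more checkpoints costs memory but saves recomputation --
    the same trade-off the tree-based algorithms optimize."""
    step = max(1, n // p)
    checkpoints = {}
    v = seed
    for i in range(n + 1):
        if i % step == 0 or i == n:
            checkpoints[i] = v
        if i < n:
            v = H(v)
    for i in range(n, -1, -1):          # release v[n] down to v[0]
        base = max(k for k in checkpoints if k <= i)
        v = checkpoints[base]
        for _ in range(i - base):       # recompute within the segment
            v = H(v)
        yield v

chain = make_chain(b"seed", 16)
assert list(reverse_traverse(b"seed", 16, p=4)) == chain[::-1]
print("ok")
```

The tree-based algorithm in the record achieves the same reverse release with only logarithmic storage by scheduling which intermediate values to keep, rather than spacing checkpoints uniformly.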

  19. Distributed hash table theory, platforms and applications

    CERN Document Server

    Zhang, Hao; Xie, Haiyong; Yu, Nenghai

    2013-01-01

    This SpringerBrief summarizes the development of Distributed Hash Table in both academic and industrial fields. It covers the main theory, platforms and applications of this key part in distributed systems and applications, especially in large-scale distributed environments. The authors teach the principles of several popular DHT platforms that can solve practical problems such as load balance, multiple replicas, consistency and latency. They also propose DHT-based applications including multicast, anycast, distributed file systems, search, storage, content delivery network, file sharing and c
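The basic DHT lookup can be illustrated with a toy consistent-hashing ring (a sketch of the general idea only; platforms such as Chord use 160-bit identifiers and finger tables for O(log N) routing, and the node names below are invented):

```python
import hashlib
from bisect import bisect_right

def ring_pos(key: str) -> int:
    """Map a key onto a 2^16-position identifier ring by truncating a
    SHA-1 digest (toy parameters; real DHTs use the full digest)."""
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:2], "big")

def lookup(nodes, key):
    """Route a key to its successor node: the first node clockwise from
    the key's ring position. Adding or removing one node only remaps the
    keys in that node's arc -- the load-balance property DHTs rely on."""
    ring = sorted((ring_pos(n), n) for n in nodes)
    positions = [p for p, _ in ring]
    i = bisect_right(positions, ring_pos(key)) % len(ring)
    return ring[i][1]

nodes = ["node-a", "node-b", "node-c", "node-d"]
owner = lookup(nodes, "some-file.mp3")
print(owner)
```

A full DHT distributes this lookup: instead of one process scanning the sorted ring, each node keeps a small routing table and forwards the query toward the successor in O(log N) hops.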

  20. Side channel analysis of some hash based MACs: A response to SHA-3 requirements

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2008-01-01

The forthcoming NIST Advanced Hash Standard (AHS) competition to select the SHA-3 hash function requires that each candidate hash function submission must have at least one construction to support the FIPS 198 HMAC application. As part of its evaluation, NIST is aiming to select either a candidate hash...

  1. Using hardware-assisted geometric hashing for high-speed target acquisition and guidance

    Science.gov (United States)

    Pears, Arnold N.; Pissaloux, Edwige E.

    1997-06-01

Geometric hashing provides a reliable and transformation-independent representation of a target. The characterization of a target object is obtained by establishing a vector basis relative to a number of interest points unique to the target. The number of basis points required is a function of the dimensionality of the environment in which the technique is being used. This basis is used to encode the other points in the object, constructing a highly general (transformation-independent) representation of the target. The representation is invariant under both affine and geometric transformations of the target interest points. Once a representation of the target has been constructed, a simple voting algorithm can be used to examine sets of interest points extracted from subsequent images in order to determine the possible presence and location of that target. Once an instance of the object has been located, further computation can be undertaken to determine its scale, orientation, and deformation due to changes in viewpoint parameters. This information can be further analyzed to provide guidance. This paper discusses the complexity measures associated with task division and target image processing using geometric hashing. These measures are used to determine the areas which will most benefit from hardware assistance and possible parallelism. These issues are discussed in the context of an architecture design, and a high-speed (hardware-assisted) geometric hashing approach to target recognition is proposed.

  2. Feasibility and Completeness of Cryptographic Tasks in the Quantum World

    NARCIS (Netherlands)

    Fehr, S.; Katz, J.; Song, F.; Zhou, H.S.; Zikas, V.; Sahai, A.

    2013-01-01

    It is known that cryptographic feasibility results can change by moving from the classical to the quantum world. With this in mind, we study the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security

  3. Cryptographic Path Hardening: Hiding Vulnerabilities in Software through Cryptography

    CERN Document Server

    Ganesh, Vijay; Rinard, Martin

    2012-01-01

We propose a novel approach to improving software security called Cryptographic Path Hardening, which is aimed at hiding security vulnerabilities in software from attackers through the use of provably secure and obfuscated cryptographic devices to harden paths in programs. By "harden" we mean that certain error-checking if-conditionals in a given program P are replaced by equivalent… we mean that adversaries cannot use semi-automatic program analysis techniques to reason about the hardened program paths and thus cannot discover as-yet-unknown errors along those paths, except perhaps through black-box dictionary attacks or random testing (which we can never prevent). Other than these unpreventable attack methods, we can make program analysis aimed at error-finding "provably hard" for a resource-bounded attacker, in the same sense that cryptographic schemes are hard to break. Unlike security-through-obscurity, in Cryptographic Path Hardening we use provably-secure crypto devices to hide errors, and our mathemati...

  4. Cryptographic protocol verification using tractable classes of horn clauses

    DEFF Research Database (Denmark)

    Seidl, Helmut; Neeraj Verma, Kumar

    2007-01-01

    We consider secrecy problems for cryptographic protocols modeled using Horn clauses and present general classes of Horn clauses which can be efficiently decided. Besides simplifying the methods for the class of flat and onevariable clauses introduced for modeling of protocols with single blind...

  5. On fairness in simulatability-based cryptographic systems

    NARCIS (Netherlands)

    Backes, M.; Hofheinz, D.; Müller-Quade, J.; Unruh, D.

    2005-01-01

    Simulatability constitutes the cryptographic notion of a secure refinement and has asserted its position as one of the fundamental concepts of modern cryptography. Although simulatability carefully captures that a distributed protocol does not behave any worse than an ideal specification, it however

  6. Supervised hashing using graph cuts and boosted decision trees.

    Science.gov (United States)

    Lin, Guosheng; Shen, Chunhua; Hengel, Anton van den

    2015-11-01

    To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.

  7. Source Coding Using Families of Universal Hash Functions

    OpenAIRE

    Koga, Hiroki

    2007-01-01

This correspondence is concerned with new connections between source coding and two kinds of families of hash functions known as the families of universal hash functions and N-strongly universal hash functions, where N ≥ 2 is an integer. First, it is pointed out that such families contain classes of well-known source codes such as bin codes and linear codes. Next, the performance of a source coding scheme using either of the two kinds of families is evaluated. An upper bound on the expectation ...
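A standard member of a universal family is the Carter-Wegman construction h_{a,b}(x) = ((ax + b) mod p) mod m, which collides on any two distinct keys with probability at most roughly 1/m over the random choice of (a, b); a minimal sketch:

```python
import random

# Carter-Wegman universal hash family: h_{a,b}(x) = ((a*x + b) mod p) mod m,
# with a prime p larger than the key universe. For any fixed x != y, a
# randomly drawn member of the family maps them to the same bucket with
# probability at most about 1/m -- the defining property of the universal
# families the record connects to source coding.
P = 2**61 - 1  # a Mersenne prime, comfortably above the key universe

def make_hash(m, rng=random):
    """Draw one member h_{a,b} of the family at random."""
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_hash(1024, random.Random(0))
print(h(42), h(43))
```

In the source-coding view, drawing h at random assigns each source sequence to one of m bins, and the low pairwise collision probability is what makes bin decoding reliable.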

  8. A new class of codes for Boolean masking of cryptographic computations

    CERN Document Server

    Carlet, Claude; Kim, Jon-Lark; Solé, Patrick

    2011-01-01

We introduce a new class of rate one-half binary codes: complementary information set codes. A binary linear code of length 2n and dimension n is called a complementary information set code (CIS code for short) if it has two disjoint information sets. This class of codes contains self-dual codes as a subclass. It is connected to graph correlation-immune Boolean functions of use in the security of hardware implementations of cryptographic primitives. Such codes make it possible to reduce the cost of masking cryptographic algorithms against side channel attacks. In this paper we investigate this new class of codes: we give optimal or best known CIS codes of length < 132. We derive general constructions based on cyclic codes and on double circulant codes. We derive a Varshamov-Gilbert bound for long CIS codes, and show that they can all be classified in small lengths ≤ 12 by the building-up construction. Some nonlinear S-boxes are constructed by using Z4-codes, based on the notion of dual distance of an unrestricte...

  9. IMPLEMENTATION OF NEURAL - CRYPTOGRAPHIC SYSTEM USING FPGA

    Directory of Open Access Journals (Sweden)

    KARAM M. Z. OTHMAN

    2011-08-01

Full Text Available Modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. The system designed here uses conventional cryptography, with one key for both the encryption and decryption processes. The chosen algorithm is a stream cipher, which encrypts one bit at a time. The central problem in stream-cipher cryptography is the difficulty of generating a long unpredictable sequence of binary signals from a short random key. Pseudo-random number generators (PRNG) have been widely used to construct this key sequence. Here, the pseudo-random number generator was designed using an Artificial Neural Network (ANN), which provides the nonlinearity that increases the statistical randomness properties of the generator. The network was trained with the backpropagation learning algorithm; the learning was carried out in software using a Matlab program to obtain the final weights. The trained neural network was then implemented on a field programmable gate array (FPGA).
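The stream-cipher core the record describes, XORing plaintext bits with a PRNG keystream, can be sketched as follows (a linear congruential generator stands in for the ANN-based PRNG, purely for illustration; it is not cryptographically secure):

```python
def keystream(seed: int, n: int):
    """Toy keystream generator standing in for the ANN-based PRNG the
    record describes (a linear congruential generator here, chosen only
    for illustration -- NOT cryptographically secure)."""
    x = seed
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31
        yield x & 1  # emit one keystream bit per step

def stream_cipher(bits, seed):
    """Stream cipher: XOR each plaintext bit with one keystream bit.
    Because XOR is its own inverse, encryption and decryption are the
    same operation with the same seed."""
    return [b ^ k for b, k in zip(bits, keystream(seed, len(bits)))]

plain = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = stream_cipher(plain, seed=2021)
assert stream_cipher(cipher, seed=2021) == plain  # decrypt = re-encrypt
print(cipher)
```

The security of the whole scheme reduces to the unpredictability of `keystream`, which is exactly why the record replaces a simple generator like this one with a trained neural network.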

  10. SEVERAL PRINCIPLES AND METHODS FOR DESIGNING CRYPTOGRAPHIC PROTOCOLS

    Institute of Scientific and Technical Information of China (English)

    赵华伟; 刘月

    2011-01-01

Traditional cryptographic protocol design focuses mainly on security in an idealized environment. In order to design practical and secure cryptographic protocols, we first study and summarize the main attacks on cryptographic protocols in the ideal environment and propose four protocol design principles that avoid common design defects. Then, based on a study of message integrity, we propose a protocol transformation algorithm that can turn a cryptographic protocol that is secure in the ideal environment into one that is secure in a real environment, and we prove the security of the algorithm. The proposed transformation algorithm helps in designing cryptographic protocols that run securely in real environments.

  11. A Dual Chaotic Hash Function Based on Cellular Neural Network

    Institute of Scientific and Technical Information of China (English)

    刘慧; 赵耿; 白健

    2014-01-01

Fast and efficient one-way hash functions are a current focus of security research. This paper constructs a hash function using a cellular neural network whose parameters are produced by a dual chaotic system combining the Logistic map with the Chebyshev map. The plaintext is processed block by block, and the final 128-bit hash value is obtained by XORing the hash values of all blocks. Experimental data and simulation analysis show that the proposed scheme satisfies the confusion and diffusion properties required of a one-way hash function, and that it has good weak-collision resistance and sensitivity to initial values. Moreover, the scheme has a simple structure and is easy to implement.
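The block-wise chaotic hashing idea can be illustrated with a much-simplified sketch (my own toy construction using only the Logistic map, not the authors' cellular-neural-network design; it is for intuition only and offers no real security):

```python
def chaotic_hash(message: bytes, rounds_per_byte: int = 4) -> int:
    """Toy chaotic hash in the spirit of the record (an illustrative
    simplification, NOT the authors' scheme): each message byte perturbs
    the state of a logistic map x -> 3.99*x*(1-x), the map is iterated,
    and the trajectory is folded into a 128-bit digest."""
    x = 0.5
    h = 0
    for byte in message:
        x = (x + byte / 255.0) / 2              # inject the plaintext byte
        for _ in range(rounds_per_byte):
            x = 3.99 * x * (1.0 - x)            # chaotic iteration
        h = ((h << 13) | (h >> 115)) & (2**128 - 1)  # rotate 128-bit state
        h ^= int(x * 2**32)                     # fold trajectory into digest
    return h

d1 = chaotic_hash(b"hello world")
d2 = chaotic_hash(b"hello worle")  # change in the last byte
print(hex(d1))
print(d1 != d2)
```

The sensitivity to initial conditions of the chaotic map plays the role that nonlinear diffusion plays in conventional designs: a one-byte change in the input sends the trajectory, and hence the digest, somewhere entirely different.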

  12. Distributed Hash Tables: Design and Applications

    Science.gov (United States)

    Chan, C.-F. Michael; Chan, S.-H. Gary

    The tremendous growth of the Internet and large-scale applications such as file sharing and multimedia streaming require the support of efficient search on objects. Peer-to-peer approaches have been proposed to provide this search mechanism scalably. One such approach is the distributed hash table (DHT), a scalable, efficient, robust and self-organizing routing overlay suitable for Internet-size deployment. In this chapter, we discuss how scalable routing is achieved under node dynamics in DHTs. We also present several applications which illustrate the power of DHTs in enabling large-scale peer-to-peer applications. Since wireless networks are becoming increasingly popular, we also discuss the issues of deploying DHTs and various solutions in such networks.

  13. Chameleon Hashes Without Key Exposure Based on Factoring

    Institute of Scientific and Technical Information of China (English)

    Wei Gao; Xue-Li Wang; Dong-Qing Xie

    2007-01-01

Chameleon hash is the main primitive used to construct a chameleon signature scheme, which provides non-repudiation and non-transferability simultaneously. However, the initial chameleon hash schemes suffer from the key exposure problem: non-transferability is based on an unsound assumption that the designated receiver is willing to abuse his private key regardless of its exposure. Recently, several key-exposure-free chameleon hashes have been constructed based on the RSA assumption and the SDH (strong Diffie-Hellman) assumption. In this paper, we propose a factoring-based chameleon hash scheme which is proven to enjoy all the advantages of the previous schemes. In order to support it, we propose a variant Rabin signature scheme which is proven secure against a new type of attack in the random oracle model.
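The trapdoor-collision property that defines a chameleon hash can be demonstrated with the textbook discrete-log construction (shown in the Krawczyk-Rabin style for illustration; the record's scheme is based on factoring instead, and the toy parameters below are far too small for real use):

```python
# Discrete-log chameleon hash: CH(m, r) = g^m * y^r mod p with y = g^x.
# The holder of the trapdoor x can compute a colliding (m2, r2) for any
# (m1, r1); without x, finding collisions is as hard as discrete log.
p, q, g = 467, 233, 4   # toy group: p = 2q + 1, g of prime order q
x = 57                  # trapdoor (the receiver's secret key)
y = pow(g, x, p)        # public key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * y^r mod p."""
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m1, r1, m2):
    """With trapdoor x, find r2 such that CH(m2, r2) == CH(m1, r1):
    solve m2 + x*r2 = m1 + x*r1 in the exponent, modulo q."""
    return (r1 + (m1 - m2) * pow(x, -1, q)) % q

m1, r1 = 10, 99
m2 = 123
r2 = collide(m1, r1, m2)
assert ch(m1, r1) == ch(m2, r2)
print("collision:", ch(m1, r1))
```

It is exactly this trapdoor collision ability that makes chameleon signatures non-transferable, and its misuse by the receiver is the key exposure problem the record's factoring-based scheme avoids.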

  14. Exploring Butane Hash Oil Use: A Research Note.

    Science.gov (United States)

    Miller, Bryan Lee; Stogner, John M; Miller, J Mitchell

    2016-01-01

    The practice of "dabbing" has seen an apparent upswing in popularity in recent months within American drug subcultures. "Dabbing" refers to the use of butane-extracted marijuana products that offer users much higher tetrahydrocannabinol content than flower cannabis through a single dosage process. Though considerably more potent than most marijuana strains in their traditional form, these butane hash oil products and the practice of dabbing are underexplored in the empirical literature, especially in prohibition states. A mixed-methods evaluation of a federally funded treatment program for drug-involved offenders identified a small sample (n = 6) of butane hash oil users and generated focus group interview data on the nature of butane hash oil, the practice of dabbing, and its effects. Findings inform discussion of additional research needed on butane hash oil and its implications for the ongoing marijuana legalization debate, including the diversity of users, routes of administration, and differences between retail/medical and prohibition states.

  15. Perceptual hashing of sheet music based on graphical representation

    Science.gov (United States)

    Kremser, Gert; Schmucker, Martin

    2006-02-01

For the protection of Intellectual Property Rights (IPR), different passive protection methods have been developed. These watermarking and fingerprinting technologies protect content beyond access control, making it possible to trace illegal distributions and to identify the people responsible for an illegal distribution. The public's attention was drawn to the second application in particular by the illegal distribution of the so-called 'Hollywood screeners'. The focus of current research is on audio and video content and images. These are the common content types we are faced with every day, and which mostly have a huge commercial value. The illegal distribution of content that has not been officially published especially shows the potential commercial impact of illegal distributions. Content types, however, are not limited to audio, video and images. There is a range of other content types which also deserve the development of passive protection technologies. For sheet music, for instance, different watermarking technologies have been developed, which up to this point only function within certain limitations. This is the reason why we wanted to find out how to develop a fingerprinting or perceptual hashing method for sheet music. In this article, we describe the development of our algorithm for sheet music, which is based on simple graphical features. We describe the selection of these features and the subsequent processing steps. The resulting compact representation is analyzed and the first performance results are reported.

  16. On another two cryptographic identities in universal Osborn loops

    Directory of Open Access Journals (Sweden)

    T. G. Jaiyéolá

    2010-03-01

Full Text Available In this study, by establishing an identity for universal Osborn loops, two other identities (of degrees 4 and 6) are deduced from it, and they are recognized and recommended for cryptography in a spirit similar to that in which the cross inverse property (of degree 2) has been used by Keedwell. This follows the observation that universal Osborn loops that do not have the 3-power associative property, or weaker forms of the inverse property, power associativity and diassociativity, to mention a few, will have cycles (even long ones). These identities are found to be cryptographic in nature for universal Osborn loops and are thereby called cryptographic identities. They were also found applicable to security patterns, arrangements and networks to which the CIP may not be applicable.

  17. DATA INTEGRITY IN THE AUTOMATED SYSTEM BASED ON LINEAR SYSTEMS HASHES

    Directory of Open Access Journals (Sweden)

    Savin S. V.

    2015-12-01

    Full Text Available To protect data integrity in automated systems, we provide a solution to the problem of reducing the redundancy of control information (hash codes, electronic signatures). We impose restrictions on the maximum number of integrity violations of the records in a data block. It is known that as the level of data protection increases, the amount of control information (the redundancy coefficient) also increases. We introduce the concept of linear systems of hash codes (LSHC). On the basis of the mathematical apparatus of the theory of vector systems, we have developed an algorithm for constructing an LSHC which, for a given level of data protection (i.e. integrity), reduces the redundancy of the control information. The rules (principles) for building an LSHC comply with the construction rules of coding theory (Hamming codes). The article provides an algorithm for checking data integrity with an LSHC. The use of these algorithms ensures the necessary level of data protection and meets customers' requirements specifications.
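
    The abstract does not give the LSHC construction itself, but the Hamming-code analogy it names can be sketched: instead of storing one hash per record, store XOR-parities of record hashes over Hamming-style index groups, which localizes a single corrupted record in a block while storing only about log2(n)+1 codes. A minimal illustration under that assumption (all names hypothetical, not the authors' scheme):

```python
import hashlib
from functools import reduce

def record_hash(record: bytes) -> int:
    return int.from_bytes(hashlib.sha256(record).digest(), "big")

def parity_hashes(records):
    """Hamming-style control information: one XOR-parity hash per bit
    position of the record index, plus an overall parity, instead of one
    stored hash per record."""
    n_bits = max(1, (len(records) - 1).bit_length())
    parities = []
    for bit in range(n_bits):
        group = [record_hash(r) for i, r in enumerate(records) if (i >> bit) & 1]
        parities.append(reduce(lambda a, b: a ^ b, group, 0))
    overall = reduce(lambda a, b: a ^ b, (record_hash(r) for r in records), 0)
    return parities, overall

def locate_single_change(records, parities, overall):
    """Return the index of the single modified record, or None if intact."""
    new_parities, new_overall = parity_hashes(records)
    if new_overall == overall:
        return None
    # Each parity group that changed contributes one bit of the index.
    return sum((1 << b) for b, (old, new)
               in enumerate(zip(parities, new_parities)) if old != new)
```

    With four records this stores three control hashes instead of four, yet still pinpoints which record in the block was tampered with.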

  18. Apparatus, system and method for providing cryptographic key information with physically unclonable function circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Areno, Matthew

    2015-12-08

    Techniques and mechanisms for providing a value from physically unclonable function (PUF) circuitry for a cryptographic operation of a security module. In an embodiment, a cryptographic engine receives a value from PUF circuitry and based on the value, outputs a result of a cryptographic operation to a bus of the security module. The bus couples the cryptographic engine to control logic or interface logic of the security module. In another embodiment, the value is provided to the cryptographic engine from the PUF circuitry via a signal line which is distinct from the bus, where any exchange of the value by either of the cryptographic engine and the PUF circuitry is for communication of the first value independent of the bus.

  19. A New Hybrid Algorithm for Association Rule Mining

    Institute of Scientific and Technical Information of China (English)

    ZHANG Min-cong; YAN Cun-liang; ZHU Kai-yu

    2007-01-01

    HA (hashing array), a new algorithm for mining frequent itemsets in large databases, is proposed. It employs a hash-array structure, ItemArray(), to store the information of the database and then uses it in place of the database in later iterations. With this improvement, only two scans of the whole database are necessary, so the computational cost can be reduced significantly. To overcome the performance bottleneck of frequent 2-itemset mining, a modified algorithm of HA, DHA (direct-addressing hashing and array), is proposed, which combines HA with a direct-addressing hashing technique. The new hybrid algorithm, DHA, not only overcomes the performance bottleneck but also inherits the advantages of HA. Extensive simulations are conducted in this paper to evaluate the performance of the proposed algorithm, and the results prove the new algorithm is more efficient and reasonable.
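
    The direct-addressing idea behind DHA can be sketched as follows: each candidate 2-itemset (i, j) gets its own dedicated counter slot in a flat array, so counting pairs requires no collision handling or tree traversal. A toy sketch of that counting step (not the authors' implementation):

```python
from itertools import combinations

def frequent_2itemsets(transactions, n_items, min_support):
    """Count all 2-itemsets with a direct-addressed upper-triangle array."""
    def slot(i, j):
        # Offset of row i in the upper triangle, plus the column offset.
        return i * n_items - i * (i + 1) // 2 + (j - i - 1)

    counts = [0] * (n_items * (n_items - 1) // 2)
    for t in transactions:
        for i, j in combinations(sorted(set(t)), 2):
            counts[slot(i, j)] += 1

    return {(i, j): counts[slot(i, j)]
            for i in range(n_items)
            for j in range(i + 1, n_items)
            if counts[slot(i, j)] >= min_support}
```
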

  20. Maximum Bipartite Matching Size And Application to Cuckoo Hashing

    CERN Document Server

    Kanizo, Yossi; Keslassy, Isaac

    2010-01-01

    Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
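
    A minimal sketch of the scheme the record analyzes: each key has two candidate buckets (left-side degree d = 2), insertions evict occupants along a random walk, and keys that cannot be placed after a bounded number of kicks land in a small stash instead of forcing a rehash. Parameters and structure are illustrative only:

```python
import random

class CuckooHashWithStash:
    """Two-choice cuckoo hash table with a small overflow stash."""

    def __init__(self, capacity, stash_size=4, max_kicks=32):
        self.table = [None] * capacity
        self.capacity = capacity
        self.stash = []
        self.stash_size = stash_size
        self.max_kicks = max_kicks

    def _slots(self, key):
        # Two independent bucket choices per key (d = 2).
        return (hash(("h1", key)) % self.capacity,
                hash(("h2", key)) % self.capacity)

    def insert(self, key):
        cur = key
        for _ in range(self.max_kicks):
            for s in self._slots(cur):
                if self.table[s] is None:
                    self.table[s] = cur
                    return True
            # Both candidate slots occupied: evict a random occupant.
            s = random.choice(self._slots(cur))
            cur, self.table[s] = self.table[s], cur
        # Give up after max_kicks evictions: park the item in the stash.
        if len(self.stash) < self.stash_size:
            self.stash.append(cur)
            return True
        return False

    def __contains__(self, key):
        return any(self.table[s] == key for s in self._slots(key)) \
            or key in self.stash
```

    The tradeoff studied in the record is exactly how large the stash must be relative to the table's over-provisioning for insertions to succeed with high probability.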

  1. New Results on Hashing, Labeling Schemes and Algorithms

    DEFF Research Database (Denmark)

    Knudsen, Mathias Bæk Tejs

    (n) nodes for the family of forests on n nodes. We hereby solve an open problem being raised repeatedly over decades, e.g. in Kannan, Naor, Rudich [STOC'88], Chung [J. of Graph Theory'90], Fraigniaud and Korman [SODA'10]. Joint work with Alstrup and Dahlgaard [FOCS'15]. Even cycle detection: For any...

  2. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, solving the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by a single deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
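
    The fusion step can be illustrated mechanically: generate a code from each of two subnetworks, concatenate them into one longer code, and rank database images by Hamming distance to the query's fused code. In the sketch below, keyed digests stand in for the two learned networks (the names 'net-A'/'net-B' are hypothetical), so it shows only the indexing mechanics, not learned similarity:

```python
import hashlib

def subnetwork_code(image_bytes: bytes, seed: str, n_bits: int = 24) -> int:
    """Stand-in for a trained deep hashing subnetwork: maps an image to an
    n-bit binary code. A real system would use a CNN; the seed plays the
    role of a different network architecture."""
    digest = hashlib.sha256(seed.encode() + image_bytes).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits)

def fused_code(image_bytes: bytes) -> int:
    """DHFI-style fusion: concatenate the codes of the two subnetworks."""
    a = subnetwork_code(image_bytes, "net-A")
    b = subnetwork_code(image_bytes, "net-B")
    return (a << 24) | b

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def retrieve(query: bytes, database: list, k: int = 3):
    """Rank database images by Hamming distance of fused codes."""
    q = fused_code(query)
    return sorted(database, key=lambda img: hamming(q, fused_code(img)))[:k]
```
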

  3. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications.

    Science.gov (United States)

    Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik

    2016-09-01

    In biomedical systems, data security is the most expensive resource for wireless body area network applications. Cryptographic algorithms are used in order to protect the information against unauthorised access. The Advanced Encryption Standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of the substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA(2)) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA(2)-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA(2)-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is found that the RCA(2)-based S-Box has comparatively better performance than the conventional LUT-based S-Box.
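
    Second-order reversible cellular automata update each cell from its current neighborhood XORed with the cell's previous state, which makes the dynamics exactly invertible; this reversibility is what an RCA(2)-based S-box construction builds on. A sketch of the mechanism with an arbitrary radius-1 rule (the paper's specific rules and S-box mapping differ):

```python
import random

def step(rule, prev, cur):
    """One second-order CA step on a ring: next[i] depends on cur's
    neighborhood via `rule`, XORed with prev[i] -- which makes the update
    exactly invertible (run the same step with the roles swapped)."""
    n = len(cur)
    return [rule(cur[(i - 1) % n], cur[i], cur[(i + 1) % n]) ^ prev[i]
            for i in range(n)]

def rule30(l, c, r):
    # Any Boolean radius-1 rule works; rule 30 is shown for illustration.
    return l ^ (c | r)

# Run forward 5 steps, then recover the initial pair by stepping backwards.
random.seed(1)
a = [random.randint(0, 1) for _ in range(8)]
b = [random.randint(0, 1) for _ in range(8)]
prev, cur = a, b
for _ in range(5):
    prev, cur = cur, step(rule30, prev, cur)
# Invert: the same step function, with the state pair reversed.
p2, c2 = cur, prev
for _ in range(5):
    p2, c2 = c2, step(rule30, p2, c2)
assert (p2, c2) == (b, a)
```
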

  4. Research on and improvement of the one-way Hash function SHA-256

    Institute of Scientific and Technical Information of China (English)

    何润民

    2013-01-01

    This paper focuses on the SHA-256 Hash algorithm, analyzing its logic and the construction of the compression function it employs. On this basis, an improved SHA-256 Hash function is designed, and its software implementation is completed using the VC++ development tools. Through theoretical analysis, and by comparing Hash computation results of the implemented software on strings and text files, it is verified that the improved Hash function has better nonlinearity, one-wayness, collision resistance, pseudo-randomness and avalanche effect.
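
    The avalanche effect evaluated in the paper can be observed directly on the standard SHA-256 (not the improved variant described here): changing a single character of the input flips roughly half of the 256 output bits.

```python
import hashlib

def sha256_bits(msg: bytes) -> str:
    """Full 256-bit digest as a binary string."""
    return format(int.from_bytes(hashlib.sha256(msg).digest(), "big"), "0256b")

def bit_changes(m1: bytes, m2: bytes) -> int:
    """Number of output bits that differ between the two digests."""
    return sum(a != b for a, b in zip(sha256_bits(m1), sha256_bits(m2)))

# Changing one character of the input should flip close to 128 of 256 bits.
flips = bit_changes(b"abc", b"abd")
print(flips)
```
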

  5. Asymmetric Unification: A New Unification Paradigm for Cryptographic Protocol Analysis

    OpenAIRE

    2013-01-01

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38574-2_16 We present a new paradigm for unification arising out of a technique commonly used in cryptographic protocol analysis tools that employ unification modulo equational theories. This paradigm relies on: (i) a decomposition of an equational theory into (R,E) where R is confluent, terminating, and coherent modulo E, and (ii) on reducing unification problems to a set of problems s =? t under the ...

  6. Cryptographically Blinded Games: Leveraging Players' Limitations for Equilibria and Profit

    DEFF Research Database (Denmark)

    Hubacek, Pavel; Park, Sunoo

    2014-01-01

    In this work we apply methods from cryptography to enable mutually distrusting players to implement broad classes of mediated equilibria of strategic games without trusted mediation. Our implementation uses a pre-play 'cheap talk' phase, consisting of non-binding communication between players prior to play in the original game. In the cheap talk phase, the players run a secure multi-party computation protocol to sample from an equilibrium of a "cryptographically blinded" version of the game, in which actions are encrypted

  7. XML Data Integrity Based on Concatenated Hash Function

    CERN Document Server

    Liu, Baolong; Yip, Jim

    2009-01-01

    Data integrity is fundamental to data authentication. A major problem for XML data authentication is that signed XML data can be copied to another document while the signature remains valid; this is a shortcoming of existing XML data integrity protection. Through investigation, the paper discovered that besides data content integrity, XML data integrity should also protect element location information and context referential integrity in fine-grained security situations. The aim of this paper is to propose a model for XML data integrity considering XML data features. The paper presents an XML data integrity model named CSR (content integrity, structure integrity, context referential integrity) based on a concatenated hash function. XML data content integrity is ensured using an iterative hash process, structure integrity is protected by hashing an absolute path string from the root node, and context referential integrity is ensured by protecting context-related elements. The presented XML data integrity model can satisfy int...
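
    The CSR idea can be sketched with a concatenated (iterative) hash: chain the content digest with the element's absolute path string and the identifiers of context-related elements, so that copying signed data to another location invalidates the digest. A simplified illustration (element names and chaining order hypothetical):

```python
import hashlib

def chained(data: bytes, prev: bytes = b"") -> bytes:
    """One concatenated-hash step: fold the previous digest into the next."""
    return hashlib.sha256(prev + data).digest()

def element_integrity(path: str, content: str, context_ids: list) -> bytes:
    """Bind content (content integrity), the absolute path from the root
    (structure integrity) and context-related element ids (context
    referential integrity) into one digest."""
    d = chained(content.encode())
    d = chained(path.encode(), d)
    for cid in context_ids:
        d = chained(cid.encode(), d)
    return d

orig = element_integrity("/order/item[2]/price", "42.00", ["currency"])
moved = element_integrity("/order/item[3]/price", "42.00", ["currency"])
assert orig != moved   # relocated data no longer matches its digest
```
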

  8. Perceptual image hashing based on virtual watermark detection.

    Science.gov (United States)

    Khelifi, Fouad; Jiang, Jianmin

    2010-04-01

    This paper proposes a new robust and secure perceptual image hashing technique based on virtual watermark detection. The idea is justified by the fact that the watermark detector responds similarly to perceptually close images using a non-embedded watermark. The hash values are extracted in binary form with perfect control over the probability distribution of the hash bits. Moreover, a key is used to generate pseudo-random noise whose real values contribute to the randomness of the feature vector, with a significantly increased uncertainty of the adversary, measured by mutual information, in comparison with linear correlation. Experimentally, the proposed technique has been shown to outperform related state-of-the-art techniques recently proposed in the literature in terms of robustness with respect to image processing manipulations and geometric attacks.

  9. Multiple hashes of single key with passcode for multiple accounts

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A human's e-life needs multiple offline and online accounts. Setting keys or passwords for these multiple accounts is a balance between usability and security. Password reuse has to be avoided due to the domino effect of malicious administrators and crackers; however, human memorability constrains the number of keys. Single sign-on servers, key hashing, key strengthening and petname systems are used in the prior art to use only one key for multiple online accounts. The unique site keys are derived from the common master secret and the specific domain name. These methods cannot be applied to offline accounts such as file encryption. We invent a new method and system applicable to both offline and online accounts. It does not depend on an HTTP server or domain name, but on a numeric 4-digit passcode, key hashing, key strengthening and hash truncation. The domain name is only needed to resist spoofing and phishing attacks on online accounts.
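
    The derivation pipeline named here (key hashing, key strengthening, hash truncation, with a passcode and account identifier instead of a domain name) can be sketched as follows. PBKDF2 is used as a stand-in for the strengthening step and every parameter is illustrative; the patented scheme's exact construction is not given in the record:

```python
import hashlib

def account_key(master_secret: str, account_id: str, passcode: str,
                rounds: int = 100_000, out_len: int = 16) -> str:
    """Derive a per-account key from one master secret. No HTTP server or
    domain name is needed, so the same derivation also covers offline
    accounts such as file encryption."""
    data = f"{master_secret}|{account_id}|{passcode}".encode()
    # Key strengthening: many iterations make brute force costly.
    digest = hashlib.pbkdf2_hmac("sha256", data, account_id.encode(), rounds)
    # Hash truncation: keep only a usable-length prefix.
    return digest.hex()[:out_len]
```

    Different account identifiers yield unrelated keys, so compromising one account key reveals nothing about the others.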

  10. Hash function construction using weighted complex dynamical networks

    Institute of Scientific and Technical Information of China (English)

    Song Yu-Rong; Jiang Guo-Ping

    2013-01-01

    A novel scheme to construct a hash function based on a weighted complex dynamical network (WCDN) generated from an original message is proposed in this paper. First, the original message is divided into blocks. Then, each block is divided into components, and the nodes and weighted edges are well defined from these components and their relations. Namely, the WCDN closely related to the original message is established. Furthermore, the node dynamics of the WCDN are chosen as a chaotic map. After chaotic iterations, quantization and exclusive-or operations, the fixed-length hash value is obtained. This scheme has the property that any tiny change in the message can be diffused rapidly through the WCDN, leading to very different hash values. Analysis and simulation show that the scheme possesses good statistical properties, excellent confusion and diffusion, strong collision resistance and high efficiency.

  11. Almost Universal Hash Families are also Storage Enforcing

    CERN Document Server

    Husain, Mohammad Iftekhar; Rudra, Atri; Uurtamo, Steve

    2012-01-01

    We show that every almost universal hash function also has the storage enforcement property. Almost universal hash functions have found numerous applications and we show that this new storage enforcement property allows the application of almost universal hash functions in a wide range of remote verification tasks: (i) Proof of Secure Erasure (where we want to remotely erase and securely update the code of a compromised machine with memory-bounded adversary), (ii) Proof of Ownership (where a storage server wants to check if a client has the data it claims to have before giving access to deduplicated data) and (iii) Data possession (where the client wants to verify whether the remote storage server is storing its data). Specifically, storage enforcement guarantee in the classical data possession problem removes any practical incentive for the storage server to cheat the client by saving on storage space. The proof of our result relies on a natural combination of Kolmogorov Complexity and List Decoding. To the ...

  12. An improved three party authenticated key exchange protocol using hash function and elliptic curve cryptography for mobile-commerce environments

    Directory of Open Access Journals (Sweden)

    S.K. Hafizul Islam

    2017-07-01

    Full Text Available In the literature, many three-party authenticated key exchange (3PAKE) protocols have been put forward to establish a secure session key between two users with the help of a trusted server. The computed session key ensures secure message exchange between the users over any insecure communication network. In this paper, we identify some deficiencies in Tan's 3PAKE protocol and then devise an improved 3PAKE protocol without symmetric key en/decryption techniques for mobile-commerce environments. The proposed protocol is based on elliptic curve cryptography and a one-way cryptographic hash function. In order to validate the security of the proposed 3PAKE protocol, we have used the widely accepted AVISPA software, whose results confirm that the proposed protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Beyond the AVISPA validation, the protocol is also secure against numerous relevant attacks such as the impersonation attack, parallel attack and key-compromise impersonation attack. In addition, our protocol is designed with lower computation cost than other relevant protocols. Therefore, the proposed protocol is more efficient and suitable for practical use in mobile-commerce environments than other protocols.

  13. Secure management of biomedical data with cryptographic hardware.

    Science.gov (United States)

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2012-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicitly, and potentially, identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations.

  14. Some Attacks On Quantum-based Cryptographic Protocols

    CERN Document Server

    Lo, H K; Lo, Hoi-Kwong; Ko, Tsz-Mei

    2003-01-01

    Quantum-based cryptographic protocols are often said to enjoy security guaranteed by the fundamental laws of physics. However, subtle attacks that are outside the original design of the protocols may allow eavesdroppers to break those protocols. As an example, we will give a peeking attack against a quantum key distribution scheme based on quantum memory. Moreover, if technological assumptions are made in the design of a quantum-based cryptographic protocol, then the actual security of the protocol may fall short of expectations. This is because it is often hard to quantify those technological assumptions in a precise manner. Here, we give an attack against a recently proposed ``secure communication using coherent state scheme''. Our attack requires only beamsplitters and the replacement of a lossy channel by a lossless one. It is successful provided that the original loss in the channel is so big that Eve can obtain 2^k copies of what Bob receives, where k is the length of the seed key pre-shared by Alice an...

  16. Secured Cryptographic Key Generation From Multimodal Biometrics: Feature Level Fusion of Fingerprint and Iris

    CERN Document Server

    Jagadeesan, A

    2010-01-01

    Human users have a tough time remembering long cryptographic keys. Hence, researchers, for so long, have been examining ways to utilize biometric features of the user instead of a memorable password or passphrase, in an effort to generate strong and repeatable cryptographic keys. Our objective is to incorporate the volatility of the user's biometric features into the generated key, so as to make the key unguessable to an attacker lacking significant knowledge of the user's biometrics. We go one step further trying to incorporate multiple biometric modalities into cryptographic key generation so as to provide better security. In this article, we propose an efficient approach based on multimodal biometrics (Iris and fingerprint) for generation of secure cryptographic key. The proposed approach is composed of three modules namely, 1) Feature extraction, 2) Multimodal biometric template generation and 3) Cryptographic key generation. Initially, the features, minutiae points and texture properties are extracted fr...

  17. Provable Data Possession Scheme based on Homomorphic Hash Function in Cloud Storage

    Directory of Open Access Journals (Sweden)

    Li Yu

    2016-01-01

    Full Text Available Cloud storage can satisfy the demand of accessing data at any time and from any place. In cloud storage, users will only feel comfortable using the service when they can verify that the cloud storage server possesses their data correctly. Provable data possession (PDP) makes it easy for a third party to verify whether the data held by the cloud storage server is intact. We analyze the existing PDP schemes and find that they have drawbacks, such as being computationally expensive or supporting only a limited number of possession verifications. This paper proposes a provable data possession scheme based on a homomorphic hash function to address the problems in the existing algorithms. The advantage of a homomorphic hash function is that it provides both provable data possession and data integrity protection. The scheme is a good way to ensure the integrity of remote data and to reduce redundant storage space and bandwidth consumption, without requiring users to retrieve the data. The main cost of the scheme falls on the server side, so it is suitable for mobile devices in cloud storage environments. We show that the scheme is feasible by analyzing its security and performance.
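
    The homomorphic property that makes such a scheme work can be demonstrated with the classic exponential hash h(m) = g^m mod p, for which h(m1 + m2) = h(m1)·h(m2) mod p: the verifier keeps only the small block hashes, challenges the server with random coefficients, and checks the combined answer. Toy parameters for illustration, not the paper's actual scheme:

```python
import random

# Multiplicative homomorphic hash: h(m) = g^m mod p.
p = 2**127 - 1          # a Mersenne prime (toy parameter)
g = 3

def h(m: int) -> int:
    return pow(g, m, p)

m1, m2 = 123456, 789012
assert h(m1 + m2) == (h(m1) * h(m2)) % p   # the homomorphic property

# Possession check: the verifier stores only the block hashes, sends fresh
# random challenge coefficients, and verifies the server's combined answer.
random.seed(7)
blocks = [random.randrange(2**32) for _ in range(4)]   # held by the server
stored = [h(b) for b in blocks]                        # held by the verifier
coeffs = [random.randrange(2**16) for _ in blocks]     # the challenge
response = sum(c * b for c, b in zip(coeffs, blocks))  # server's answer
expected = 1
for hv, c in zip(stored, coeffs):
    expected = (expected * pow(hv, c, p)) % p
assert h(response) == expected
```

    A server that no longer holds the blocks cannot produce a passing response, yet the verifier never retrieves the data itself.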

  18. Hashing hyperplane queries to near points with applications to large-scale active learning.

    Science.gov (United States)

    Vijayanarasimhan, Sudheendra; Jain, Prateek; Grauman, Kristen

    2014-02-01

    We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the entire database. For this problem, we propose two hashing-based solutions. Our first approach maps the data to 2-bit binary keys that are locality sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the data into a vector space where the Euclidean norm reflects the desired distance between the original points and the hyperplane query. Both use hashing to retrieve near points in sublinear time. Our first method's preprocessing stage is more efficient, while the second has stronger accuracy guarantees. We apply both to pool-based active learning: taking the current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection criterion. We empirically demonstrate our methods' tradeoffs and show that they make it practical to perform active selection with millions of unlabeled points.
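
    The first approach (angle-sensitive 2-bit keys) can be sketched as follows: database points are hashed with two random projections, while the hyperplane query negates the second bit of its key, so the collision probability peaks for points nearly orthogonal to the normal, i.e. close to the hyperplane. This is a reconstruction of the general idea, not the paper's exact scheme:

```python
import random

def sign_bit(u, z):
    return 1 if sum(a * b for a, b in zip(u, z)) >= 0 else 0

def point_key(u, v, x):
    return (sign_bit(u, x), sign_bit(v, x))

def query_key(u, v, w):
    # Negate the second bit for the hyperplane normal w: collision
    # probability becomes (1 - t/pi) * (t/pi) for angle t between w and x,
    # which is maximal at t = pi/2, i.e. for points ON the hyperplane.
    return (sign_bit(u, w), 1 - sign_bit(v, w))

random.seed(0)
dim = 16
w = [random.gauss(0, 1) for _ in range(dim)]         # hyperplane normal
near = [random.gauss(0, 1) for _ in range(dim)]
# Project out the w-component to place `near` (almost) on the hyperplane.
dot = sum(a * b for a, b in zip(near, w))
norm2 = sum(a * a for a in w)
near = [a - dot / norm2 * b for a, b in zip(near, w)]
far = w[:]                  # parallel to the normal: far from the hyperplane

def collisions(x, trials=2000):
    hits = 0
    for _ in range(trials):
        u = [random.gauss(0, 1) for _ in range(dim)]
        v = [random.gauss(0, 1) for _ in range(dim)]
        hits += point_key(u, v, x) == query_key(u, v, w)
    return hits

assert collisions(near) > collisions(far)
```
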

  19. Two-factor authentication system based on optical interference and one-way hash function

    Science.gov (United States)

    He, Wenqi; Peng, Xiang; Meng, Xiangfeng; Liu, Xiaoli

    2012-10-01

    We present a two-factor authentication method to verify the identity of a person who tries to access an optoelectronic system. This method is based on the optical interference principle and a traditional one-way Hash function (e.g. MD5). The authentication process is straightforward: the phase key and the password-controlled phase lock of one user are loaded on two Spatial Light Modulators (SLMs) in advance, by which two coherent beams are modulated and then interfere with each other at the output plane, leading to an output image. By comparing the output image with all the standard certification images in the database, the system can thus verify the user's identity. The system design process, however, involves an iterative Modified Phase Retrieval Algorithm (MPRA). For an authorized user, a phase lock is first created based on a "Digital Fingerprint (DF)", which is the result of a Hash function applied to a preselected user password. The corresponding phase key can then be determined by use of the phase lock and a designated standard certification image. Note that the encoding/design process can only be realized by digital means, while the authentication process could be achieved digitally or optically. Computer simulations are also given to validate the proposed approach.

  20. On Authentication with HMAC and Non-random Properties

    Science.gov (United States)

    Rechberger, Christian; Rijmen, Vincent

    MAC algorithms can provide cryptographically secure authentication services. One of the most popular algorithms in commercial applications is HMAC based on the hash functions MD5 or SHA-1. In the light of new collision search methods for members of the MD4 family including SHA-1, the security of HMAC based on these hash functions is reconsidered.
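
    Independent of the attacks reconsidered in this record, the HMAC construction itself is straightforward to use. A standard-library sketch with HMAC-SHA1 (the variant discussed here), including the constant-time comparison a receiver should use:

```python
import hmac
import hashlib

key = b"shared-secret"
msg = b"transfer 100 to alice"

# Sender computes the authentication tag over the message.
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

# Receiver recomputes the tag; compare_digest resists timing attacks.
ok = hmac.compare_digest(
    tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
bad = hmac.compare_digest(
    tag, hmac.new(key, b"transfer 999 to eve", hashlib.sha1).hexdigest())
assert ok and not bad
```
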

  1. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    Dijck, van Harry; Heijden, van der Ferdinand

    2003-01-01

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the p

  2. Hash function based on the generalized Henon map

    Institute of Scientific and Technical Information of China (English)

    Zheng Fan; Tian Xiao-Jian; Li Xue-Yan; Wu Bin

    2008-01-01

    A new Hash function based on the generalized Henon map is proposed. We have obtained a binary sequence with excellent pseudo-random characteristics by improving the sequence generated by the generalized Henon map, and use it to construct the Hash function. First we divide the message into groups, and then carry out the XOR operation between the ASCII value of each group and the binary sequence; the result is used as the initial values of the next loop. The procedure is repeated until all the groups have been processed, and the final binary sequence is the Hash value. In the scheme, the initial values of the generalized Henon map are used as the secret key and messages are mapped to Hash values with a designated length. Simulation results show that the proposed scheme has strong diffusion and confusion capability, good collision resistance, a large key space, extreme sensitivity to message and secret key, and it is easy to realize and extend.
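
    The overall construction (a chaos-derived keystream XORed with message blocks, with the mixed result fed back as the map's initial conditions) can be sketched as follows. For brevity and numerical safety, the logistic map stands in for the generalized Henon map, so this illustrates only the structure of the cited scheme, not the scheme itself:

```python
def logistic_bits(x, n):
    """Iterate the logistic map x -> 4x(1-x) and threshold each state to
    obtain n pseudo-random bits; returns the bits and the final state."""
    bits = 0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits = (bits << 1) | (1 if x > 0.5 else 0)
    return bits, x

def chaotic_hash(message: bytes, key=0.37, block=4, out_bits=32):
    """Keyed hash: the initial condition is the secret key; each message
    block is XORed with a chaos-derived keystream and fed back into the
    map's state, so any change diffuses into all later iterations."""
    x = key
    acc = 0
    for i in range(0, len(message), block):
        chunk = int.from_bytes(message[i:i + block].ljust(block, b"\0"), "big")
        ks, x = logistic_bits(x, 8 * block)
        acc ^= chunk ^ ks
        x = (acc % 65521 + 1) / 65523.0   # quantized feedback into (0, 1)
    return logistic_bits(x, out_bits)[0]
```
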

  4. Formalizing the Relationship Between Commitment and Basic Cryptographic Primitives

    Directory of Open Access Journals (Sweden)

    S. Sree Vivek

    2016-11-01

    Full Text Available Signcryption is a cryptographic primitive which offers the functionality of both digital signature and encryption with lower combined computational cost. On the other hand, a commitment scheme allows an entity to commit to a value, which the entity reveals later during a decommit phase. In this paper, we explore the connection between commitment schemes, public key encryption, digital signatures and signcryption, and establish formal relationships between commitment and the other primitives. Our main result is that signcryption can be used as a commitment scheme with appropriate security notions. We show that if the underlying signcryption scheme is IND-CCA2 secure, then the hiding property of the commitment scheme is satisfied. Similarly, we show that if the underlying signcryption scheme is unforgeable, then the relaxed binding property of the commitment scheme is satisfied. Moreover, we prove that if the underlying signcryption scheme is NM-CCA2, then the commitment scheme is non-malleable.

  5. A Robust Hash Function Using Cross-Coupled Chaotic Maps with Absolute-Valued Sinusoidal Nonlinearity

    Directory of Open Access Journals (Sweden)

    Wimol San-Um

    2016-01-01

    Full Text Available This paper presents a compact and effective chaos-based keyed hash function implemented by a cross-coupled topology of chaotic maps, which employs an absolute-value sinusoidal nonlinearity and offers robust chaotic regions over broad parameter spaces with a high degree of randomness, verified through chaoticity measurements using the Lyapunov exponent. Hash function operation involves an initial stage, when the chaotic map accepts the initial conditions, and a hashing stage, which accepts input messages and generates alterable-length hash values. Hashing performance is evaluated in terms of original message condition changes, statistical analyses, and collision analyses. The results show that the mean changed probabilities are very close to 50%, and the mean number of bit changes is also close to half of the hash value length. The collision tests reveal that the mean absolute difference of character values for hash values of 128, 160 and 256 bits is close to the ideal value of 85.43. The proposed keyed hash function enhances collision resistance compared with MD5 and SHA1, as well as other, more complicated chaos-based approaches. An implementation of the hash function as an Android application is demonstrated.

  6. Efficient Query-by-Content Audio Retrieval by Locality Sensitive Hashing and Partial Sequence Comparison

    Science.gov (United States)

    Yu, Yi; Joe, Kazuki; Downie, J. Stephen

    This paper investigates suitable indexing techniques to enable efficient content-based audio retrieval in large acoustic databases. To make an index-based retrieval mechanism applicable to audio content, we investigate the design of Locality Sensitive Hashing (LSH) and partial sequence comparison. We propose a fast and efficient query-by-content audio retrieval framework and develop an audio retrieval system. Based on this framework, four different audio retrieval schemes, LSH-Dynamic Programming (DP), LSH-Sparse DP (SDP), Exact Euclidean LSH (E2LSH)-DP and E2LSH-SDP, are introduced and evaluated in order to better understand the performance of audio retrieval algorithms. The experimental results indicate that, compared with traditional DP and the other three competitive schemes, E2LSH-SDP exhibits the best tradeoff in terms of response time, retrieval accuracy and computation cost.

  7. A Contribution to Secure the Routing Protocol "Greedy Perimeter Stateless Routing" Using a Symmetric Signature-Based AES and MD5 Hash

    CERN Document Server

    Erritali, Mohammed; Ouahidi, Bouabid El; 10.5121/ijdps.2011.2509

    2011-01-01

    This work presents a contribution to securing the routing protocol GPSR (Greedy Perimeter Stateless Routing) for vehicular ad hoc networks. We examine the possible attacks against GPSR and the security solutions proposed by different research teams working on ad hoc network security. We then propose a solution that secures GPSR packets by adding a digital signature based on symmetric cryptography, generated using the AES algorithm and the MD5 hash function, which is better suited to a mobile environment.

  8. A Novel Steganographic Scheme Based on Hash Function Coupled With AES Encryption

    Directory of Open Access Journals (Sweden)

    Rinu Tresa M J

    2014-04-01

    Full Text Available In the present scenario, the use of images has increased enormously in the cyber world, so that we can easily transfer data with the help of these images in a secure way. Image steganography therefore becomes important. Steganography and cryptography are two techniques that are often confused with each other. The input and output of steganography look alike, whereas the output of cryptography is in an encrypted form, which always draws the attacker's attention. This paper combines both steganography and cryptography so that the attacker does not know about the existence of the message, and the message itself is encrypted to ensure more security. The textual data entered by the user is encrypted using the AES algorithm. After encryption, the encrypted data is stored in a colour image using a hash-based algorithm. Most steganographic algorithms available today are suitable only for a specific image format, and these algorithms suffer from poor quality of the embedded image. The proposed work does not corrupt the image quality in any form. Its striking feature is that the algorithm is suitable for almost all image formats, e.g. JPEG, Bitmap, TIFF and GIF.

  9. Performance Analysis of Image Content Identification on Perceptual Hashing

    Institute of Scientific and Technical Information of China (English)

    潘辉; 郑刚; 胡晓惠; 马恒太

    2012-01-01

    Perceptual hashing can effectively distinguish images of different content and is commonly used in content identification for detecting pirated or duplicate images. Published research on perceptual-hashing-based image content identification has focused mainly on the design of perceptual hashing algorithms, and few theoretical methods have been developed for evaluating identification performance. Based on the anti-collision property of image perceptual hashing, this paper introduces a mathematical model of content identification, defines performance formulas for content identification, and derives statistical distribution functions as performance evaluation indicators. Experimental results show that the proposed approach estimates the performance of content identification on perceptual hashing very well.

  10. Study of the similarity function in Indexing-First-One hashing

    Science.gov (United States)

    Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.

    2017-06-01

    The recently proposed Indexing-First-One (IFO) hashing is a technique particularly adopted for eye iris template protection, i.e. IrisCode. However, the fact that IFO employs the Jaccard Similarity (JS) measure, which originates from Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and propose a mathematical formulation to generalize the usage of JS, which is subsequently verified using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version of the measure between two input objects in Min-hashing, where the coefficient of JS equals one. With this understanding, IFO hashing can propagate the useful properties of Min-hashing, i.e. similarity preservation, which is favorable for similarity searching or recognition in binary space.
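The relation between Jaccard similarity and Min-hashing discussed above can be sketched as follows. The signature length and example sets below are illustrative; the seeded-hash construction stands in for random permutations:

```python
import random

def jaccard(a, b):
    # Exact Jaccard similarity |A ∩ B| / |A ∪ B| between two sets.
    return len(a & b) / len(a | b)

def minhash_signature(s, num_hashes=200, seed=1):
    # MinHash signature: for each of num_hashes seeded hash functions
    # (simulating random permutations), keep the minimum hash over the set.
    rng = random.Random(seed)
    seeds = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((k, x)) for x in s) for k in seeds]

def estimated_jaccard(sig_a, sig_b):
    # The fraction of matching signature positions estimates Jaccard similarity.
    matches = sum(1 for x, y in zip(sig_a, sig_b) if x == y)
    return matches / len(sig_a)

A = set(range(0, 60))
B = set(range(20, 80))      # true Jaccard = 40 / 80 = 0.5
est = estimated_jaccard(minhash_signature(A), minhash_signature(B))
```

The estimate concentrates around the true value as the signature length grows, which is the similarity-preservation property the abstract credits Min-hashing with.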

  11. CRYPTOGRAPHIC MEANS OF INFORMATION PROTECTION AND PSYCHOLOGICAL COMFORT OF THE USERS OF COMPUTER INFORMATIONAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Yurii A. Kotsiuk

    2010-09-01

    Full Text Available The article checks up the existence of functional relation between the level of psychological comfort of the users of computer informational systems and their awareness/skills to use cryptographic means of information protection.

  12. Security of Cooperative Intelligent Transport Systems: Standards, Threats Analysis and Cryptographic Countermeasures

    Directory of Open Access Journals (Sweden)

    Elyes Ben Hamida

    2015-07-01

    Full Text Available Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS. Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed.

  13. A Homomorphic Hashing Based Provable Data Possession Method

    Institute of Scientific and Technical Information of China (English)

    陈兰香

    2011-01-01

    In cloud storage, users need a way to verify that storage service providers actually hold (store) their data intact. This paper proposes a homomorphic-hashing-based Provable Data Possession (PDP) method. Because the hash algorithm is homomorphic, the hash value of the sum of two blocks equals the product of their hash values. In the setup stage, all data blocks and their hash values are stored. When the user challenges the storage server, the server returns the sum of the requested data blocks and the product of their hash values; the user computes the hash value of the sum of these blocks and verifies whether it equals the returned product, thereby achieving possession proof. Over the data's lifecycle, the user can perform an unlimited number of verifications. The method provides provable data possession and, at the same time, integrity protection. The user only needs to store a key K of about 520 bytes, each verification transfers only about 18 bits, and possession verification requires only one homomorphic hash computation. The security analysis and performance tests show that the method is feasible.
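The verification equation described above (hash of a sum equals the product of hashes) can be mimicked with a toy exponential hash H(m) = g^m mod p, which is homomorphic in exactly that sense. The modulus, base, and block values below are illustrative, not the paper's parameters:

```python
# Toy homomorphic hash: H(m) = G^m mod P, so H(m1 + m2) == H(m1) * H(m2) mod P.
P = 2**127 - 1          # a Mersenne prime used as a toy modulus (assumption)
G = 5                   # base for the sketch (assumption)

def hhash(block):
    return pow(G, block, P)

def server_response(blocks, indices):
    # Server side: return the sum of the challenged blocks and the
    # product of their stored hash values.
    s = sum(blocks[i] for i in indices)
    proof = 1
    for i in indices:
        proof = proof * hhash(blocks[i]) % P
    return s, proof

def verify(s, proof):
    # Client side: check H(sum of blocks) against the product of hashes.
    return hhash(s) == proof

blocks = [1234, 5678, 91011, 121314]
s, proof = server_response(blocks, [0, 2, 3])
```

A real PDP scheme additionally keys the hash and randomizes challenges so the server cannot precompute answers; this sketch only shows the homomorphic check itself.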

  14. Modeling Conservative Updates in Multi-Hash Approximate Count Sketches

    OpenAIRE

    2012-01-01

    Multi-hash-based count sketches are fast and memory efficient probabilistic data structures that are widely used in scalable online traffic monitoring applications. Their accuracy significantly improves with an optimization, called conservative update, which is especially effective when the aim is to discriminate a relatively small number of heavy hitters in a traffic stream consisting of an extremely large number of flows. Despite its widespread application, a thorough u...

  15. Simultaneous binary hash and feature learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have many applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task; the main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.

  16. The suffix-free-prefix-free hash function construction and its indifferentiability security analysis

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Knudsen, Lars R.

    2012-01-01

    In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value ($IV$) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic $n$-bit iterated hash function framework based on an $n$-bit compression function, called suffix-free-prefix-free (SFPF), that works for arbitrary $IV$s and does not possess MD strengthening. We formally prove that SFPF...

  17. Image Hash Based on Discrete Curvelet Transform

    Institute of Scientific and Technical Information of China (English)

    徐文娟; 易波

    2011-01-01

    In order to improve the robustness of image Hash algorithms, a new image Hash algorithm based on the discrete curvelet transform is proposed. The image is first preprocessed and then decomposed with the fast discrete curvelet transform via wrapping. The low-frequency curvelet coefficients, which contain the main features of the image, and the coefficients of the two detail layers, which contain rich edge information, are taken as the feature vector. The Logistic equation is then used to encrypt the feature vector. Finally, the image Hash sequence is obtained by quantization and compression. Experimental results show that the algorithm is more robust than existing traditional algorithms, effectively distinguishes different images (fragility), and that the introduction of the chaotic system makes the algorithm secure.

  18. Hash Based Least Significant Bit Technique For Video Steganography

    Directory of Open Access Journals (Sweden)

    Prof. Dr. P. R. Deshmukh ,

    2014-01-01

    Full Text Available The hash-based least significant bit technique for video steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it includes processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication, or the science of invisible communication. In this paper, a hash-based least significant bit (LSB) technique for video steganography is proposed, whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. LSB insertion is used for steganography, embedding data in the cover video by changing only the lower bit; this change is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method is evaluated with two measures computed between the original and steganographic video files over all video frames: the Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE), with distortion measured using the PSNR. A hash function is used to select the positions at which the bits of the secret message are inserted into the LSBs.
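The embed/extract flow described above can be sketched as follows. The position-selection hash and the flat byte-array frame model are simplifications assumed for illustration, not the paper's exact scheme:

```python
def hash_positions(key, n, frame_len):
    # Toy stand-in for the hash function that selects embedding positions:
    # derive n distinct positions from the stego key.
    seen, out, i = set(), [], 0
    while len(out) < n:
        p = hash((key, i)) % frame_len
        i += 1
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out

def embed_lsb(frame, message_bits, positions):
    # Write each message bit into the least significant bit of the
    # selected frame byte; each byte changes by at most 1.
    out = list(frame)
    for bit, pos in zip(message_bits, positions):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract_lsb(frame, positions, n):
    # Read the LSBs back from the same positions.
    return [frame[p] & 1 for p in positions[:n]]

frame = [200, 13, 255, 0, 97, 164, 33, 120]   # one tiny "frame" of pixel bytes
bits = [1, 0, 1, 1]
pos = hash_positions("stego-key", len(bits), len(frame))
stego = embed_lsb(frame, bits, pos)
```

Because only LSBs move, per-byte distortion is bounded by 1, which is why PSNR between cover and stego frames stays high.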

  19. Hetero-manifold Regularisation for Cross-modal Hashing.

    Science.gov (United States)

    Zheng, Feng; Tang, Yi; Shao, Ling

    2016-12-28

    Recently, cross-modal search has attracted considerable attention but remains a very challenging task because of the integration complexity and heterogeneity of multi-modal data. To address both challenges, in this paper we propose a novel method termed hetero-manifold regularisation (HMR) to supervise the learning of hash functions for efficient cross-modal search. A hetero-manifold integrates multiple sub-manifolds defined by homogeneous data with the help of cross-modal supervision information. Taking advantage of the hetero-manifold, the similarity between each pair of heterogeneous data can be naturally measured by three-order random walks on this hetero-manifold. Furthermore, a novel cumulative distance inequality defined on the hetero-manifold is introduced to avoid the computational difficulty induced by the discreteness of hash codes. Using the inequality, cross-modal hashing is transformed into a problem of hetero-manifold regularised support vector learning. Therefore, the performance of cross-modal search can be significantly improved by seamlessly combining the integrated information of the hetero-manifold and the strong generalisation of the support vector machine. Comprehensive experiments show that the proposed HMR achieves advantageous results over state-of-the-art methods in several challenging cross-modal tasks.

  20. Mining histopathological images via composite hashing and online learning.

    Science.gov (United States)

    Zhang, Xiaofan; Yang, Lin; Liu, Wei; Su, Hai; Zhang, Shaoting

    2014-01-01

    With a continuously growing amount of annotated histopathological images, large-scale, data-driven methods hold the promise of bridging the semantic gap between these images and their diagnoses. The purpose of this paper is to increase the scale at which automated systems can perform analysis of histopathological images in massive databases. Specifically, we propose a principled framework to unify hashing-based image retrieval and supervised learning. Concretely, composite hashing is designed to simultaneously fuse and compress multiple high-dimensional image features into tens of binary hash bits, enabling scalable image retrieval at a very low computational cost. Upon a local data subset that retains the retrieved images, supervised learning methods are applied on the fly to model image structures for accurate classification. Our framework is validated thoroughly on 1120 lung microscopic tissue images by differentiating adenocarcinoma and squamous carcinoma. The average accuracy is 87.5% with only 17 ms running time, which compares favorably with other commonly used methods.

  1. Secure Minutiae-Based Fingerprint Templates Using Random Triangle Hashing

    Science.gov (United States)

    Jin, Zhe; Jin Teoh, Andrew Beng; Ong, Thian Song; Tee, Connie

    Due to privacy concerns over the widespread use of biometric authentication systems, biometric template protection has recently gained great attention in biometric research. It is a challenging task to design a biometric template protection scheme that is anonymous, revocable and non-invertible while maintaining acceptable performance. Many methods have been proposed to resolve this problem, and cancelable biometrics is one of them. In this paper, we propose a scheme coined Random Triangle Hashing, which follows the concept of cancelable biometrics in the fingerprint domain. In this method, re-alignment of fingerprints is not required, as all the minutiae are translated into a pre-defined two-dimensional space based on a reference minutia. After that, the proposed Random Triangle hashing method is used to enforce the one-way property (non-invertibility) of the biometric template. The proposed method is resistant to minor translation error and rotation distortion. Finally, the hash vectors are converted into bit-strings to be stored in the database. The proposed method is evaluated using the public database FVC2004 DB1, and an EER of less than 1% is achieved.

  2. Control of Chaotic Regimes in Encryption Algorithm Based on Dynamic Chaos

    OpenAIRE

    Sidorenko, V.; Mulyarchik, K. S.

    2013-01-01

    A chaotic regime of a dynamic system is a necessary condition for the cryptographic security of an encryption algorithm. A chaotic dynamic regime control method is proposed which uses the parameters of the nonlinear dynamic regime for analysis of encrypted data.

  3. Cryptographically supported NFC tags in medication for better inpatient safety.

    Science.gov (United States)

    Özcanhan, Mehmet Hilal; Dalkılıç, Gökhan; Utku, Semih

    2014-08-01

    Reliable sources report that errors in drug administration are increasing the number of inpatients harmed or killed during healthcare, in contradiction to patient safety norms. A correctly designed hospital-wide ubiquitous system, using advanced inpatient identification and matching techniques, should provide the correct medicine and dosage at the right time. Researchers are still making grouping-proof protocol proposals for drug administration based on EPC Global Class 1 Generation 2 ver. 1.2 standard tags. Analyses show that such protocols make medication insecure and hence fail to guarantee inpatient safety; thus, the original goal of patient safety remains. In this paper, a very recent proposal (EKATE), upgraded with a cryptographic function, is shown to fall short of expectations. Then, an alternative proposal, IMS-NFC, which uses a more suitable and newer technology, namely Near Field Communication (NFC), is described. The proposed protocol has the additional support of stronger security primitives and is compliant with ISO communication and security standards. Unlike previous works, the proposal is a complete ubiquitous system that guarantees full patient safety, and it is based on off-the-shelf, new-technology products available in every corner of the world. To prove the claims, the performance, cost, security and scope of IMS-NFC are compared with previous proposals. Evaluation shows that the proposed system has stronger security, increased patient safety and equal efficiency, at little extra cost.

  4. Physically unclonable cryptographic primitives using self-assembled carbon nanotubes

    Science.gov (United States)

    Hu, Zhaoying; Comeras, Jose Miguel M. Lobez; Park, Hongsik; Tang, Jianshi; Afzali, Ali; Tulevski, George S.; Hannon, James B.; Liehr, Michael; Han, Shu-Jen

    2016-06-01

    Information security underpins many aspects of modern society. However, silicon chips are vulnerable to hazards such as counterfeiting, tampering and information leakage through side-channel attacks (for example, by measuring power consumption, timing or electromagnetic radiation). Single-walled carbon nanotubes are a potential replacement for silicon as the channel material of transistors due to their superb electrical properties and intrinsic ultrathin body, but problems such as limited semiconducting purity and non-ideal assembly still need to be addressed before they can deliver high-performance electronics. Here, we show that by using these inherent imperfections, an unclonable electronic random structure can be constructed at low cost from carbon nanotubes. The nanotubes are self-assembled into patterned HfO2 trenches using ion-exchange chemistry, and the width of the trench is optimized to maximize the randomness of the nanotube placement. With this approach, two-dimensional (2D) random bit arrays are created that can offer ternary-bit architecture by determining the connection yield and switching type of the nanotube devices. As a result, our cryptographic keys provide a significantly higher level of security than conventional binary-bit architecture with the same key size.

  5. Deciding security properties for cryptographic protocols. Application to key cycles

    CERN Document Server

    Comon-Lundh, Hubert; Zalinescu, Eugen

    2007-01-01

    There has been a growing interest in applying formal methods for validating cryptographic protocols, and many results have been obtained. In this paper, we re-investigate and extend the NP-complete decision procedure for a bounded number of sessions of Rusinowitch and Turuani. In this setting, constraint systems are now a standard for modeling security protocols. We provide a generic approach to decide general security properties by showing that any constraint system can be transformed into (possibly several) much simpler constraint systems that are called solved forms. As a consequence, we prove that deciding the existence of key cycles is NP-complete for a bounded number of sessions. Indeed, many recent results are concerned with interpreting proofs of security done in symbolic models in the more detailed models of computational cryptography. In the case of symmetric encryption, these results stringently demand that no key cycle (e.g. $\{k\}_k$) can be produced during the execution of protocols. We show...

  6. DESIGN OF A NEW SECURITY PROTOCOL USING HYBRID CRYPTOGRAPHY ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Dr. S. Subasree and Dr. N. K. Sakthivel

    2010-02-01

    Full Text Available A computer network is an interconnected group of autonomous computing nodes which use a well-defined, mutually agreed set of rules and conventions, known as protocols, to interact with one another meaningfully and allow resource sharing, preferably in a predictable and controllable manner. Communication has a major impact on today's business, and it is desirable to communicate data with high security. Security attacks compromise security, and hence various symmetric and asymmetric cryptographic algorithms have been proposed to achieve security services such as authentication, confidentiality, integrity, non-repudiation and availability. At present, various types of cryptographic algorithms provide high security for information on controlled networks. These algorithms are required to provide data security and user authenticity. To improve the strength of these security algorithms, a new security protocol for online transactions can be designed using a combination of symmetric and asymmetric cryptographic techniques. This protocol provides three cryptographic primitives, integrity, confidentiality and authentication, achieved with the help of Elliptic Curve Cryptography, the Dual-RSA algorithm and the MD5 message digest. That is, it uses Elliptic Curve Cryptography for encryption, the Dual-RSA algorithm for authentication and MD5 for integrity. This new security protocol has been designed for better security with integrity using a combination of both symmetric and asymmetric cryptographic techniques.

  7. Design and performance analysis of parallel chaos-based hash function with changeable parameter

    Institute of Scientific and Technical Information of China (English)

    冯艳茹; 李艳涛; 肖迪

    2011-01-01

    This paper proposes a parallel chaos-based hash function construction with a changeable parameter, built on a chaotic piecewise linear map. The parallelism of the hash function is achieved through message expansion, which associates the elements of the plain-message matrix processed in parallel. Intermediate hash values for the message are generated by iterating the chaotic piecewise linear map, whose changeable parameter is determined by the position index of each matrix element, while the element's ASCII code value sets the number of iterations. The final 128-bit hash value is obtained by XORing the intermediate hash values. Computer simulations indicate that the algorithm exhibits one-wayness, confusion and diffusion, and collision resistance, and satisfies the performance requirements of a one-way hash function.
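The construction described above can be sketched with a tent-like piecewise linear map. The parameter derivation from the position index, the iteration count from the ASCII value, and the XOR combination below are illustrative stand-ins for the paper's exact design:

```python
def pwl_map(x, p):
    # Piecewise linear chaotic (tent-like) map on [0, 1] with parameter p in (0, 1).
    if x < p:
        return x / p
    return (1.0 - x) / (1.0 - p)

def chaos_hash_byte(ch, position, x0=0.2357):
    # Sketch of the abstract's idea: the map parameter comes from the
    # character's position, and its ASCII value sets the iteration count.
    p = 0.1 + 0.8 * ((position % 97) / 97.0)   # changeable parameter (assumption)
    x = x0
    for _ in range(32 + ch):                   # ASCII value drives iterations
        x = pwl_map(x, p)
    return int(x * 255) & 0xFF

def chaos_hash(message):
    # Combine per-character intermediate values by XOR into a 16-byte digest.
    digest = [0] * 16
    for i, c in enumerate(message.encode()):
        digest[i % 16] ^= chaos_hash_byte(c, i)
    return bytes(digest)
```

Note that XOR-combining independent per-character values makes the sketch trivially parallel over message positions, which is the property the paper's message expansion is designed to preserve; a real design would also chain state to resist permutation collisions.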

  8. Cryptanalysis of the 10-Round Hash and Full Compression Function of SHAvite-3-512

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Leurent, Gaëtan; Mendel, Florian;

    2010-01-01

    In this paper, we analyze the SHAvite-3-512 hash function, as proposed for round 2 of the SHA-3 competition. We present cryptanalytic results on 10 out of 14 rounds of the hash function SHAvite-3-512, and on the full 14-round compression function of SHAvite-3-512. We show a second preimage attack on ...

  9. Linear-XOR and Additive Checksums Don't Protect Damgard-Merkle Hashes

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John

    2008-01-01

    We consider the security of Damgård-Merkle variants which compute linear-XOR or additive checksums over message blocks, intermediate hash values, or both, and process these checksums in computing the final hash value. We show that these Damgård-Merkle variants gain almost no security against...

  10. Cryptanalysis of Lin et al.'s Efficient Block-Cipher-Based Hash Function

    NARCIS (Netherlands)

    Liu, Bozhong; Gong, Zheng; Chen, Xiaohong; Qiu, Weidong; Zheng, Dong

    2010-01-01

    Hash functions are widely used in authentication. In this paper, the security of Lin et al.'s efficient block-cipher-based hash function is reviewed. By using Joux's multicollisions and Kelsey et al.'s expandable message techniques, we find the scheme is vulnerable to collision, preimage and second preimage attacks.

  11. An Efficient Pattern Matching Algorithm

    Science.gov (United States)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
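Hashing-based pattern matching of the kind the abstract describes is commonly realized with a rolling hash (Rabin-Karp). The sketch below shows only the hashing half; the search-tree component of the authors' hybrid is omitted:

```python
def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    # Rolling-hash pattern search: compare window hashes first and confirm
    # with a direct string comparison only when the hashes match.
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)      # weight of the outgoing character
    ph = wh = 0
    for i in range(m):                # hash the pattern and the first window
        ph = (ph * base + ord(pattern[i])) % mod
        wh = (wh * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if wh == ph and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                 # slide the window by one character
            wh = ((wh - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits
```

Each window hash is updated in O(1), so the expected total cost is linear in the text length, with the explicit comparison guarding against hash collisions.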

  12. All-optical hash code generation and verification for low latency communications.

    Science.gov (United States)

    Paquot, Yvan; Schröder, Jochen; Pelusi, Mark D; Eggleton, Benjamin J

    2013-10-07

    We introduce an all-optical, format-transparent hash code generator and a hash comparator for data packet verification with low latency at high baud rates. The device is reconfigurable, able to generate hash codes based on arbitrary functions, and performs the comparison directly in the optical domain. Hash codes are calculated with custom interferometric circuits implemented with a Fourier-domain optical processor. A novel nonlinear scheme featuring multiple four-wave mixing processes in a single waveguide is implemented for simultaneous phase and amplitude comparison of the hash codes before and after transmission. We demonstrate the technique with single-polarisation BPSK and QPSK signals up to a data rate of 80 Gb/s.

  13. Highly scalable metadata distribution algorithm in mass storage systems

    Institute of Scientific and Technical Information of China (English)

    吴伟; 谢长生; 黄建忠; 张成峰

    2008-01-01

    Distribution of metadata across a metadata server cluster is important in mass storage systems: a good distribution algorithm has a significant influence on system performance, availability and scalability. Subtree partitioning and hashing are the two traditional metadata distribution algorithms used in distributed file systems, and both have a defect in system scalability. This paper proposes a new directory hash (DH) algorithm. By treating the directory as the key value of the hash function and employing concentrated storage of metadata, pipelined operations and prefetching, the DH algorithm enhances system scalability without sacrificing system performance.
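The directory-as-hash-key idea can be sketched as follows: hashing the parent directory rather than the full path keeps all entries of one directory on the same metadata server, which is what enables concentrated storage. The server-selection rule below is an illustrative assumption, not the paper's exact DH algorithm:

```python
import hashlib

def server_for(path, num_servers):
    # Directory-hash placement: hash the parent directory (not the full path)
    # so that all entries of one directory land on the same metadata server.
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.sha256(directory.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_servers

# Files in the same directory map to the same server;
# different directories spread across the cluster.
s1 = server_for("/home/alice/a.txt", 8)
s2 = server_for("/home/alice/b.txt", 8)
```

Compared with hashing full paths, this placement makes directory-wide operations (readdir, permission changes) single-server, at the cost of hotspots for very large directories.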

  14. Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster

    Science.gov (United States)

    Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.

    2015-10-01

    Mining frequent itemsets from massive datasets has always been one of the most important problems in data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology for processing big datasets on a Hadoop cluster. To mine big datasets, it is essential to re-design data mining algorithms for this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the data structures hash tree, trie and hash table trie (i.e. a trie with hashing) on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for Apriori on a Hadoop cluster, which has not been given attention yet. Experiments carried out on both real-life and synthetic datasets show that the hash table trie performs far better than the trie and the hash tree in terms of execution time; the hash tree performs worst.
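The role a hash table plays in Apriori candidate counting can be sketched for 2-itemsets. This plain dict-based counter is an illustrative simplification of the hash-table-trie the paper benchmarks, not its implementation:

```python
from collections import defaultdict
from itertools import combinations

def frequent_pairs(transactions, min_support):
    # Count candidate 2-itemsets in a hash table (dict): one O(1) lookup per
    # candidate occurrence, instead of walking a hash tree or trie.
    counts = defaultdict(int)
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    # Keep only pairs meeting the minimum support count.
    return {p: c for p, c in counts.items() if c >= min_support}

txns = [
    ["bread", "milk"],
    ["bread", "butter", "milk"],
    ["milk", "butter"],
    ["bread", "milk"],
]
freq = frequent_pairs(txns, min_support=2)
```

Sorting each transaction's items gives every pair a canonical key, so the same itemset always hits the same hash bucket; in a MapReduce setting this counting loop becomes the map phase, and the support filter the reduce phase.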

  15. A Real-Time Performance Analysis Model for Cryptographic Protocols

    Directory of Open Access Journals (Sweden)

    Amos Olagunju

    2012-12-01

    Full Text Available Several encryption algorithms exist today for securing data in storage and transmission over network systems. The choice of encryption algorithms must weigh performance requirements against the call for protection of sensitive data. This research investigated the processing times of alternative encryption algorithms under specific conditions. The paper presents the architecture of a model multiplatform tool for the evaluation of candidate encryption algorithms based on different data and key sizes. The model software was used to appraise the real-time performance of the DES, AES and 3DES encryption algorithms and the MD5, SHA1 and SHA2 hash algorithms.
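
    A minimal timing harness in the spirit of the paper's tool might look like the following. Python's standard hashlib covers only the digest algorithms (MD5, SHA-1, SHA-2); DES, AES and 3DES would require an external library, so they are omitted here, and the harness itself is a sketch rather than the paper's model software:

```python
import hashlib
import time

def time_digest(name: str, payload: bytes, rounds: int = 200) -> float:
    """Average seconds per digest of `payload` using algorithm `name`."""
    hashlib.new(name)  # validate the algorithm name up front
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, payload).digest()
    return (time.perf_counter() - start) / rounds

data = b"x" * (1 << 20)  # 1 MiB test message
for algo in ("md5", "sha1", "sha256"):
    print(f"{algo}: {time_digest(algo, data) * 1e3:.3f} ms")
```

    Varying the payload size (and, for ciphers, the key size) reproduces the paper's experimental axes.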

  16. Pseudorandom Numbers and Hash Functions from Iterations of Multivariate Polynomials

    CERN Document Server

    Ostafe, Alina

    2009-01-01

    Dynamical systems generated by iterations of multivariate polynomials with slow degree growth have proved to admit good estimates of exponential sums along their orbits, which in turn lead to rather strong bounds on the discrepancy of pseudorandom vectors generated by these iterations. Here we add new arguments to our original approach and also extend some of our recent constructions and results to more general orbits of polynomial iterations, which may involve distinct polynomials as well. Using this construction we design a new class of hash functions from iterations of polynomials and use our estimates to motivate their "mixing" properties.

  17. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    Science.gov (United States)

    Yang, Heng; Wang, Qing; He, Zhoucan

    The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we treat the quantization step as a nearest neighbor search in a large visual vocabulary, and thus propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to the vocabulary-tree-based methods, which represent the state of the art. Consequently, the object retrieval performance on large-scale databases can be significantly improved by our method.

  18. CRYPTANALYSIS OF HASH FUNCTIONS BASED ON CHAOTIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    谭雪; 周琥; 王世红

    2016-01-01

    With the development of modern cryptology, hash functions play an increasingly important role. In this paper, we analyse the security of two hash algorithms: a parallel hash function construction based on a coupled map lattice, and a keyed serial hash function based on a dynamic lookup table. For the former, we find that the coupled map lattice leads to a structural defect in the algorithm: when the block index and block message satisfy a specific constraint, the intermediate hash value for that block index and block message can be given directly, without complicated computation. For the latter, we analyse the constraint condition on the buffer state under which a collision is produced. Under this condition, the cost of finding an output collision of the algorithm is O(2^100), much higher than that of the birthday attack.

  19. An Efficient Trajectory Data Index Integrating R-tree, Hash and B*-tree

    Directory of Open Access Journals (Sweden)

    GONG Jun

    2015-05-01

    Full Text Available To balance efficiency and query capability, this paper presents a new trajectory data index named the HBSTR-tree. In the HBSTR-tree, trajectory sample points are stored sequentially in trajectory nodes. A hash table indexes the most recent trajectory nodes of mobile targets, and trajectory nodes are not inserted into the spatio-temporal R-tree until full, which enhances generation performance. Meanwhile, a one-dimensional index of trajectory nodes in the form of a B*-tree is built, so the HBSTR-tree can satisfy both spatio-temporal queries and target-trajectory queries. To improve search efficiency, a new criterion for the spatio-temporal R-tree and a new node-selection sub-algorithm are put forward, which further optimize the insertion algorithm of the spatio-temporal R-tree. Furthermore, a database storage scheme for the spatio-temporal R-tree is also proposed. Experimental results prove that the HBSTR-tree outperforms current methods in several aspects, such as generation efficiency, query performance and supported query types, and supports real-time updates and efficient access to huge trajectory databases.

  20. Abstraction for Epistemic Model Checking of Dining Cryptographers-based Protocols

    CERN Document Server

    Al-Bataineh, Omar I

    2010-01-01

    The paper describes an abstraction for protocols that are based on multiple rounds of Chaum's Dining Cryptographers protocol. It is proved that the abstraction preserves a rich class of specifications in the logic of knowledge, including specifications describing what an agent knows about other agents' knowledge. This result can be used to optimize model checking of Dining Cryptographers-based protocols, and applied within a methodology for knowledge-based program implementation and verification. Some case studies of such an application are given, for a protocol that uses the Dining Cryptographers protocol as a primitive in an anonymous broadcast system. Performance results are given for model checking knowledge-based specifications in the concrete and abstract models of this protocol, and some new conclusions about the protocol are derived.

  1. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    Science.gov (United States)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed a SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: no secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols, neither in the form of a standard binary key, nor as secret information hidden in random analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune to many known hardware and software attacks, including malware, side-channel, invasive, and modeling attacks.

  2. A Partially Non-Cryptographic Security Routing Protocol in Mobile Ad Hoc Networks

    Institute of Scientific and Technical Information of China (English)

    CHEN Jing; CUI Guohua

    2006-01-01

    In this paper, we propose a partially non-cryptographic security routing protocol (PNCSR) that protects both routing and data-forwarding operations through the same reactive approach. PNCSR applies a public-key cryptographic system only to token management; it does not apply any cryptographic primitives to routing messages. In PNCSR, each node is treated fairly: local neighboring nodes collaboratively monitor and sustain each other. PNCSR also uses a novel credit strategy that additively increases the token lifetime each time a node renews its token. We analyze the storage, computation, and communication overhead of PNCSR, and provide a simple yet meaningful overhead comparison. Finally, the simulation results show the effectiveness of PNCSR in various situations.

  3. Trial encoding algorithms ensemble.

    Science.gov (United States)

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components of cryptography and lossless bit compression. Symmetric encryption is accomplished by mixing randomization and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating non-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization; one simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding, but can be conveniently implemented without a tree structure and improved with byte concatenation.
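
    A running-summation (addition chain) hash can be illustrated with a toy version. The seed constant and the multiplicative mixing step below are invented for illustration; the paper's exact chain and randomization are not reproduced here, and such a hash is not collision resistant:

```python
def running_sum_hash(data: bytes, seed: int = 0x9E3779B9) -> int:
    """Toy addition-chain hash: a running summation of data bytes,
    mixed with a seed. Illustrative only -- not collision resistant."""
    state = seed
    for b in data:
        # Add the byte, then multiply so that byte order matters
        # (a plain sum would collide on anagrams).
        state = (state + b) * 31 % (1 << 32)
    return state

print(hex(running_sum_hash(b"hello")))
```

    The multiply step is what distinguishes this from a pure checksum: reordering the input bytes changes the result.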

  4. Efficient Secured Hash Based Password Authentication in Multiple Websites

    Directory of Open Access Journals (Sweden)

    T.S.Thangavel,

    2010-08-01

    Full Text Available Most commercial web sites rely on a relatively weak form of password authentication: the browser simply sends a user's plaintext password to a remote web server, often over the secure sockets layer. Even when used over an encrypted connection, this form of password authentication is vulnerable to attack. In common password attacks, hackers exploit the fact that web users often use the same password at many different sites. This allows hackers to break into a low-security site that simply stores usernames and passwords in the clear and use the retrieved passwords at a high-security site. This work developed an improved secure hash function whose security is directly related to the syndrome decoding problem from the theory of error-correcting codes. The proposal designs and develops a user interface and implements a browser extension, password hash, that strengthens web password authentication. Providing customized passwords can reduce the threat of password attacks with no server changes and little or no change to the user experience. The proposed techniques are designed to transparently provide novice users with the benefits of password practices that are otherwise only feasible for security experts. Experiments were done with Internet Explorer and Firefox implementations.
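
    The customized-password idea (one derived password per site) can be sketched as follows. SHA-256 is used here as a stand-in for the paper's syndrome-decoding-based hash, and the function name and encoding choices are illustrative:

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 12) -> str:
    """Derive a per-site password as hash(master || domain).

    A break-in at one low-security site reveals only that site's
    derived password, not the master password or the passwords
    derived for other sites.
    """
    digest = hashlib.sha256(f"{master}:{domain}".encode()).digest()
    return base64.b64encode(digest).decode()[:length]

print(site_password("correct horse", "low-security.example"))
print(site_password("correct horse", "bank.example"))
```

    A browser extension applying this transparently on form submission gives the "no server changes" property the abstract describes, since each site just sees an ordinary password.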

  5. Content-based image hashing using wave atoms

    Institute of Scientific and Technical Information of China (English)

    Liu Fang; Leung Hon-Yin; Cheng Lee-Ming; Ji Xiao-Yong

    2012-01-01

    It is well known that robustness, fragility, and security are three important criteria of image hashing; however, how to build a system that strongly meets all three criteria is still a challenge. In this paper, a content-based image hashing scheme using wave atoms is proposed which satisfies the above criteria. Compared with traditional transforms like the wavelet transform and the discrete cosine transform (DCT), the wave atom transform is adopted for its sparser expansion and better texture feature extraction, which shows better performance in both robustness and fragility. In addition, multi-frequency detection is presented to provide an application-defined trade-off. To ensure the security of the proposed approach and its resistance to a chosen-plaintext attack, a randomized pixel modulation based on the Rényi chaotic map is employed, combined with the nonlinear wave atom transform. The experimental results reveal that the proposed scheme is robust against content-preserving manipulations and has a good discriminative capability against malicious tampering.

  6. Performance Impacts of Lower-Layer Cryptographic Methods in Mobile Wireless Ad Hoc Networks

    Energy Technology Data Exchange (ETDEWEB)

    VAN LEEUWEN, BRIAN P.; TORGERSON, MARK D.

    2002-10-01

    In high-consequence systems, all layers of the protocol stack need security features. If network- and data-link-layer control messages are not secured, a network may be open to adversarial manipulation. The open nature of the wireless channel makes mobile wireless ad hoc networks (MANETs) especially vulnerable to control-plane manipulation. The objective of this research is to investigate MANET performance issues when cryptographic processing delays are applied at the data-link layer. The results of the analysis are combined with modeling and simulation experiments to show that network performance in MANETs is highly sensitive to cryptographic overhead.

  7. An Extended Image Hashing Concept: Content-Based Fingerprinting Using FJLT

    Directory of Open Access Journals (Sweden)

    Xudong Lv

    2009-01-01

    Full Text Available Dimension reduction techniques, such as singular value decomposition (SVD and nonnegative matrix factorization (NMF, have been successfully applied in image hashing by retaining the essential features of the original image matrix. However, a concern of great importance in image hashing is that no single solution is optimal and robust against all types of attacks. The contribution of this paper is threefold. First, we introduce a recently proposed dimension reduction technique, referred to as the Fast Johnson-Lindenstrauss Transform (FJLT, and propose the use of FJLT for image hashing. FJLT shares the low-distortion characteristics of a random projection, but requires much lower computational complexity. Secondly, we incorporate the Fourier-Mellin transform into FJLT hashing to improve its performance under rotation attacks. Thirdly, we propose a new concept, namely, the content-based fingerprint, as an extension of image hashing that combines different hashes. Such a combined approach is capable of tackling all types of attacks and thus can yield a better overall performance in multimedia identification. To demonstrate the superior performance of the proposed schemes, receiver operating characteristics analysis over a large image database and a large class of distortions is performed and compared with the state-of-the-art image hashing using NMF.

  8. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R.

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
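
    The three-module pipeline (compress, select, hash) can be sketched as below. The patent text does not fix concrete algorithms, so zlib, a simple byte-range selection, and SHA-256 are stand-ins, and the names are illustrative:

```python
import hashlib
import zlib

def media_keyword(media: bytes, sample: slice = slice(0, 64)) -> str:
    """Derive an encryption keyword from a media signal by
    compressing it, selecting a set of data from the compressed
    stream, and hashing that set into a keyword."""
    stream = zlib.compress(media)                 # data compression module
    selected = stream[sample]                     # data acquisition module
    return hashlib.sha256(selected).hexdigest()   # hashing module

signal = bytes(range(256)) * 100
print(media_keyword(signal))
```

    Compressing first means the keyword depends on the signal's information content rather than on its raw encoding.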

  9. Cryptanalysis of the Two-Dimensional Circulation Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Bart Preneel

    2005-07-01

    Full Text Available We analyze the security of the two-dimensional circulation encryption algorithm (TDCEA, recently published by Chen et al. in this journal. We show that there are several flaws in the algorithm and describe some attacks. We also address performance issues in current cryptographic designs.

  10. Protecting Cryptographic Keys and Functions from Malware Attacks

    Science.gov (United States)

    2010-12-01


  11. Analysis of cryptographic mechanisms used in ransomware CryptXXX v3

    Directory of Open Access Journals (Sweden)

    Michał Glet

    2016-12-01

    Full Text Available The main purpose of this paper is to analyse how malicious software uses cryptographic mechanisms. Reverse engineering was applied to discover the mechanisms used in the ransomware CryptXXX v3. Finally, some useful advice is given on how CryptXXX could be improved. Keywords: ransomware, software engineering, reverse engineering, RC4, RSA, malicious software

  12. Construction of cryptographic information protection in automated control systems for rapid reaction military forces

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2012-04-01

    Full Text Available New approaches to the conduct of military operations are analyzed. The main factors that directly affect the construction and operation of information security subsystems in prospective automated command and control military systems are described. Possible ways of constructing cryptographic information-protection subsystems in automated operation management systems for united military force groups are investigated.

  13. Analysis of cryptographic mechanisms used in ransomware CryptXXX v3

    OpenAIRE

    Michał Glet

    2016-01-01

    The main purpose of this paper is to analyse how malicious software uses cryptographic mechanisms. Reverse engineering was applied to discover the mechanisms used in the ransomware CryptXXX v3. Finally, some useful advice is given on how CryptXXX could be improved. Keywords: ransomware, software engineering, reverse engineering, RC4, RSA, malicious software

  14. Quality of Service Enhancement of Wireless Sensor Network Using Symmetric Key Cryptographic Schemes

    Directory of Open Access Journals (Sweden)

    Er. Gurjot Singh

    2014-07-01

    Full Text Available A wireless sensor network is a collection of spatially distributed independent nodes deployed in a dense environment, communicating wirelessly over limited bandwidth and frequency. Security and QoS are major concerns in wireless sensor networks because of their wireless communication nature and constraints such as low computation capability, limited memory, bounded energy resources, susceptibility to physical capture or damage, and the use of insecure wireless communication channels. These constraints make security, along with QoS, a challenge in wireless sensor networks. Cryptographic schemes increase the level of security and protect against critical attacks, but they also have a significant impact on the QoS of a wireless sensor network. In this paper, different cryptographic schemes based on asymmetric-key and symmetric-key cryptography are evaluated. The symmetric-key schemes require less processing time, less power and less storage space than the asymmetric-key schemes, and so have less impact on QoS. The QoS of the wireless sensor network under these cryptographic schemes is evaluated using metrics such as throughput, jitter, end-to-end delay, total packets received and energy consumption.

  15. Planetary Nebula Candidates Uncovered with the HASH Research Platform

    CERN Document Server

    Fragkou, Vasiliki; Frew, David; Parker, Quentin

    2016-01-01

    A detailed examination of new high quality radio catalogues (e.g. Cornish) in combination with available mid-infrared (MIR) satellite imagery (e.g. Glimpse) has allowed us to find 70 new planetary nebula (PN) candidates based on existing knowledge of their typical colors and fluxes. To further examine the nature of these sources, multiple diagnostic tools have been applied to these candidates based on published data and on available imagery in the HASH (Hong Kong/ AAO/ Strasbourg H{\\alpha} planetary nebula) research platform. Some candidates have previously-missed optical counterparts allowing for spectroscopic follow-up. Indeed, the single object spectroscopically observed so far has turned out to be a bona fide PN.

  16. Encrypted data inquiries using chained perfect hashing (CPH)

    Science.gov (United States)

    Kaabneh, Khalid; Tarawneh, Hassan; Alhadid, Issam

    2017-09-01

    Cryptography is the practice of transforming data into a form indecipherable by a third party unless a particular piece of secret information is made available to them. Data encryption has received great attention as a means of protecting data. As data sizes grow, so does the need to search data efficiently while it remains encrypted during transmission and storage. This research builds on our previous and continuing work to speed up and enhance global heuristic search over encrypted data. It uses a chained hashing approach to reduce search time and decrease the collision rate that most search techniques suffer from. The results were very encouraging and are discussed in the experimental results section.
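
    A plain separate-chaining index over deterministic keyword tokens conveys the chained-lookup idea: the server locates ciphertexts by hashed token without ever seeing the plaintext keyword. This is a generic sketch under those assumptions, not the paper's CPH construction, and the names are illustrative:

```python
import hashlib

class ChainedHashIndex:
    """Separate-chaining hash index over (token, ciphertext) pairs.

    Tokens are deterministic hashes of plaintext keywords, so
    lookups work on encrypted data without exposing the keyword.
    """
    def __init__(self, buckets: int = 64):
        self.table = [[] for _ in range(buckets)]

    def _token(self, keyword: str) -> bytes:
        return hashlib.sha256(keyword.encode()).digest()

    def insert(self, keyword: str, ciphertext: bytes) -> None:
        t = self._token(keyword)
        # Chain collisions in a per-bucket list.
        self.table[t[0] % len(self.table)].append((t, ciphertext))

    def search(self, keyword: str) -> list:
        t = self._token(keyword)
        bucket = self.table[t[0] % len(self.table)]
        return [c for tok, c in bucket if tok == t]

idx = ChainedHashIndex()
idx.insert("invoice", b"\x93\x1f-encrypted-payload")
print(idx.search("invoice"))
```

    Deterministic tokens leak keyword equality to the server; schemes like the paper's add randomization or perfect hashing to tighten both security and lookup cost.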

  17. Improved Collision Search for Hash Functions: New Advanced Message Modification

    Science.gov (United States)

    Naito, Yusuke; Ohta, Kazuo; Kunihiro, Noboru

    In this paper, we discuss the collision search for hash functions, mainly in terms of their advanced message modification. The advanced message modification is a collision search tool based on Wang et al.'s attacks. Two advanced message modifications have previously been proposed: cancel modification for MD4 and MD5, and propagation modification for SHA-0. In this paper, we propose a new concept of advanced message modification, submarine modification. As a concrete example combining the ideas underlying these modifications, we apply submarine modification to the collision search for SHA-0. As a result, we show that this can reduce the collision search attack complexity from 2^39 to 2^36 SHA-0 compression operations.

  18. Construction of secure and fast hash functions using nonbinary error-correcting codes

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Preneel, Bart

    2002-01-01

    This paper considers iterated hash functions. It proposes new constructions of fast and secure compression functions with nl-bit outputs for integers n>1, based on error-correcting codes and secure compression functions with l-bit outputs. This leads to simple and practical hash function constructions. In addition, some new attacks are presented that essentially match the presented lower bounds. The constructions allow for a large degree of internal parallelism. The limits of this approach are studied in relation to bounds derived in coding theory.

  19. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    Science.gov (United States)

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves sublinear complexity in the number of models, while maintaining a high degree of performance for real 3D sensed data acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations in the scene into the whole model database using Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively more complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and errors of the LSH algorithm. The indexed models are selected by the MAP rule using the posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data, employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models, demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
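
    The LSH step, hashing descriptors so that similar ones land in the same bucket with high probability, can be sketched with random hyperplanes, a common LSH family for cosine similarity. This is a generic illustration under that assumption, not the paper's descriptor pipeline:

```python
import random

def lsh_signature(vec, planes):
    """One bit per hyperplane: the sign of the dot product.

    Vectors pointing in nearly the same direction agree on most
    bits, so they tend to hash to the same bucket.
    """
    return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0)
                 for plane in planes)

random.seed(7)
dim, n_planes = 8, 6
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

a = [1.0] * dim
b = [1.02] * dim  # a positive scaling of a: identical signature
print(lsh_signature(a, planes), lsh_signature(b, planes))
```

    In a full index, the signature tuple is the bucket key; candidate matches from the bucket are then re-verified, which is why the paper layers geometric constraints on top of the LSH candidates.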

  20. Fully De-Amortized Cuckoo Hashing for Cache-Oblivious Dictionaries and Multimaps

    CERN Document Server

    Goodrich, Michael T; Mitzenmacher, Michael; Thaler, Justin

    2011-01-01

    A dictionary (or map) is a key-value store that requires all keys to be unique, and a multimap is a key-value store that allows multiple values to be associated with the same key. We design hashing-based indexing schemes for dictionaries and multimaps that achieve worst-case optimal performance for lookups and updates, with a small or negligible probability that the data structure will require a rehash operation, depending on whether we are working in the external-memory (I/O) model or one of the well-known versions of the Random Access Machine (RAM) model. One of the main features of our constructions is that they are \emph{fully de-amortized}, meaning that their performance bounds hold without one having to tune the constructions with certain performance parameters, such as the constant factors in the exponents of failure probabilities or, in the case of the external-memory model, the size of blocks or cache lines and the size of internal memory (i.e., our external-memory algorithms are cache-oblivious). ...

  1. The Study of Detecting Replicate Documents Using MD5 Hash Function

    Directory of Open Access Journals (Sweden)

    Pushpendra Singh Tomar

    2011-12-01

    Full Text Available A great deal of the Web is replicate or near-replicate content. Documents may be served in different formats (HTML, PDF, and text) for different audiences, and may be mirrored to avoid delays or to provide fault tolerance. Algorithms for detecting replicate documents are critical in applications where data is obtained from multiple sources. The removal of replicate documents is necessary, not only to reduce runtime but also to improve search accuracy. Today, search engine crawlers are retrieving billions of unique URLs, of which hundreds of millions are replicates of some form. Quickly identifying replicates therefore expedites indexing and searching. One vendor's analysis of 1.2 billion URLs found 400 million exact replicates using an MD5 hash. Reducing collection sizes by tens of percentage points yields great savings in indexing time and a reduction in the amount of hardware required to support the system. Last, and probably most significant, users benefit from the elimination of replicate results: by efficiently presenting only unique documents, user satisfaction is likely to increase.
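
    The exact-replicate check described above reduces to grouping documents by digest. A minimal sketch (MD5 as in the vendor analysis; the function and variable names are illustrative):

```python
import hashlib

def find_replicates(documents):
    """Group document names by the MD5 digest of their contents.

    Any group with more than one name is a set of exact replicates;
    near-replicates need fuzzier techniques such as shingling.
    """
    groups = {}
    for name, body in documents.items():
        groups.setdefault(hashlib.md5(body).hexdigest(), []).append(name)
    return {h: names for h, names in groups.items() if len(names) > 1}

docs = {
    "a.html": b"<p>hello</p>",
    "b.html": b"<p>hello</p>",  # exact replicate of a.html
    "c.html": b"<p>world</p>",
}
print(find_replicates(docs))
```

    At crawler scale the same idea is run incrementally: a new page's digest is looked up in the existing digest set before the page is indexed.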

  2. The Study of Detecting Replicate Documents Using MD5 Hash Function

    Directory of Open Access Journals (Sweden)

    Mr. Pushpendra Singh Tomar

    2011-09-01

    Full Text Available A great deal of the Web is replicate or near-replicate content. Documents may be served in different formats (HTML, PDF, and text) for different audiences, and may be mirrored to avoid delays or to provide fault tolerance. Algorithms for detecting replicate documents are critical in applications where data is obtained from multiple sources. The removal of replicate documents is necessary, not only to reduce runtime but also to improve search accuracy. Today, search engine crawlers are retrieving billions of unique URLs, of which hundreds of millions are replicates of some form. Quickly identifying replicates therefore expedites indexing and searching. One vendor's analysis of 1.2 billion URLs found 400 million exact replicates using an MD5 hash. Reducing collection sizes by tens of percentage points yields great savings in indexing time and a reduction in the amount of hardware required to support the system. Last, and probably most significant, users benefit from the elimination of replicate results: by efficiently presenting only unique documents, user satisfaction is likely to increase.

  3. [A fast non-local means algorithm for denoising of computed tomography images].

    Science.gov (United States)

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented, based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. In the preprocessing stage, a sample neighborhood database is created using the locality sensitive hashing data structure. CT image noise is then removed by the non-local means algorithm, using the sample neighborhoods accessed quickly via locality sensitive hashing. The experimental results showed that the proposed algorithm greatly reduces execution time compared to NLM, and effectively preserves image edges and details.

  4. HashDist: Reproducible, Relocatable, Customizable, Cross-Platform Software Stacks for Open Hydrological Science

    Science.gov (United States)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as on cloud and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and

  5. The Books Recommend Service System Based on Improved Algorithm for Mining Association Rules

    Institute of Scientific and Technical Information of China (English)

    王萍

    2009-01-01

    The Apriori algorithm is a classical method for mining association rules. Based on an analysis of this theory, the paper provides an improved Apriori algorithm that combines a hash table technique with reduction of candidate itemsets, to enhance the usage efficiency of resources as well as the individualized service of the data library.

  6. Bin-Hash Indexing: A Parallel Method for Fast Query Processing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng; Bethel, Edward Wes; Owens, John D.; Joy, Kenneth I.

    2008-06-27

    This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for the emerging commodity of parallel processors, such as multi-cores, cell processors, and general purpose graphics processing units (GPU). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that can not be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records are able to be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.
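    The two-phase answer procedure described above can be sketched serially in Python. This is only an illustration of the bin-then-verify logic; the real Bin-Hash stores each bin as a perfect spatial hash table and evaluates all records in parallel on the GPU, which a plain dict per bin cannot capture.

```python
def build_bin_index(values, bin_edges):
    """Bin the base data, keeping exact values per bin for boundary checks."""
    bins = {}
    for rec_id, v in enumerate(values):
        b = sum(1 for e in bin_edges if v >= e)  # bin index for this value
        bins.setdefault(b, {})[rec_id] = v
    return bins

def range_query(bins, bin_edges, lo, hi):
    """Return record ids with lo <= value < hi."""
    result = set()
    for b, table in bins.items():
        b_lo = bin_edges[b - 1] if b > 0 else float("-inf")
        b_hi = bin_edges[b] if b < len(bin_edges) else float("inf")
        if lo <= b_lo and b_hi <= hi:
            result.update(table)             # bin fully inside: no value check
        elif b_hi > lo and b_lo < hi:        # boundary bin: verify exact values
            result.update(r for r, v in table.items() if lo <= v < hi)
    return result
```

    Only boundary bins force a look at the stored values; fully covered bins are resolved from the bin boundaries alone.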

  7. Practical security and privacy attacks against biometric hashing using sparse recovery

    Science.gov (United States)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

    Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that we can achieve higher level of security threats using compressed sensing recovery techniques. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.

  8. Secure signal processing: Privacy preserving cryptographic protocols for multimedia

    NARCIS (Netherlands)

    Erkin, Z.

    2010-01-01

    Recent advances in technology have provided a suitable environment in which people can benefit from online services in their daily lives. Despite several advantages, online services also constitute serious privacy risks for their users, as the main input to the algorithms is privacy sensitive

  10. Refined repetitive sequence searches utilizing a fast hash function and cross species information retrievals

    Directory of Open Access Journals (Sweden)

    Reneker Jeff

    2005-05-01

    Full Text Available Abstract Background Searching for small tandem/disperse repetitive DNA sequences streamlines many biomedical research processes. For instance, whole genomic array analysis in yeast has revealed 22 PHO-regulated genes. The promoter regions of all but one of them contain at least one of the two core Pho4p binding sites, CACGTG and CACGTT. In humans, microsatellites play a role in a number of rare neurodegenerative diseases such as spinocerebellar ataxia type 1 (SCA1). SCA1 is a hereditary neurodegenerative disease caused by an expanded CAG repeat in the coding sequence of the gene. In bacterial pathogens, microsatellites are proposed to regulate expression of some virulence factors. For example, bacteria commonly generate intra-strain diversity through phase variation which is strongly associated with virulence determinants. A recent analysis of the complete sequences of the Helicobacter pylori strains 26695 and J99 has identified 46 putative phase-variable genes among the two genomes through their association with homopolymeric tracts and dinucleotide repeats. Life scientists are increasingly interested in studying the function of small sequences of DNA. However, current search algorithms often generate thousands of matches – most of which are irrelevant to the researcher. Results We present our hash function as well as our search algorithm to locate small sequences of DNA within multiple genomes. Our system applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. We discuss our incorporation of the Gene Ontology (GO) database into these algorithms. We conduct an exhaustive time analysis of our system for various repetitive sequence lengths. For instance, a search for eight bases of sequence within 3.224 GBases on 49 different chromosomes takes 1.147 seconds on average. To illustrate the relevance of the search results, we conduct a search with and without added annotation terms for the
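    A minimal illustration of hash-based motif search: pack each DNA k-mer into an integer with 2 bits per base and slide it along the genome as a rolling hash. The encoding and function names are assumptions for illustration, not the authors' published hash function.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_kmer(kmer):
    """Pack a DNA k-mer into an integer, 2 bits per base (injective for fixed k)."""
    h = 0
    for base in kmer:
        h = (h << 2) | CODE[base]
    return h

def find_motif(genome, motif):
    """Return all start positions of `motif` using a rolling 2-bit hash."""
    k = len(motif)
    target = encode_kmer(motif)
    mask = (1 << (2 * k)) - 1        # keep only the last k bases
    hits, h = [], 0
    for i, base in enumerate(genome):
        h = ((h << 2) | CODE[base]) & mask
        if i >= k - 1 and h == target:
            hits.append(i - k + 1)
    return hits
```

    Because the 2-bit encoding is exact for a fixed k, a hash match here is always a true match; probabilistic hash functions would require a verification step.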

  11. From Divisibility by 6 To the Euclidean Algorithm and the RSA Cryptographic Method.

    Science.gov (United States)

    Cosgrave, John B.

    1997-01-01

    Argues for the rich development of mathematical ideas that can flow from considering the apparently simple question of finding a divisibility test for the number six. Presents approaches to teaching this topic that could be interesting to teachers. (ASK)

  12. An Analysis of the Computer Security Ramifications of Weakened Asymmetric Cryptographic Algorithms

    Science.gov (United States)

    2012-06-01

    wired Ethernet, with one of the highest growth rates in mobile and consumer electronic devices (Wi-Fi Alliance 2). One-third of US households with...might wonder if IPv6 provides an additional level of security. That is a common myth. Generally speaking, IPv6 does next to nothing more for...security than does IPv4” (Convery 1). IPv6 was originally intended to include the use of IPsec, but as of RFC 6434, the RFC has softened its language

  13. Techniques for Performance Improvement of Integer Multiplication in Cryptographic Applications

    Directory of Open Access Journals (Sweden)

    Robert Brumnik

    2014-01-01

    Full Text Available The problem of arithmetic operation performance in number fields is actively researched by many scientists, as evidenced by significant publications in this field. In this work, we offer some techniques to increase the performance of software implementations of the finite field multiplication algorithm on both 32-bit and 64-bit platforms. The developed technique, called the “delayed carry mechanism,” removes the need to handle the carry of the significant bit at each iteration of the sum accumulation loop. This mechanism reduces the total number of additions and makes effective use of modern parallelization technologies.
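    The delayed-carry idea can be sketched as follows: partial products are accumulated column-wise with no carry handling inside the multiplication loop, and a single carry-propagation pass runs at the end. This is a high-level Python illustration of the concept, not the authors' 32/64-bit implementation.

```python
def mul_delayed_carry(a_limbs, b_limbs, base_bits=32):
    """Schoolbook multiplication of little-endian 32-bit limb arrays.
    Column sums are accumulated in wide integers; carries are deferred."""
    base = 1 << base_bits
    cols = [0] * (len(a_limbs) + len(b_limbs))
    for i, a in enumerate(a_limbs):
        for j, b in enumerate(b_limbs):
            cols[i + j] += a * b          # no carry handling in the inner loop
    carry = 0                             # single carry-propagation pass
    for k in range(len(cols)):
        v = cols[k] + carry
        cols[k] = v % base
        carry = v // base
    assert carry == 0                     # product always fits in len(a)+len(b) limbs
    return cols

def limbs_to_int(limbs, base_bits=32):
    return sum(l << (base_bits * i) for i, l in enumerate(limbs))
```

    In a C implementation the column accumulators would be 64-bit (or wider) registers; the saving comes from removing the per-iteration carry check, exactly the effect the abstract describes.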

  14. Evaluation of Information Leakage from Cryptographic Hardware via Common-Mode Current

    Science.gov (United States)

    Hayashi, Yu-Ichi; Homma, Naofumi; Mizuki, Takaaki; Sugawara, Takeshi; Kayano, Yoshiki; Aoki, Takafumi; Minegishi, Shigeki; Satoh, Akashi; Sone, Hideaki; Inoue, Hiroshi

    This paper presents a possibility of Electromagnetic (EM) analysis against cryptographic modules outside their security boundaries. The mechanism behind the information leakage is explained from the view point of Electromagnetic Compatibility: electric fluctuation released from cryptographic modules can conduct to peripheral circuits based on ground bounce, resulting in radiation. We demonstrate the consequence of the mechanism through experiments where the ISO/IEC standard block cipher AES (Advanced Encryption Standard) is implemented on an FPGA board and EM radiations from power and communication cables are measured. Correlation Electromagnetic Analysis (CEMA) is conducted in order to evaluate the information leakage. The experimental results show that secret keys are revealed even though there are various disturbing factors such as voltage regulators and AC/DC converters between the target module and the measurement points. We also discuss information-suppression techniques as electrical-level countermeasures against such CEMAs.

  15. Evolution of Electronic Passport Scheme using Cryptographic Protocol along with Biometrics Authentication System

    Directory of Open Access Journals (Sweden)

    V.K. Narendira Kumar

    2012-03-01

    Full Text Available Millions of citizens around the world have already acquired their new electronic passport. The e-passport is equipped with a contactless chip which stores personal data of the passport holder, information about the passport and the issuing institution, as well as multiple biometrics enabling cryptographic functionality. Countries are required to build a Public Key Infrastructure (PKI), biometrics and Radio Frequency Identification to support various cryptographic functions, as these are considered the basic tools to prove the authenticity and integrity of Machine Readable Travel Documents. The large-scale worldwide PKI is under construction, by means of bilateral trust relationships between countries. We investigate the good practices which are essential for the establishment of a global identification scheme based on e-passports. The paper explores the privacy and security implications of this impending worldwide experiment in biometrics authentication technology.

  16. Cryptographic applications of analytic number theory complexity lower bounds and pseudorandomness

    CERN Document Server

    2003-01-01

    The book introduces new ways of using analytic number theory in cryptography and related areas, such as complexity theory and pseudorandom number generation. Key topics and features: - various lower bounds on the complexity of some number theoretic and cryptographic problems, associated with classical schemes such as RSA, Diffie-Hellman, DSA as well as with relatively new schemes like XTR and NTRU - a series of very recent results about certain important characteristics (period, distribution, linear complexity) of several commonly used pseudorandom number generators, such as the RSA generator, Blum-Blum-Shub generator, Naor-Reingold generator, inversive generator, and others - one of the principal tools is bounds of exponential sums, which are combined with other number theoretic methods such as lattice reduction and sieving - a number of open problems of different level of difficulty and proposals for further research - an extensive and up-to-date bibliography Cryptographers and number theorists will find th...

  17. The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Derek P.; Forman, Michael A.; Dowdle, Donald Ryan

    2010-09-01

    Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics, to dynamically change transmission and reception parameters as well as estimate private key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
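    A toy version of the key-generation step might look like the following: threshold reciprocal channel samples into bits, then compress them with a hash for privacy amplification. Guard bands, information reconciliation, and the actual 60 GHz channel-impulse-response estimation are omitted; every detail here is an illustrative assumption, not the testbed's algorithm.

```python
import hashlib

def channel_to_key(samples, num_bits=128):
    """Quantise channel measurements into bits (median threshold), then
    hash the bit string into a fixed-size key (privacy amplification)."""
    med = sorted(samples)[len(samples) // 2]
    bits = "".join("1" if s > med else "0" for s in samples)
    digest = hashlib.sha256(bits.encode()).digest()
    return digest[:num_bits // 8]
```

    Because both parties measure a reciprocal channel, small measurement noise that preserves the threshold pattern still yields identical keys; real systems add reconciliation to handle bit disagreements.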

  18. Revoke and Let Live: A Secure Key Revocation API for Cryptographic Devices

    OpenAIRE

    Cortier, Véronique; Steel,Graham; Wiedling, Cyrille

    2012-01-01

    While extensive research addresses the problem of establishing session keys through cryptographic protocols, relatively little work has appeared addressing the problem of revocation and update of long term keys. We present an API for symmetric key management on embedded devices that supports revocation, and prove security properties of the design in the symbolic model of cryptography. Our API supports two modes of revocation: a passive mode where keys have an expiration time, and an active mode where...

  19. HASH Index on Non-primary Key Columns of EMS Real-time Database Based on Shared Memory

    Institute of Scientific and Technical Information of China (English)

    王瑾; 彭晖; 侯勇

    2011-01-01

    The real-time database is one of the cores of the energy management system (EMS), on which most real-time data processing is based. Introducing an index greatly optimizes real-time database search operations and improves performance. The searching algorithm and the realization of the HASH index are described, a HASH index with double overflow areas for "parent-child" relationship searching is designed, and its data structure and search algorithm are presented. Analysis of the data shows that the HASH index with double overflow areas is well suited for "parent-child" relationship searching and offers high search efficiency.
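    The "parent finds child" lookup can be sketched with a hash index over a non-primary key, where each slot holds a list of child records; the plain list stands in for the paper's double overflow areas, whose exact memory layout is not reproduced here.

```python
class ParentChildHashIndex:
    """Minimal hash index on a non-primary ('parent') key: many child
    records share one parent key, so a slot stores (key, record_id) pairs."""
    def __init__(self, num_slots=64):
        self.slots = [[] for _ in range(num_slots)]

    def _slot(self, parent_key):
        return self.slots[hash(parent_key) % len(self.slots)]

    def insert(self, parent_key, record_id):
        self._slot(parent_key).append((parent_key, record_id))

    def children_of(self, parent_key):
        # Filter out entries that merely collided into the same slot.
        return [rid for k, rid in self._slot(parent_key) if k == parent_key]
```

    Unlike a primary-key hash index, a lookup here returns a set of records, which is why overflow storage for duplicate keys is the central design issue.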

  20. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming

    2012-06-28

    With rapid advances in the development of DNA sequencing technologies, a plethora of high-throughput genome and proteome data from a diverse spectrum of organisms have been generated. The functional annotation and evolutionary history of proteins are usually inferred from domains predicted from the genome sequences. Traditional database-based domain prediction methods cannot identify novel domains, however, and alignment-based methods, which look for recurring segments in the proteome, are computationally demanding. Here, we propose a novel genome-wide domain prediction method, SECOM. Instead of conducting all-against-all sequence alignment, SECOM first indexes all the proteins in the genome by using a hash seed function. Local similarity can thus be detected and encoded into a graph structure, in which each node represents a protein sequence and each edge weight represents the shared hash seeds between the two nodes. SECOM then formulates the domain prediction problem as an overlapping community-finding problem in this graph. A backward graph percolation algorithm that efficiently identifies the domains is proposed. We tested SECOM on five recently sequenced genomes of aquatic animals. Our tests demonstrated that SECOM was able to identify most of the known domains identified by InterProScan. When compared with the alignment-based method, SECOM showed higher sensitivity in detecting putative novel domains, while it was also three orders of magnitude faster. For example, SECOM was able to predict a novel sponge-specific domain in nucleoside-triphosphatase (NTPases). Furthermore, SECOM discovered two novel domains, likely of bacterial origin, that are taxonomically restricted to sea anemone and hydra. SECOM is an open-source program and available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2012 Fan et al.
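    SECOM's indexing stage can be sketched as: extract fixed-length seeds from each protein and weight an edge between two proteins by the number of seeds they share. The k-mer seed and plain set intersection below are simplifying assumptions; the hash seed function and the backward graph percolation (community detection) stage of the paper are not reproduced.

```python
from itertools import combinations

def shared_seed_graph(proteins, k=3):
    """Build a graph where nodes are proteins and edge weights count
    shared length-k seeds, encoding local similarity without alignment."""
    seeds = {name: {seq[i:i + k] for i in range(len(seq) - k + 1)}
             for name, seq in proteins.items()}
    edges = {}
    for a, b in combinations(proteins, 2):
        w = len(seeds[a] & seeds[b])
        if w > 0:
            edges[(a, b)] = w
    return edges
```

    Overlapping communities in this weighted graph would then correspond to putative shared domains.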

  1. Using Animation in Active Learning Tool to Detect Possible Attacks in Cryptographic Protocols

    Science.gov (United States)

    Ali Mayouf, Mabroka; Shukur, Zarina

    Interactive visualization tools for active learning of generic cryptographic protocols are very few. Although these tools make it possible to engage the learner by asking him to describe a cryptographic protocol using a simple visual metaphor representing the abstraction of the concepts being visualized, the problem is that some cryptographic operations, such as encryption/decryption actions, are not visualized or animated and remain hidden from the learner's perspective. Other operations, such as timestamps and freshness, are not supported by these tools. It is therefore difficult to cover all possible attacks that the intruder might employ when such operations are missing. The purpose of this research is to provide an interactive visualization tool for teaching undergraduate students security protocol concepts, especially key distribution, multiple operations such as encryption/decryption and signed/unsigned operations, and possible protocol attacks. By designing a high-quality graphical user interface and a simple visual metaphor, learners will be able to specify protocols and consider the possible attack at each step of a protocol demonstration.

  2. Apriori Association Rule Algorithms using VMware Environment

    Directory of Open Access Journals (Sweden)

    R. Sumithra

    2014-07-01

    Full Text Available The aim of this study is to carry out research in distributed data mining using a cloud platform. Distributed data mining has become a vital component of big data analytics due to the development of network and distributed technology. The map-reduce Hadoop framework is a very familiar concept in big data analytics. The association rule algorithm is one of the popular data mining techniques which finds the relationships between different transactions. A work has been executed using the weighted Apriori and hash-T Apriori algorithms for association rule mining on a map-reduce Hadoop framework using a retail data set of transactions. This study describes the above concepts, explains the experiment carried out with the retail data set on a VMware environment, and compares the performances of the weighted Apriori and hash-T Apriori algorithms in terms of memory and time.

  3. Hybrid Algorithm for Optimal Load Sharing in Grid Computing

    Directory of Open Access Journals (Sweden)

    A. Krishnan

    2012-01-01

    Full Text Available Problem statement: Grid computing is a fast-growing industry which shares the resources of an organization in an effective manner. Resource sharing requires a well-optimized algorithmic structure; otherwise the waiting time and response time increase and resource utilization is reduced. Approach: In order to avoid such reductions in the performance of the grid system, an optimal resource sharing algorithm is required. Many load sharing techniques have been proposed recently; they are feasible, but many critical issues still remain. Results: In this study a hybrid algorithm for the optimization of load sharing is proposed. The hybrid algorithm contains two components: a Hash Table (HT) and a Distributed Hash Table (DHT). Conclusion: The results of the proposed study show that the hybrid algorithm optimizes task sharing better than existing systems.

  4. Hash-chain-based authentication for IoT

    Directory of Open Access Journals (Sweden)

    Antonio PINTO

    2016-12-01

    Full Text Available The number of everyday interconnected devices continues to increase and constitutes the Internet of Things (IoT). Things are small computers equipped with sensors and wireless communication capabilities that are driven by energy constraints, since they use batteries and may be required to operate over long periods of time. The majority of these devices perform data collection. The collected data is stored on-line using web-services that, sometimes, operate without any special considerations regarding security and privacy. The current work proposes a modified hash-chain authentication mechanism that, with the help of a smartphone, can authenticate each interaction of the devices with a REST web-service using One Time Passwords (OTP) while using open wireless networks. Moreover, the proposed authentication mechanism adheres to the stateless, HTTP-like behavior expected of REST web-services, even allowing the caching of server authentication replies within a predefined time window. No other known web-service authentication mechanism operates in such a manner.
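    The hash-chain (Lamport-style) one-time-password core of such a scheme can be sketched as follows; the smartphone role, REST specifics, and the caching window from the paper are omitted, so this shows only the underlying primitive.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(secret: bytes, n: int):
    """Build the chain h(s), h^2(s), ..., h^n(s). The verifier stores only
    the anchor h^n(s); OTPs are revealed in reverse order."""
    chain = [secret]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain[1:]

class Server:
    def __init__(self, anchor: bytes):
        self.current = anchor            # initially h^n(s)

    def verify(self, otp: bytes) -> bool:
        """Accept otp iff hashing it yields the stored value, then roll back
        so the same OTP can never be replayed."""
        if H(otp) == self.current:
            self.current = otp
            return True
        return False
```

    Because H is one-way, observing an accepted OTP on an open wireless network does not reveal the next one in the sequence.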

  5. Recent development of perceptual image hashing

    Institute of Scientific and Technical Information of China (English)

    王朔中; 张新鹏

    2007-01-01

    The easy generation, storage, transmission and reproduction of digital images have caused serious abuse and security problems. Assurance of the rightful ownership, integrity, and authenticity is a major concern to the academia as well as the industry. On the other hand, efficient search of the huge amount of images has become a great challenge. Image hashing is a technique suitable for use in image authentication and content based image retrieval (CBIR). In this article,we review some representative image hashing techniques proposed in the recent years, with emphases on how to meet the conflicting requirements of perceptual robustness and security. Following a brief introduction to some earlier methods, we focus on a typical two-stage structure and some geometric-distortion resilient techniques. We then introduce two image hashing approaches developed in our own research, and reveal security problems in some existing methods due to the absence of secret keys in certain stage of the image feature extraction, or availability of a large quantity of images, keys, or the hash function to the adversary. More research efforts are needed in developing truly robust and secure image hashing techniques.

  6. Efficient Join Query Processing Algorithm CHMJ Based on Hadoop

    Institute of Scientific and Technical Information of China (English)

    赵彦荣; 王伟平; 孟丹; 张书彬; 李均

    2012-01-01

    This paper proposes a join query processing algorithm, CoLocationHashMapJoin (CHMJ). First, the study designs a multi-copy consistency hash algorithm. The algorithm distributes the data of tables over the cluster according to the hash values of the join property, which improves data locality while ensuring data availability. Second, based on the multi-copy consistency hash algorithm, the study proposes a parallel join query processing algorithm called HashMapJoin, which significantly improves the efficiency of join queries. CHMJ has been used in Tencent's data warehouse system and plays an important role in Tencent's daily analysis tasks. The results show that CHMJ improves the efficiency of join query processing by roughly five times compared to Hive.
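    The multi-copy consistency-hash placement can be sketched with a standard hash ring: rows from different tables that share a join key land on the same primary node (locality for the join), and copies go to the next nodes on the ring (availability). The ring construction below is a generic textbook assumption, not CHMJ's exact algorithm.

```python
import hashlib

def _h(x) -> int:
    return int(hashlib.md5(str(x).encode()).hexdigest(), 16)

def node_for(join_key, nodes, replicas=2):
    """Place a row by the hash of its join key: the first `replicas`
    ring positions clockwise from the key's hash hold the copies."""
    ring = sorted((_h(n), n) for n in nodes)
    h = _h(join_key)
    idx = next((i for i, (pos, _) in enumerate(ring) if pos >= h), 0)
    return [ring[(idx + r) % len(ring)][1] for r in range(replicas)]
```

    Since placement depends only on the join key, matching rows of both tables are co-located and the join needs no shuffle for that key.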

  7. Research of the test generation algorithm based on search state dominance for combinational circuit

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    On the basis of the EST (Equivalent STate hashing) algorithm, this paper investigates a test generation algorithm based on search state dominance for combinational circuits. Using the dominance relation of the E-frontier (evaluation frontier), we show through examples that this algorithm can terminate unnecessary search steps of test pattern generation earlier than the EST algorithm, and so can reduce the time of test generation. The test patterns calculated can detect the given faults, as verified through simulation.

  8. Optimal Cryptographic Technique to increase the Data Security

    Directory of Open Access Journals (Sweden)

    K Laxmi Narayan

    2013-03-01

    Full Text Available There are many aspects to security, ranging from secure commerce and payments to private communications and protecting passwords. One essential aspect of secure communications is secret-key cryptography: the automated method by which security goals are accomplished. It includes the process of encryption, which converts plain-text into cipher-text, and the process of decryption, which reconverts the cipher-text into plain-text. Secure communication is a prime requirement of every organization, and many techniques or algorithms are available for cryptography to achieve it. In this context, we analyze and implement an extremely well-protected cryptography scheme using the properties of quaternions, which are an extension of the complex numbers, and of Farey fractions; the Farey sequence of order n is the sequence of completely reduced fractions between 0 and 1 with denominators not exceeding n. The proposed techniques can help increase the accuracy and completeness of network topology discovery, can leverage existing protocol and hardware features, and can be implemented easily.
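    Since the abstract leans on the Farey sequence, here is the standard next-term recurrence that generates the Farey sequence of order n; this is textbook material for context, not the paper's encryption scheme.

```python
from fractions import Fraction

def farey(n):
    """Farey sequence of order n: all completely reduced fractions in
    [0, 1] with denominator <= n, ascending, via the next-term recurrence."""
    a, b, c, d = 0, 1, 1, n          # two consecutive terms a/b < c/d
    terms = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d             # mediant step count
        a, b, c, d = c, d, k * c - a, k * d - b
        terms.append(Fraction(a, b))
    return terms
```

    For example, `farey(3)` yields 0, 1/3, 1/2, 2/3, 1; consecutive terms p/q, r/s always satisfy the unimodular relation qr − ps = 1, which is what makes the sequence attractive for key-construction schemes.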

  9. Internal differential collision attacks on the reduced-round Grøstl-0 hash function

    DEFF Research Database (Denmark)

    Ideguchi, Kota; Tischhauser, Elmar Wolfgang; Preneel, Bart

    2014-01-01

    This results in collision attacks and semi-free-start collision attacks on the Grøstl-0 hash function and compression function with reduced rounds. Specifically, we show collision attacks on the Grøstl-0-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities 2^48 and 2^112, and on the Grøstl-0-512 hash function reduced to 6 out of 14 rounds with time complexity 2^183. Furthermore, we demonstrate semi-free-start collision attacks on the Grøstl-0-256 compression function reduced to 8 rounds and the Grøstl-0-512 compression function reduced to 9 rounds. Finally, we show improved...

  10. A fast approximate nearest neighbor search algorithm in the Hamming space.

    Science.gov (United States)

    Esmaeili, Mani Malek; Ward, Rabab Kreidieh; Fatourechi, Mehrdad

    2012-12-01

    A fast approximate nearest neighbor search algorithm for the (binary) Hamming space is proposed. The proposed Error Weighted Hashing (EWH) algorithm is up to 20 times faster than the popular locality sensitive hashing (LSH) algorithm and works well even for large nearest neighbor distances where LSH fails. EWH significantly reduces the number of candidate nearest neighbors by weighing them based on the difference between their hash vectors. EWH can be used for multimedia retrieval and copy detection systems that are based on binary fingerprinting. On a fingerprint database with more than 1,000 videos, for a specific detection accuracy, we demonstrate that EWH is more than 10 times faster than LSH. For the same retrieval time, we show that EWH has a significantly better detection accuracy with a 15 times lower error rate.
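    The filter-then-verify pattern underlying EWH can be sketched as below: a cheap check on a slice of the hash vector discards most candidates before the full Hamming distance is computed. EWH's actual error weighting of candidates is more elaborate; this is an illustrative simplification on 64-bit fingerprints.

```python
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def search(query, fingerprints, radius, prefilter_bits=16):
    """Two-stage Hamming-space search over 64-bit fingerprints."""
    shift = 64 - prefilter_bits
    hits = []
    for fid, fp in fingerprints.items():
        # A match within `radius` differs in at most `radius` of the top
        # bits too, so this cheap check never discards a true match.
        if hamming(query >> shift, fp >> shift) <= radius:
            if hamming(query, fp) <= radius:
                hits.append(fid)
    return hits
```

    The prefilter is lossless by construction; the speedup comes from computing the expensive full distance only for candidates that survive it.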

  11. INTEGRITY CONTROL ALGORITHM FOR DEPERSONALIZED INFORMATION IN INFORMATION SYSTEMS OF PERSONAL DATA

    Directory of Open Access Journals (Sweden)

    Y. A. Gatchin

    2013-01-01

    Full Text Available The article deals with the problem of ensuring integrity for information processed in information systems of personal data by a depersonalization algorithm. In order to solve this problem, we propose to use an integrity algorithm based on hash functions to confirm the immutability of depersonalized data stored in the information systems of personal data.
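    A minimal sketch of such an integrity check, assuming SHA-256 over a canonical JSON serialization; the article does not fix a particular hash or encoding, so both choices here are illustrative.

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Canonicalise a depersonalized record and return its SHA-256 digest."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(record: dict, stored_digest: str) -> bool:
    """Confirm the record is unchanged since its digest was stored."""
    return record_digest(record) == stored_digest
```

    Sorting the keys makes the digest independent of field order, so equal records always verify regardless of how they were assembled.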

  12. Efficient Algorithms for gcd and Cubic Residuosity in the Ring of Eisenstein Integers

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present simple and efficient algorithms for computing gcd and cubic residuosity in the ring of Eisenstein integers, Z[ω], i.e. the integers extended with ω, a complex primitive third root of unity. The algorithms are similar and may be seen as generalisations of the binary integer gcd and deri... primality tests and the implementation of cryptographic protocols.
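    For reference, the binary gcd on ordinary integers, of which the record says the Eisenstein-integer algorithm is a generalisation: division is replaced by shifts (removing factors of 2) and subtraction.

```python
def binary_gcd(a: int, b: int) -> int:
    """Stein's binary gcd for non-negative integers."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:          # both even: factor out a common 2
        a, b, shift = a >> 1, b >> 1, shift + 1
    while a & 1 == 0:                # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:            # gcd(odd, even) = gcd(odd, even/2)
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                       # both odd: their difference is even
    return a << shift
```

    In Z[ω] the analogous role of 2 is played by the small prime 1 − ω; the paper's generalisation works out the corresponding "even/odd" cases in that ring.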

  13. Key Pre-Distribution Management Scheme for Wireless Sensor Based on Hash

    Institute of Scientific and Technical Information of China (English)

    余嘉; 许可; 彭文兵

    2011-01-01

    A key pre-distribution and management scheme for wireless sensor networks, based on a hash function constructed from a look-up table, is proposed in this paper. The scheme dynamically generates the public keys for node communication through the look-up table encryption algorithm, which can encrypt and hash the text at the same time. To improve the connectivity and survivability of the network and to save memory, a clustered sensor network style is introduced. Theoretical analysis and computer simulation indicate that the presented scheme improves security, connectivity and anti-capture ability, and can effectively utilize the memory as well.

  14. The chaotic hash function based on spatial expansion construction with controllable parameters

    Institute of Scientific and Technical Information of China (English)

    廖东; 王小敏; 张家树; 张文芳

    2012-01-01

    A novel chaotic one-way hash function based on a spatial expansion construction with controllable parameters is presented, combining the advantages of both chaotic systems and parallel hash functions. In the proposed approach, the hash mode of each message block is determined by a chaotic dynamic parameter. The new method improves the security of the hash function while avoiding degradation of system performance. Theoretical and experimental results show that the proposed method has high parallel performance, a nearly uniform distribution, and the desired diffusion and confusion properties.

  15. Efficient cryptographic substitution box design using travelling salesman problem and chaos

    Directory of Open Access Journals (Sweden)

    Musheer Ahmad

    2016-09-01

    Full Text Available Symmetric encryption has been one of the most reliable options by which security is accomplished. In modern symmetric block ciphers, substitution-boxes play the critical role of nonlinear components that drive the actual security of the cipher. In this paper, the travelling salesman problem and a piece-wise linear chaotic map are explored to synthesize an efficient configuration of an 8 × 8 substitution-box. The proposed design has a consistency which is justified by standard performance indexes. The statistical results show that the prospective substitution-box is cryptographically more impressive than some recent investigations.
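    The chaotic ingredient can be illustrated by generating a bijective 8 × 8 S-box from a piece-wise linear chaotic map (PWLCM) orbit. The travelling-salesman optimisation step of the paper is omitted, and the seed and control-parameter values below are arbitrary assumptions, so this only shows how a chaotic trajectory yields a permutation of the 256 byte values.

```python
def pwlcm(x, p):
    """Piece-wise linear chaotic map on (0, 1) with control parameter p."""
    if x >= 0.5:
        x = 1.0 - x                  # the map is symmetric about 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def chaotic_sbox(x0=0.37, p=0.29, iterations=2000):
    """Visit the PWLCM orbit and record each new byte value until all 256
    appear, yielding a bijective 8x8 substitution box."""
    sbox, seen, x = [], set(), x0
    for _ in range(iterations):
        x = pwlcm(x, p)
        v = int(x * 256) % 256
        if v not in seen:
            seen.add(v)
            sbox.append(v)
        if len(sbox) == 256:
            break
    # Append any byte values the orbit missed so the box stays a permutation.
    sbox.extend(v for v in range(256) if v not in seen)
    return sbox
```

    Bijectivity is the minimal criterion; the paper's TSP step would further optimise nonlinearity and other performance indexes over such candidate boxes.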

  16. Anonymous One-Time Broadcast Using Non-interactive Dining Cryptographer Nets with Applications to Voting

    Science.gov (United States)

    van de Graaf, Jeroen

    All voting protocols proposed so far, with the exception of a few, have the property that the privacy of the ballot is only computational. In this paper we outline a new and conceptually simple approach allowing us to construct a protocol in which the privacy of the ballot is unconditional. Our basic idea is to modify the protocol of Fujioka, Okamoto and Ohta[1], which uses blind signatures so that the voter can obtain a valid ballot. However, instead of using a MIX net, we use a new broadcast protocol for anonymously publishing the vote, a Non-Interactive variation of the Dining Cryptographer Net.

  17. MEANING OF THE BITCOIN CRYPTOGRAPHIC CURRENCY AS A MEDIUM OF EXCHANGE

    Directory of Open Access Journals (Sweden)

    Łukasz Dopierała

    2014-06-01

    Full Text Available This article presents one of the new elements of virtual reality, the Bitcoin cryptocurrency. The thesis focuses on the condition and development perspectives of the trading function of this instrument. The authors discuss the legal aspects of the functioning of Bitcoin, conduct a SWOT analysis of this cryptocurrency as a medium of exchange, and examine the scale of the use of Bitcoin for transaction purposes. As of March 1, 2014, the trading system is gradually developing and the strengths of this cryptographic currency outweigh its weaknesses, but the future of Bitcoin as a medium of exchange is difficult to determine.

  18. Prevention of Cross-Site Scripting Vulnerabilities using Dynamic Hash Generation Technique on the Server Side

    Directory of Open Access Journals (Sweden)

    Shashank Gupta

    2012-09-01

    Full Text Available Cookies are a means to provide stateful communication over HTTP. In the World Wide Web (WWW), once the user's web browser has been successfully authenticated by the web server of the web application, the web server generates a cookie and transfers it to the web browser. From then on, each time the user sends a request to the web server as part of the active connection, the user has to include the corresponding cookie in the request, so that the web server can associate the cookie with the corresponding user. Cookies are thus the mechanism that maintains an authentication state between the user and the web application, and are therefore a natural target for attackers. A Cross-Site Scripting (XSS) attack is one such attack against web applications, in which the resources of a user's browser (e.g. cookies) are compromised. In this paper, a novel technique called the Dynamic Hash Generation Technique is introduced, whose aim is to make cookies worthless to attackers. The technique is implemented on the server side, whose main task is to generate a hash of the value of the name attribute in the cookie and send this hash value to the web browser. With this technique, the hash value of the name attribute in the cookie stored in the browser's database is of no use to an attacker trying to exploit XSS vulnerabilities.
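A minimal sketch of the server-side idea, hashing the cookie's name attribute with a server-held secret before it ever reaches the browser, might look as follows. The helper names and the salt layout are hypothetical; the paper does not specify a concrete construction.

```python
import hashlib
import secrets

SERVER_SECRET = b"keep-me-on-the-server-only"  # hypothetical server-side key

def hashed_cookie_name(name: str) -> str:
    # Salted hash of the cookie's name attribute; only this value is sent
    # to the browser, so a stolen cookie reveals nothing directly reusable.
    salt = secrets.token_hex(8)
    digest = hashlib.sha256(SERVER_SECRET + salt.encode() + name.encode()).hexdigest()
    return f"{salt}.{digest}"

def verify_cookie_name(token: str, name: str) -> bool:
    # The server recomputes the hash and compares in constant time.
    salt, digest = token.split(".", 1)
    expected = hashlib.sha256(SERVER_SECRET + salt.encode() + name.encode()).hexdigest()
    return secrets.compare_digest(digest, expected)

token = hashed_cookie_name("SESSIONID")
assert verify_cookie_name(token, "SESSIONID")
assert not verify_cookie_name(token, "OTHER")
```

The salt makes each issued token distinct, and keeping the secret strictly server-side is what denies an XSS payload anything it could replay against another name.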

  19. Effects of whey and molasses as silage additives on potato hash ...

    African Journals Online (AJOL)

    Effects of whey and molasses as silage additives on potato hash silage ... by higher concentrations of butyric acid, ammonia-N and pH compared to the other silages. ... inclusion level of 20% without any adverse effect on animal performance.

  20. Rebound Attacks on the Reduced Grøstl Hash Function

    DEFF Research Database (Denmark)

    Mendel, Florian; Rechberger, C.; Schlaffer, Martin;

    2010-01-01

    Grøstl is one of 14 second round candidates of the NIST SHA-3 competition. Cryptanalytic results on the wide-pipe compression function of Grøstl-256 have already been published. However, little is known about the hash function, arguably a much more interesting cryptanalytic setting. Also, Grøstl-...

  1. A reliable power management scheme for consistent hashing based distributed key value storage systems

    Institute of Scientific and Technical Information of China (English)

    Nan-nan ZHAO; Ji-guang WAN; Jun WANG; Chang-sheng XIE

    2016-01-01

    Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent hashing based distributed key value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing the replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on non-overlapping tiers of nodes in the ring. This allows the system to fall into various power modes by powering down subsets of servers without violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that reliability is maintained when replicas are powered down, writes intended for standby replicas are redirected to active servers, which preserves the failure tolerance of the system. GreenCHT is implemented on top of Sheepdog, a distributed key value storage system that uses consistent hashing as its underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation shows that GreenCHT provides significant power savings while maintaining the desired performance: power consumption is reduced by 35%–61%.
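The tiered placement idea, replica i of every object always landing on tier i of the ring, so that a whole tier can be powered down without losing every copy, can be sketched as follows. This is an illustrative toy, not GreenCHT's implementation; node names and the ring size are invented.

```python
import hashlib
from bisect import bisect_right

def ring_pos(key: str) -> int:
    # Position on a 2^32-slot consistent hash ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

class TieredRing:
    """Replica i of every object lives on tier i, so an entire tier of
    servers can be powered down without losing all copies of any object."""

    def __init__(self, tiers):
        # tiers: one list of node names per replica tier (non-overlapping).
        self.tiers = [sorted((ring_pos(n), n) for n in nodes) for nodes in tiers]

    def replicas(self, key: str):
        pos = ring_pos(key)
        picked = []
        for ring in self.tiers:
            # Walk clockwise within this tier only: first node at or
            # after the object's position, wrapping around the ring.
            idx = bisect_right([p for p, _ in ring], pos) % len(ring)
            picked.append(ring[idx][1])
        return picked

ring = TieredRing([["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]])
r = ring.replicas("object-42")
assert [n[0] for n in r] == ["a", "b", "c"]  # exactly one replica per tier
```

Powering down, say, all "c" nodes then removes one replica of every object rather than all replicas of some objects, which is what makes the power modes safe.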

  2. BCL::EM-Fit: rigid body fitting of atomic structures into density maps using geometric hashing and real space refinement.

    Science.gov (United States)

    Woetzel, Nils; Lindert, Steffen; Stewart, Phoebe L; Meiler, Jens

    2011-09-01

    Cryo-electron microscopy (cryoEM) can visualize large macromolecular assemblies at resolutions often below 10 Å and recently as good as 3.8–4.5 Å. These density maps provide important insights into the biological functioning of molecular machineries such as viruses or the ribosome, in particular if atomic-resolution crystal structures or models of individual components of the assembly can be placed into the density map. The present work introduces a novel algorithm termed BCL::EM-Fit that accurately fits atomic-detail structural models into medium-resolution density maps. In an initial step, a "geometric hashing" algorithm provides a short list of likely placements. In a follow-up Monte Carlo/Metropolis refinement step, the initial placements are optimized by their cross correlation coefficient. The resolution of density maps required for a reliable fit was determined to be 10 Å or better using tests with simulated density maps. The algorithm was applied to the fitting of capsid proteins into an experimental cryoEM density map of human adenovirus at resolutions of 6.8 and 9.0 Å, and to the fitting of the GroEL protein at 5.4 Å. In the process, the handedness of the cryoEM density map was unambiguously identified. The BCL::EM-Fit algorithm offers an alternative to the established Fourier/Real space fitting programs. BCL::EM-Fit is free for academic use and available from a web server or as a downloadable binary file at http://www.meilerlab.org.

  3. DYNAMIC REQUEST DISPATCHING ALGORITHM FOR WEB SERVER CLUSTER

    Institute of Scientific and Technical Information of China (English)

    Yang Zhenjiang; Zhang Deyun; Sun Qindong; Sun Qing

    2006-01-01

    Distributed architectures support increased load on popular web sites by dispatching client requests transparently among multiple servers in a cluster. The Packet Single-Rewriting technology and the client-address hashing algorithm of the ONE-IP technology, which preserve application sessions, have been analyzed, and an improved request dispatching algorithm which is simple, effective and supports dynamic load balancing has been proposed. In this algorithm, the dispatcher decides which server node will process a request by applying a hash function to the client IP address and comparing the result with each server's assigned identifier subset; it adjusts the size of each subset according to the performance and current load of each server, so as to utilize all servers' resources effectively. Simulation shows that the improved algorithm has better performance than the original one.
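The dispatching rule just described can be sketched in a few lines. The bucket count and subset sizes below are invented for illustration; the point is that hashing the client IP keeps a client pinned to one server (session affinity) while subset sizes encode each server's share of the load.

```python
import hashlib

def ip_bucket(client_ip: str, buckets: int = 256) -> int:
    # Hash the client IP into one of `buckets` identifier slots.
    return int(hashlib.sha1(client_ip.encode()).hexdigest(), 16) % buckets

def dispatch(client_ip: str, assignments: dict) -> str:
    # Pick the server whose identifier subset owns the client's bucket;
    # the same client therefore always reaches the same server, which
    # keeps its application session intact.
    b = ip_bucket(client_ip)
    for server, subset in assignments.items():
        if b in subset:
            return server
    raise LookupError("no server owns this bucket")

# A faster or less-loaded server is given a larger identifier subset;
# resizing the subsets rebalances load without breaking existing mappings
# for clients whose buckets stay put.
assignments = {"web1": set(range(0, 160)), "web2": set(range(160, 256))}
assert dispatch("10.0.0.7", assignments) == dispatch("10.0.0.7", assignments)
```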

  4. Cryptographic Protocols:

    DEFF Research Database (Denmark)

    Geisler, Martin Joakim Bittel

    The art of keeping messages secret is ancient. It must have been invented only shortly after the invention of the messages themselves. Merchants and generals have always had a need to exchange critical messages while keeping them secret from the prying eyes of competitors or the enemy. Classical...... framework. We call this framework VIFF, short for Virtual Ideal Functionality Framework. VIFF implements a UC functionality for general multiparty computation on asynchronous networks. We give a formal definition of the functionality in Chapter 3. There we also describe how we implemented the functionality...

  5. Cryptographic Protocols:

    DEFF Research Database (Denmark)

    Geisler, Martin Joakim Bittel

    framework. We call this framework VIFF, short for Virtual Ideal Functionality Framework. VIFF implements a UC functionality for general multiparty computation on asynchronous networks. We give a formal definition of the functionality in Chapter 3. There we also describe how we implemented the functionality...

  6. Data synchronization algorithm controlled by the intelligent terminal

    Institute of Scientific and Technical Information of China (English)

    李立亚; 胡晓红; 辛振国

    2013-01-01

    To integrate cloud storage services from different vendors, we propose a data synchronization algorithm based on hash fingerprint information, which combines strictly consistent and weakly consistent data synchronization in a single algorithm. The synchronization computation is controlled by the intelligent terminal, which maps the hash fingerprint information into a hash fingerprint file that is independent of the data to be synchronized. The algorithm computes hash codes for both the filename and the file content; the filename hash is matched first and the content hash only afterwards, a strategy that reduces the demands placed on the hash algorithm. A simulation experiment using an improved BKDR hash algorithm performed repeatability tests on the order of 20,000 files and verified that the data synchronization algorithm can be carried out by the intelligent terminal independently, without relying on the server, and can integrate cloud storage services from different vendors.
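The BKDR string hash named above, and the two-stage filename-then-content comparison, can be sketched as follows; the fingerprint tuple layout is an assumption made for illustration.

```python
def bkdr_hash(s: str, seed: int = 131) -> int:
    # Classic BKDR string hash: h = h * seed + code(c), kept to 32 bits.
    h = 0
    for c in s:
        h = (h * seed + ord(c)) & 0xFFFFFFFF
    return h

def fingerprint(name: str, content: str) -> tuple:
    # One entry of the hash fingerprint file: (name hash, content hash).
    return (bkdr_hash(name), bkdr_hash(content))

def needs_sync(local: tuple, remote: tuple) -> bool:
    # Match the cheap filename hash first; compare content hashes only
    # when the names agree, as in the two-stage strategy above.
    if local[0] != remote[0]:
        return True
    return local[1] != remote[1]

a = fingerprint("report.txt", "v1 contents")
assert not needs_sync(a, fingerprint("report.txt", "v1 contents"))
assert needs_sync(a, fingerprint("report.txt", "v2 contents"))
```

Because the fingerprint file is independent of the synchronized data, the terminal can decide what to upload by exchanging only these small tuples with each vendor's store.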

  7. 76 FR 7817 - Announcing Draft Federal Information Processing Standard 180-4, Secure Hash Standard, and Request...

    Science.gov (United States)

    2011-02-11

    ... National Institute of Standards and Technology Announcing Draft Federal Information Processing Standard 180... Draft Federal Information Processing Standard (FIPS) 180-4, Secure Hash Standard (SHS), for public... sent to: Chief, Computer Security Division, Information Technology Laboratory, Attention: Comments...

  8. Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification.

    Science.gov (United States)

    Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei

    2015-12-01

    Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of the output hashing codes are preset in most previous methods, neglecting the differing significance of individual bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearance should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code length by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve their discriminative power well at shorter code lengths.
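The bit-scalability mechanism, unequally weighted bits that can be truncated down to the most significant ones while Hamming comparisons still work, can be illustrated with a toy example; the codes and weights below are invented, not learned.

```python
def hamming(a, b):
    # Hamming distance between two equal-length binary code lists.
    return sum(x != y for x, y in zip(a, b))

def truncate(code, weights, keep):
    # Bit-scalable shortening: keep only the `keep` bits with the largest
    # learned weights, preserving their original order in the code.
    top = sorted(sorted(range(len(weights)), key=lambda i: -weights[i])[:keep])
    return [code[i] for i in top]

code_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_b = [1, 1, 1, 0, 0, 0, 1, 1]
weights = [0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.6, 0.1]

assert hamming(code_a, code_b) == 3
short_a = truncate(code_a, weights, keep=4)
short_b = truncate(code_b, weights, keep=4)
assert short_a == [1, 1, 1, 1] and hamming(short_a, short_b) == 1
```

Dropping the low-weight bits shortens the code (and speeds up retrieval) while the distance between the surviving, highly weighted bits still reflects similarity.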

  9. A Fast Generic Sequence Matching Algorithm

    CERN Document Server

    Musser, David R

    2008-01-01

    A string matching -- and more generally, sequence matching -- algorithm is presented that has a linear worst-case computing time bound, a low worst-case bound on the number of comparisons (2n), and sublinear average-case behavior that is better than that of the fastest versions of the Boyer-Moore algorithm. The algorithm retains its efficiency advantages in a wide variety of sequence matching problems of practical interest, including traditional string matching; large-alphabet problems (as in Unicode strings); and small-alphabet, long-pattern problems (as in DNA searches). Since it is expressed as a generic algorithm for searching in sequences over an arbitrary type T, it is well suited for use in generic software libraries such as the C++ Standard Template Library. The algorithm was obtained by adding to the Knuth-Morris-Pratt algorithm one of the pattern-shifting techniques from the Boyer-Moore algorithm, with provision for use of hashing in this technique. In situations in which a hash function or random a...

  10. The SAT solving method as applied to cryptographic analysis of asymmetric ciphers

    CERN Document Server

    Faizullin, R T; Dylkeyt, V I

    2009-01-01

    One of the most interesting problems of discrete mathematics is the SAT (satisfiability) problem. A good approach to SAT solver development is to transform the SAT problem into a continuous search for global minima of a functional associated with the CNF. This article proves a special construction of the functional and proposes solving the system of non-linear algebraic equations that determines the functional's stationary points via a modified method of consecutive approximation. The article describes parallel versions of the method. It also gives a schema for applying the method to important problems of cryptographic analysis of asymmetric ciphers, including determining specific bits of the multipliers (in binary form) in large factorization problems and specific bits of the exponent in the discrete logarithm problem.

  11. Secure Cryptographic Key Management System (CKMS) Considerations for Smart Grid Devices

    Energy Technology Data Exchange (ETDEWEB)

    Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Aldridge, Hal [ORNL; Duren, Mike [Sypris Electronics, LLC; Ricci, Tracy [Sypris Electronics, LLC; Bertino, Elisa [ORNL; Kulatunga, Athula [Purdue University; Navaratne, Uditha Sudheera [Purdue University

    2011-01-01

    In this paper, we examine some unique challenges associated with key management in the Smart Grid and concomitant research initiatives: 1) effectively model security requirements and their implementations, and 2) manage keys and key distribution for very large scale deployments such as Smart Meters over a long period of performance. This will set the stage to: 3) develop innovative, low cost methods to protect keying material, and 4) provide high assurance authentication services. We will present our perspective on key management and will discuss some key issues within the life cycle of a cryptographic key designed to achieve the following: 1) control systems designed, installed, operated, and maintained to survive an intentional cyber assault with no loss of critical function, and 2) widespread implementation of methods for secure communication between remote access devices and control centers that are scalable and cost-effective to deploy.

  12. Efficient Implementation of Electronic Passport Scheme Using Cryptographic Security Along With Multiple Biometrics

    Directory of Open Access Journals (Sweden)

    V.K. NARENDIRA KUMAR

    2012-02-01

    Full Text Available Electronic passports have seen a wide and fast deployment all around the world since the International Civil Aviation Organization adopted standards whereby passports can store biometric identifiers. The use of biometrics for identification has the potential to make our lives easier and the world we live in a safer place. The purpose of biometric passports is to prevent the illegal entry of travelers into a specific country and to limit the use of counterfeit documents through more accurate identification of the individual. The paper analyses e-passport designs based on face, fingerprint, palm print and iris biometrics. It also provides a cryptographic security analysis of an e-passport using face, fingerprint, palm print and iris biometrics, intended to provide improved security in protecting the biometric information of the e-passport bearer.

  13. Hash Algorithm Research Based on the C# Language

    Institute of Scientific and Technical Information of China (English)

    杜玉兰; 赵磊

    2007-01-01

    Hash algorithms have long been widely used in computer science and occupy an important position in modern information security. The .Net Framework encapsulates a variety of classes related to cryptographic algorithms in the unified System.Security.Cryptography namespace, and C# is a primary language of the .Net development platform. This paper gives a fairly detailed introduction to the hash algorithm classes and their application and implementation in C#.

  14. A Fast Hash Algorithm for Chinese Word Segmentation

    Institute of Scientific and Technical Information of China (English)

    李向阳; 张亚非

    2004-01-01

    Chinese processing systems such as word-based search engines place high demands on segmentation speed. An efficient data structure for a Chinese electronic word list is designed that supports hash lookup on both the first character and the whole word. A fast hash segmentation algorithm is proposed; theoretical analysis shows that its average number of matches is below 1.08, better than current algorithms of the same kind.
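A rough, hypothetical sketch of this kind of structure: a dictionary bucketed by first character, driving greedy forward maximum matching. The helper names and the toy word list are invented; real systems use Chinese word lists and more refined indexes.

```python
def build_index(words):
    # Words bucketed by first character: the "hash on first character"
    # lookup that makes dictionary probes O(1) per candidate length.
    idx = {}
    for w in words:
        idx.setdefault(w[0], set()).add(w)
    return idx

def max_match(sentence, idx, max_len=4):
    # Greedy forward maximum matching driven by the first-char index;
    # characters with no dictionary entry fall out as single tokens.
    out, i = [], 0
    while i < len(sentence):
        cands = idx.get(sentence[i], set())
        for L in range(min(max_len, len(sentence) - i), 0, -1):
            if sentence[i:i + L] in cands or L == 1:
                out.append(sentence[i:i + L])
                i += L
                break
    return out

idx = build_index(["ab", "abc", "b"])
assert max_match("abcb", idx) == ["abc", "b"]
```

The first-character bucket prunes the candidate set before any whole-word comparison, which is where the low average match count comes from.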

  15. A novel perceptual image hash algorithm

    Institute of Scientific and Technical Information of China (English)

    赵玉鑫; 刘光杰; 戴跃伟; 王执铨

    2008-01-01

    A new perceptual (visual) hash scheme is proposed. A key is used to extract robust image features, which are then hashed. By introducing the image's distinctive optical characteristics into the key generation process, the security of the hash is strengthened. Experimental results show that the scheme is robust to processing such as JPEG compression, filtering and noise, with a hash bit error rate below 0.01, while remaining highly sensitive to malicious tampering of the image, with a bit error rate around 0.5. The scheme can be used for object-oriented robust authentication.
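The robustness/sensitivity criterion used in such evaluations is the bit error rate between two hashes; a minimal helper (hypothetical, not the paper's code) makes the two thresholds concrete.

```python
def bit_error_rate(h1: str, h2: str) -> float:
    # Fraction of differing bits between two equal-length binary hash
    # strings: near 0 for content-preserving processing (JPEG, noise),
    # near 0.5 for unrelated or maliciously tampered content.
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

assert bit_error_rate("10110100", "10110100") == 0.0   # identical content
assert bit_error_rate("10110100", "01001011") == 1.0   # full complement
```

Deciding authenticity then reduces to thresholding this rate, e.g. accept below 0.01 and reject around 0.5, as in the figures reported above.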

  16. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    Directory of Open Access Journals (Sweden)

    Michael S. Warren

    2014-01-01

    Full Text Available We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2¹⁸) processors. We present error analysis and scientific application results from a series of more than ten 69-billion (4096³) particle cosmological simulations, accounting for 4×10²⁰ floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.

  17. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    OpenAIRE

    Warren, Michael S.

    2014-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2¹⁸) processors. We present error analysis and scientific application results from a series of more than ten 69-billion (4096³) particle cosmological simulations, accounting for 4×10²⁰ floating point operations. These results include the first simul...

  18. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    OpenAIRE

    Warren, Michael S.

    2013-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k ($2^{18}$) processors. We present error analysis and scientific application results from a series of more than ten 69 billion ($4096^3$) particle cosmological simulations, accounting for $4 \\times 10^{20}$ floating point operations. These results inc...

  19. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    CERN Document Server

    Warren, Michael S

    2013-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k ($2^{18}$) processors. We present error analysis and scientific application results from a series of more than ten 69 billion ($4096^3$) particle cosmological simulations, accounting for $4 \\times 10^{20}$ floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.

  20. Video perceptual hashing fusing a computational model of the human visual system

    Institute of Scientific and Technical Information of China (English)

    欧阳杰; 高金花; 文振焜; 张盟; 刘朋飞; 杜以华

    2011-01-01

    Perceptual hashing is a function mapping multimedia digital presentations to a perceptual hash value, providing secure and reliable technical support in fields such as identification, retrieval and certification of multimedia content. Current algorithms fail to take human visual perceptual factors sufficiently into account, and in pursuing ever greater robustness most cannot guarantee their security. In this paper, a novel perceptual hashing algorithm is proposed. To simulate the multi-channel features of the human visual system, a cortex transform is combined with a computational model of the human visual system designed by jointly considering four visual perceptual factors during the feature extraction stage: the spatio-temporal contrast sensitivity function, eye movement, lightness adaptation, and intra-band and inter-band masking. Additionally, a diffusion mechanism is introduced into the preprocessing stage. The results suggest that the proposed method achieves a better trade-off between robustness and security under various content-preserving manipulations, and also reflects the consistency between subjective perception and objective evaluation.

  1. Embedded Platform for Automatic Testing and Optimizing of FPGA Based Cryptographic True Random Number Generators

    Directory of Open Access Journals (Sweden)

    M. Varchola

    2009-12-01

    Full Text Available This paper deals with an evaluation platform for cryptographic True Random Number Generators (TRNGs) based on the hardware implementation of statistical tests for FPGAs. It was developed to provide an automatic tool that helps speed up the TRNG design process and can provide new insights into TRNG behavior, as will be shown on a particular example in the paper. It enables testing the statistical properties of various TRNG designs under various working conditions on the fly. Moreover, the tests are suitable for embedding into cryptographic hardware products in order to recognize TRNG output of weak quality and thus increase robustness and reliability. The tests are fully compatible with the FIPS 140 standard and are implemented in VHDL as an IP core for vendor-independent FPGAs. A recent Flash-based Actel Fusion FPGA was chosen for preliminary experiments. The Actel version of the tests has an interface to Actel's CoreMP7 softcore processor, which is fully compatible with the industry-standard ARM7TDMI. Moreover, an identical test suite was implemented on the Xilinx Virtex 2 and 5 in order to compare the performance of the proposed solution with an already published one based on the same FPGAs. Clock frequencies 25% and 65% greater, respectively, were achieved while consuming almost equal resources of the Xilinx FPGAs. On top of that, the proposed FIPS 140 architecture is capable of processing one random bit per clock cycle, which results in 311.5 Mbps throughput on the Virtex 5 FPGA.

  2. Hash-and-Forward Relaying for Two-Way Relay Channel

    CERN Document Server

    Yilmaz, Erhan

    2011-01-01

    This paper considers a communication network comprised of two nodes, which have no mutual direct communication links, communicating two-way with the aid of a common relay node (RN), also known as separated two-way relay (TWR) channel. We first recall a cut-set outer bound for the set of rates in the context of this network topology assuming full-duplex transmission capabilities. Then, we derive a new achievable rate region based on hash-and-forward (HF) relaying where the RN does not attempt to decode but instead hashes its received signal, and show that under certain channel conditions it coincides with Shannon's inner-bound for the two-way channel [1]. Moreover, for binary adder TWR channel with additive noise at the nodes and the RN we provide a detailed capacity achieving coding scheme based on structure codes.

  3. MapReduce Based Personalized Locality Sensitive Hashing for Similarity Joins on Large Scale Data.

    Science.gov (United States)

    Wang, Jingjing; Lin, Chen

    2015-01-01

    Locality Sensitive Hashing (LSH) has been proposed as an efficient technique for similarity joins for high dimensional data. The efficiency and approximation rate of LSH depend on the number of generated false positive instances and false negative instances. In many domains, reducing the number of false positives is crucial. Furthermore, in some application scenarios, balancing false positives and false negatives is favored. To address these problems, in this paper we propose Personalized Locality Sensitive Hashing (PLSH), where a new banding scheme is embedded to tailor the number of false positives, false negatives, and the sum of both. PLSH is implemented in parallel using MapReduce framework to deal with similarity joins on large scale data. Experimental studies on real and simulated data verify the efficiency and effectiveness of our proposed PLSH technique, compared with state-of-the-art methods.
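The banding scheme that PLSH tunes can be sketched in a few lines: a signature is split into bands, each band is hashed whole, and two items become a candidate pair if any band collides. The signatures below are invented; the point is how the bands/rows split moves the false-positive/false-negative balance.

```python
def band_buckets(signature, bands, rows):
    # Split the signature into `bands` bands of `rows` values and hash
    # each band; items collide when any band hashes to the same bucket.
    assert len(signature) == bands * rows
    return {(b, hash(tuple(signature[b * rows:(b + 1) * rows])))
            for b in range(bands)}

def candidate_pair(sig_a, sig_b, bands, rows):
    # More bands / fewer rows -> more candidate pairs (fewer false
    # negatives, more false positives); fewer bands -> the reverse.
    # PLSH's "personalization" is choosing this split per application.
    return bool(band_buckets(sig_a, bands, rows) & band_buckets(sig_b, bands, rows))

sig_a = [3, 1, 4, 1, 5, 9]
sig_b = [3, 1, 4, 2, 6, 5]   # agrees with sig_a only on the first band
assert candidate_pair(sig_a, sig_b, bands=2, rows=3)       # one band matches
assert not candidate_pair(sig_a, sig_b, bands=1, rows=6)   # whole sig must match
```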

  4. HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database

    Science.gov (United States)

    Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.

    2016-07-01

    By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database for essential, up-to date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false ID's that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.

  5. HASH: the Hong Kong/AAO/Strasbourg H-alpha planetary nebula database

    CERN Document Server

    Parker, Quentin A; Frew, David J

    2016-01-01

    By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database for essential, up-to date information for all known Galactic PNe. We have attempted to: i) reliably remove PN mimics/false ID's that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.

  6. Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.

    Science.gov (United States)

    Berlin, Konstantin; Koren, Sergey; Chin, Chen-Shan; Drake, James P; Landolin, Jane M; Phillippy, Adam M

    2015-06-01

    Long-read, single-molecule real-time (SMRT) sequencing is routinely used to finish microbial genomes, but available assembly methods have not scaled well to larger genomes. We introduce the MinHash Alignment Process (MHAP) for overlapping noisy, long reads using probabilistic, locality-sensitive hashing. Integrating MHAP with the Celera Assembler enabled reference-grade de novo assemblies of Saccharomyces cerevisiae, Arabidopsis thaliana, Drosophila melanogaster and a human hydatidiform mole cell line (CHM1) from SMRT sequencing. The resulting assemblies are highly continuous, include fully resolved chromosome arms and close persistent gaps in these reference genomes. Our assembly of D. melanogaster revealed previously unknown heterochromatic and telomeric transition sequences, and we assembled low-complexity sequences from CHM1 that fill gaps in the human GRCh38 reference. Using MHAP and the Celera Assembler, single-molecule sequencing can produce de novo near-complete eukaryotic assemblies that are 99.99% accurate when compared with available reference genomes.
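The probabilistic overlap detection at the heart of MinHash can be illustrated on toy "reads". This is a didactic sketch, not MHAP: MHAP additionally shreds reads into k-mers and uses carefully chosen hash families, whereas here salted MD5 stands in for a family of hash functions.

```python
import hashlib

def minhash_signature(items, num_hashes=64):
    # One salted hash function per slot; each slot keeps the minimum
    # hash value over the set, so equal slots hint at shared items.
    sig = []
    for i in range(num_hashes):
        salt = str(i).encode()
        sig.append(min(int(hashlib.md5(salt + it.encode()).hexdigest(), 16)
                       for it in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    # The fraction of agreeing slots is an unbiased estimate of the
    # Jaccard similarity of the underlying sets.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

reads_a = {"ACGT", "CGTA", "GTAC", "TACG"}
reads_b = {"ACGT", "CGTA", "GGGG", "TTTT"}  # true Jaccard = 2/6
est = estimated_jaccard(minhash_signature(reads_a), minhash_signature(reads_b))
assert 0.0 <= est <= 1.0
assert estimated_jaccard(minhash_signature(reads_a),
                         minhash_signature(reads_a)) == 1.0
```

Comparing fixed-size signatures instead of full k-mer sets is what lets MHAP find overlaps among noisy long reads without all-pairs exact comparison.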

  7. ID-based authentication scheme combined with identity-based encryption with fingerprint hashing

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Current identity-based (ID) cryptosystems lack mechanisms for two-party authentication and for distributing the user's private key. Some ID-based signcryption schemes and ID-based authenticated key agreement protocols have been presented, but they cannot solve the problem completely. A novel ID-based authentication scheme based on ID-based encryption (IBE) and a fingerprint hashing method is proposed to resolve these difficulties in the IBE scheme: the message receiver authenticates the sender, and the trusted authority (TA) authenticates the users and transmits the private key to them. Furthermore, the scheme extends the application of fingerprint authentication from the terminal to the network and protects against fabrication of fingerprint data. The fingerprint authentication method consists of two factors: it combines a token key, for example a USB key, with the user's fingerprint hash by mixing a pseudo-random number with the fingerprint feature. The security and experimental efficiency meet the requirements of practical applications.

  8. ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2015-01-01

    primarily user-friendly web tool allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which users can download the source code, and as a web tool. It was developed using a hash table approach to extract perfect and imperfect repetitive regions from a (multi)FASTA file in linear time.
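The hash-table approach to repeat finding in linear time can be sketched as follows: hash every k-mer of the sequence to the list of positions where it occurs, and report buckets with more than one position. This is a minimal illustration of the technique, not ProGeRF's implementation (which also handles imperfect repeats).

```python
from collections import defaultdict

def find_repeats(seq, k=4):
    """Linear-time detection of perfect repeats: one pass to bucket every
    k-mer by its positions, one pass to keep buckets that occur twice+."""
    buckets = defaultdict(list)
    for i in range(len(seq) - k + 1):
        buckets[seq[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in buckets.items() if len(pos) > 1}
```

Each k-mer is hashed and appended in O(1) expected time, so the whole scan is linear in the sequence length, unlike naive pairwise substring comparison.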

  9. A Survey of RFID Authentication Protocols Based on Hash-Chain Method

    CERN Document Server

    Syamsuddin, Irfan; Chang, Elizabeth; Han, Song; 10.1109/ICCIT.2008.314

    2010-01-01

    Security and privacy are inherent problems in RFID communications, and several protocols have been proposed to overcome them. Hash chains are commonly employed by these protocols to improve security and privacy in RFID authentication. Although the protocols are able to provide specific solutions to RFID security and privacy problems, they fail to provide an integrated solution. This article is a survey that closely examines those protocols in terms of their focus and limitations.
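A representative hash-chain RFID scheme of the kind surveyed (in the style of the Ohkubo-Suzuki-Kinoshita protocol, used here as an illustrative stand-in) can be sketched as follows: the tag answers each query with a hash of its secret and then evolves the secret along a one-way chain, while the back-end server walks the same chain to identify it.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

class Tag:
    """RFID tag: responds with H(secret), then evolves secret <- G(secret).
    Because G is one-way, compromising the current secret does not link
    the tag to its past responses (forward privacy)."""
    def __init__(self, secret):
        self.secret = secret
    def respond(self):
        answer = h(b"H" + self.secret)       # response hash
        self.secret = h(b"G" + self.secret)  # advance the chain
        return answer

class Server:
    """Back-end: stores each tag's current secret and walks the chain
    forward a bounded number of steps to resynchronize after missed reads."""
    def __init__(self, tags):
        self.tags = dict(tags)  # tag_id -> current chain secret
    def identify(self, answer, max_steps=100):
        for tag_id, secret in self.tags.items():
            s = secret
            for _ in range(max_steps):
                if h(b"H" + s) == answer:
                    self.tags[tag_id] = h(b"G" + s)  # resynchronize
                    return tag_id
                s = h(b"G" + s)
        return None
```

The per-query chain walk on the server is exactly the scalability limitation several of the surveyed protocols try to address.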

  10. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

    Full Text Available A predictive-hashing based Denial-of-Service (DoS) resistant multicast authentication protocol was proposed based upon predictive hashing, one-way key chains, erasure codes, and distillation codes techniques [4, 5]. It was claimed that this new scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters including CPU times for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented an attack simulator which is platform-independent. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, the resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.

  11. Wave-atoms-based multipurpose scheme via perceptual image hashing and watermarking.

    Science.gov (United States)

    Liu, Fang; Fu, Qi-Kai; Cheng, Lee-Ming

    2012-09-20

    This paper presents a novel multipurpose scheme for content-based image authentication and copyright protection using a perceptual image hashing and watermarking strategy based on a wave atom transform. The wave atom transform is expected to outperform other transforms because it gains sparser expansion and better representation for texture than other traditional transforms, such as wavelet and curvelet transforms. Images are decomposed into multiscale bands with a number of tilings using the wave atom transform. Perceptual hashes are then extracted from the features of tiling in the third scale band for the purpose of content-based authentication; simultaneously, part of the selected hashes are designed as watermarks, which are embedded into the original images for the purpose of copyright protection. The experimental results demonstrate that the proposed scheme shows great performance in content-based authentication by distinguishing the maliciously attacked images from the nonmaliciously attacked images. Moreover, watermarks extracted from the proposed scheme also achieve high robustness against common malicious and nonmalicious image-processing attacks, which provides excellent copyright protection for images.

  12. Vulnerability of advanced encryption standard algorithm to differential power analysis attacks implemented on ATmega-128 microcontroller

    CSIR Research Space (South Africa)

    Mpalane, Kealeboga

    2016-09-01

    Full Text Available encryption standard (AES) cryptographic algorithm implementation in a microcontroller crypto-device against differential power analysis (DPA) attacks. ChipWhisperer capture hardware Rev2 tool was used to collect 1000 power traces for DPA. We observed...

  13. Data Segmentation Method of Grouping Consistent Hashing

    Institute of Scientific and Technical Information of China (English)

    武小年; 方堃; 杨宇洋

    2016-01-01

    To address the data integrity and load-balancing problems faced by distributed intrusion detection systems during data segmentation, a data segmentation method based on grouping consistent hashing is proposed. TCP stream reassembly is used to ensure data integrity. When the data are divided, an improved grouping consistent hash algorithm is used: nodes with similar computing capacity are placed in the same group, and each group is mapped alternately, in proportion to its computing capacity, onto the space of computed hash values. When data are assigned to a node, the node's load is monitored and dynamically adjusted. Simulation results show that the method achieves a higher detection rate, reduces the number of required virtual nodes and the memory consumption, and improves the load balance of the system.
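The weighted consistent-hashing idea underlying such a scheme can be sketched as a ring where each node receives virtual points in proportion to its capacity, so higher-capacity groups absorb proportionally more traffic. This is a generic illustration, not the paper's algorithm; node names and the replica factor are arbitrary.

```python
import bisect
import hashlib

def h32(key):
    """Stable 32-bit hash of a string key."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

class ConsistentHashRing:
    """Weighted consistent-hash ring: each node contributes virtual points
    proportional to its weight; a key maps to the first point clockwise."""
    def __init__(self, nodes, replicas_per_weight=50):
        # nodes: {name: weight}, weight ~ relative computing capacity
        self.ring = sorted((h32(f"{name}#{i}"), name)
                           for name, weight in nodes.items()
                           for i in range(weight * replicas_per_weight))
        self.points = [p for p, _ in self.ring]
    def node_for(self, key):
        idx = bisect.bisect(self.points, h32(key)) % len(self.ring)
        return self.ring[idx][1]
```

Adding or removing a node moves only the keys adjacent to its virtual points, which is why consistent hashing suits systems whose node set changes at runtime.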

  14. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
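One way hashing yields more scalable algorithms, as alluded to above, is spatial hashing: bucketing points by grid cell turns an O(n²) all-pairs neighbor search into O(n) bucket lookups. The sketch below is a generic serial illustration of that concept, not the LANL students' code.

```python
from collections import defaultdict

def build_spatial_hash(points, cell_size):
    """Spatial hash: bucket each point index by its integer cell coords."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // cell_size), int(y // cell_size))].append(idx)
    return grid

def neighbors_within(points, cell_size, idx):
    """Candidate neighbors of point idx: everything in the 3x3 block of
    cells around it, instead of scanning all n points."""
    grid = build_spatial_hash(points, cell_size)
    x, y = points[idx]
    cx, cy = int(x // cell_size), int(y // cell_size)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), ()))
    return [j for j in out if j != idx]
```

Because each point touches only its own cell, the construction parallelizes naturally across cores, which is the property that matters at exascale.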

  15. Reducing Computational Time of Basic Encryption and Authentication Algorithms

    Directory of Open Access Journals (Sweden)

    Sandeep Kumar,

    2011-04-01

    Full Text Available Today most data are sent via the Internet for sharing, so the trustworthiness of data files is decreasing. Building that trust requires more security and authentication, since weak security increases the likelihood of attacks on data. A digital signature of the data is one solution to this security problem, providing reliability, authenticity and accuracy. The most basic algorithms for security and authentication are RSA and DSA, which use keys of different sizes. This work presents the ECDSA algorithm to sign the data, uses a parameterized hash algorithm to authenticate the data, and compares the RSA and ECDSA methods with respect to time parameters.

  16. An Efficient Hybrid Algorithm for Mining Web Frequent Access Patterns

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li-qiang; LIU Da-xin

    2004-01-01

    We propose an efficient hybrid algorithm, WDHP, for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance: using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size smaller than a specified threshold, the algorithm loads the database into main memory by constructing a tree, and finds frequent patterns on the tree. Experiments show that WDHP outperforms the DHP algorithm and the main-memory-based algorithm WAP in execution efficiency.
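The DHP hash-table filtering technique that WDHP borrows can be sketched as follows: while counting single items, every item pair of each transaction is hashed into a small bucket table, and a pair is kept as a candidate only if both items are frequent and its bucket count reaches the support threshold. This is an illustration of the DHP idea, not WDHP itself; bucket collisions can only add false candidates, never drop true ones.

```python
import hashlib
from collections import Counter
from itertools import combinations

def bucket(pair, num_buckets):
    """Stable bucket index for an item pair (Python's built-in hash is
    randomized per process, so use a fixed hash for reproducibility)."""
    digest = hashlib.md5("|".join(pair).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_buckets

def dhp_candidate_pairs(transactions, min_support, num_buckets=101):
    """One pass counts items and hashes all pairs into buckets; candidate
    pairs whose bucket count is below min_support are pruned early."""
    item_counts = Counter()
    buckets = Counter()
    for t in transactions:
        item_counts.update(t)
        for pair in combinations(sorted(set(t)), 2):
            buckets[bucket(pair, num_buckets)] += 1
    frequent = sorted(i for i, c in item_counts.items() if c >= min_support)
    return [pair for pair in combinations(frequent, 2)
            if buckets[bucket(pair, num_buckets)] >= min_support]
```

The bucket table is tiny compared with the set of all pairs, which is what makes the candidate-generation pass cheap on large access logs.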

  17. Fast Continuous Weak Hashes in Strings and Its Applications

    Institute of Scientific and Technical Information of China (English)

    徐泽明; 侯紫峰

    2011-01-01

    In this paper, the fast continuous weak Hash (FCWH) in strings is proposed and its theoretical and practical applications are investigated. First, FCWH is conceptualized and a uniform construction framework for FCWH is formulated from an algebraic viewpoint. Second, the theoretical and experimental collision probabilities of FCWH are analyzed, generalizing and strengthening the related work by Michael O. Rabin. Finally, by generalizing the Karp-Rabin algorithm for the string-matching problem, FCWH is applied to solve the problem of sequential extraction of common substrings (SECS), and based on SECS, the express synchronization (X-Sync) protocol is designed to address real-time backup and retrieval of multiple versions of a given document in the current environment of broadband communication networks and cloud computing.
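The Karp-Rabin algorithm that FCWH generalizes rests on a rolling (continuous, weak) hash: the hash of each window is updated from the previous window in O(1), and only hash matches are verified directly. A minimal sketch:

```python
def rabin_karp_find(pattern, text, base=256, mod=(1 << 61) - 1):
    """Karp-Rabin string matching with a rolling polynomial hash.
    Returns the first index of pattern in text, or -1."""
    m = len(pattern)
    if m == 0 or m > len(text):
        return -1
    high = pow(base, m - 1, mod)  # weight of the outgoing character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    for i in range(len(text) - m + 1):
        # the hash is weak, so confirm a match with a direct comparison
        if t_hash == p_hash and text[i:i + m] == pattern:
            return i
        if i + m < len(text):
            # roll: drop text[i], append text[i + m]
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return -1
```

The same constant-time window update is what makes rolling hashes suitable for synchronization protocols: two hosts can hash every window of their copies cheaply and exchange only the digests to locate common substrings.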

  18. Dynamic DNS update security, based on cryptographically generated addresses and ID-based cryptography, in an IPv6 autoconfiguration context

    OpenAIRE

    Combes, Jean-Michel; Arfaoui, Ghada; LAURENT, Maryline

    2012-01-01

    International audience; This paper proposes a new security method for protecting signalling in the Domain Name System (DNS) architecture. That is, it secures the DNS update messages that bind the Fully Qualified Domain Name (FQDN) of an IPv6 node to the IPv6 address of the node owning this FQDN. The method is based on the use of Cryptographically Generated Addresses (CGA) and ID-Based Cryptography (IBC). Combining these two techniques allows the DNS server to check the ownership of the IPv6 a...

  19. Novel Authentication of Monitoring Data Through the use of Secret and Public Cryptographic Keys

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Jacob M.; Tolk, Keith; Tanner, Jennifer E.

    2014-07-21

    The Office of Nuclear Verification (ONV) is supporting the development of a piece of equipment to provide data authentication and protection for a suite of monitoring sensors as part of a larger effort to create an arms control technology toolkit. This device, currently called the Red Box, leverages the strengths of both secret and public cryptographic keys to authenticate, digitally sign, and pass along monitoring data, allowing for host review, and redaction if necessary, without loss of confidence in the authenticity of the data by the monitoring party. The design of the Red Box will allow for the addition and removal of monitoring equipment and can also verify that the data was collected by authentic monitoring equipment prior to signing the data and sending it to the host for review. The host will then forward the data to the monitor for review and inspection. This paper will highlight the progress to date of the Red Box development and explain the novel method of leveraging both symmetric and asymmetric (secret and public key) cryptography to authenticate data within a warhead monitoring regime.
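The general pattern of combining a symmetric key (sensor-to-device link) with a public-key signature (device-to-monitor) can be sketched as follows. This is a hedged, generic illustration only — the actual Red Box design is not described at this level in the abstract. An HMAC authenticates the sensor link, and a Lamport one-time hash-based signature stands in for whatever signature scheme the real device uses; all names here are hypothetical.

```python
import hashlib
import hmac
import os

def H(b):
    return hashlib.sha256(b).digest()

def lamport_keygen():
    """One-time signature keys: 256 pairs of secrets and their hashes."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def lamport_sign(msg, sk):
    """Reveal one secret per bit of SHA-256(msg)."""
    d = H(msg)
    return [sk[i][(d[i // 8] >> (i % 8)) & 1] for i in range(256)]

def lamport_verify(msg, sig, pk):
    d = H(msg)
    return all(H(sig[i]) == pk[i][(d[i // 8] >> (i % 8)) & 1]
               for i in range(256))

def sensor_send(data, shared_key):
    """Sensor authenticates its report with the symmetric (secret) key."""
    return data, hmac.new(shared_key, data, hashlib.sha256).digest()

def redbox_forward(data, tag, shared_key, sign_key):
    """Device checks the symmetric tag, then signs with the public-key
    scheme so the monitoring party can verify without the shared secret."""
    expected = hmac.new(shared_key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("sensor data failed symmetric authentication")
    return data, lamport_sign(data, sign_key)
```

The split mirrors the abstract's design goal: the shared secret proves the data came from authentic monitoring equipment, while the public-key signature lets the monitoring party verify authenticity after host review.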

  20. A Cryptographic SoC for Robust Protection of Secret Keys in IPTV DRM Systems

    Science.gov (United States)

    Lee, Sanghan; Yang, Hae-Yong; Yeom, Yongjin; Park, Jongsik

    The security level of an internet protocol television (IPTV) digital right management (DRM) system ultimately relies on protection of secret keys. Well known devices for the key protection include smartcards and battery backup SRAMs (BB-SRAMs); however, these devices could be vulnerable to various physical attacks. In this paper, we propose a secure and cost-effective design of a cryptographic system on chip (SoC) that integrates the BB-SRAM with a cell-based design technique. The proposed SoC provides robust safeguard against the physical attacks, and satisfies high-speed and low-price requirements of IPTV set-top boxes. Our implementation results show that the maximum encryption rate of the SoC is 633Mb/s. In order to verify the data retention capabilities, we made a prototype chip using 0.18µm standard cell technology. The experimental results show that the integrated BB-SRAM can reliably retain data with a 1.4µA leakage current.