WorldWideScience

Sample records for imply collision-free hash

  1. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2010-01-01

    ... value should not serve as an image for two distinct input messages and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with the digital signature schemes. Keyed hash ... important applications has also been analysed. This successful cryptanalysis of the standard hash functions has led the National Institute of Standards and Technology (NIST), USA, to initiate an international public competition to select the most secure and efficient hash function as the future hash function ... based MACs are reported. The goals of NIST's SHA-3 competition and its current progress are outlined.

  2. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

    Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.

  3. Universally composable anonymous Hash certification model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; MA JianFeng; SangJae MOON

    2007-01-01

    The ideal function is the fundamental component in the universally composable security model. However, the certification ideal function defined in the universally composable security model realizes identity authentication by binding the identity to messages and the signature, which fails to characterize the special security requirements of anonymous authentication with other kinds of certificates. Therefore, inspired by the work of Marten, an anonymous hash certification ideal function and a more universal certificate CA model are proposed in this paper. We define the security requirements and security notions for this model in the framework of universally composable security, and prove in the plain model (not in the random-oracle model) that these security notions can be achieved using combinations of a secure digital signature scheme, a symmetric encryption mechanism, a family of pseudorandom functions, and a family of one-way collision-free hash functions. Considering the limitations of the wireless environment and the computational ability of wireless devices, this anonymous hash certification ideal function is realized using symmetric primitives.

  4. Density Sensitive Hashing

    CERN Document Server

    Lin, Yue; Li, Cheng

    2012-01-01

    Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, e.g., Locality Sensitive Hashing (LSH), have proved effective for scalable high dimensional nearest neighbor search. Many hashing algorithms found their theoretic root in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called Density Sensitive Hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids the purely random projections selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better ...

  5. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Thomsen, Søren Steffen

    2009-01-01

    Cryptographic hash functions are commonly used in many different areas of cryptography: in digital signatures and in public-key cryptography, for password protection and message authentication, in key derivation functions, in pseudo-random number generators, etc. Recently, cryptographic hash ... well-known designs, and also some design and cryptanalysis in which the author took part. The latter includes a construction method for hash functions and four designs, of which one was submitted to the SHA-3 hash function competition, initiated by the U.S. standardisation body NIST. It also includes ...

  6. Density sensitive hashing.

    Science.gov (United States)

    Jin, Zhongming; Li, Cheng; Lin, Yue; Cai, Deng

    2014-08-01

    Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example, locality sensitive hashing (LSH), have proved effective for scalable high dimensional nearest neighbor search. Many hashing algorithms found their theoretic root in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids the purely random projections selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better performance compared to the state-of-the-art hashing approaches.
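
    As a rough sketch of the random-projection hashing that DSH refines: each code bit records which side of a random hyperplane a point falls on, so nearby points tend to share codes (parameters illustrative, not the authors' implementation):

      import numpy as np

      rng = np.random.default_rng(0)

      def make_planes(dim, n_bits):
          # one random hyperplane per output bit
          return rng.normal(size=(n_bits, dim))

      def lsh_code(planes, x):
          # bit i = which side of hyperplane i the point x falls on
          return tuple(int(b) for b in (planes @ x) > 0)

      planes = make_planes(dim=64, n_bits=16)
      x = rng.normal(size=64)
      y = x + 0.01 * rng.normal(size=64)          # a near neighbour of x
      print(lsh_code(planes, x) == lsh_code(planes, y))  # usually True

    Longer codes raise precision but cut recall, which is exactly the trade-off that the data-dependent projections of DSH aim to soften.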

  7. The hash function BLAKE

    CERN Document Server

    Aumasson, Jean-Philippe; Phan, Raphael; Henzen, Luca

    2014-01-01

    This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA-3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation. In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA-3 competition, and BLAKE. They review applications of cryptographic hashing, and they describe some basic notions such as security definitions ...

  8. GB-hash : Hash Functions Using Groebner Basis

    CERN Document Server

    Dey, Dhananjoy; Sengupta, Indranath

    2010-01-01

    In this paper we present an improved version of HF-hash, viz., GB-hash: Hash Functions Using Groebner Basis. In the case of HF-hash, the compression function consists of 32 polynomials with 64 variables, taken from the first 32 polynomials of the Hidden Field Equations (HFE) Challenge-1 by forcing the last 16 variables to 0. In GB-hash we have designed the compression function in such a way that these 32 polynomials with 64 variables form a minimal Groebner basis of the ideal generated by them, with respect to graded lexicographical (grlex) ordering as well as graded reverse lexicographical (grevlex) ordering. In this paper we prove that GB-hash is more secure than HF-hash as well as SHA-256. We have also compared the efficiency of our GB-hash with SHA-256 and HF-hash.

  9. HF-hash : Hash Functions Using Restricted HFE Challenge-1

    CERN Document Server

    Dey, Dhananjoy; Gupta, Indranath Sen

    2009-01-01

    Vulnerability of dedicated hash functions to various attacks has made the task of designing hash functions much more challenging. This provides us with strong motivation to design a new cryptographic hash function, viz. HF-hash. This is a hash function whose compression function is designed using the first 32 polynomials of HFE Challenge-1 with 64 variables, forcing the remaining 16 variables to zero. HF-hash gives a 256-bit message digest and is as efficient as SHA-256. It is secure against the differential attack proposed by Chabaud and Joux, as well as that of Wang et al., applied to SHA-0 and SHA-1.

  10. Sparse Hashing Tracking.

    Science.gov (United States)

    Zhang, Lihe; Lu, Huchuan; Du, Dandan; Liu, Luning

    2016-02-01

    In this paper, we propose a novel tracking framework based on a sparse and discriminative hashing method. Different from previous work, we treat object tracking as an approximate nearest neighbor search process in a binary space. Using the hash functions, the target templates and the candidates can be projected into the Hamming space, facilitating the distance calculation and tracking efficiency. First, we integrate both the inter-class and intra-class information to train multiple hash functions for better classification, while most classifiers in previous tracking methods usually neglect the inter-class correlation, which may cause inaccuracy. Then, we introduce sparsity into the hash coefficient vectors for dynamic feature selection, which is crucial for selecting discriminative and stable features that adapt to visual variations during the tracking process. Extensive experiments on various challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods.

  11. The Grindahl Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen

    2007-01-01

    In this paper we propose the Grindahl hash functions, which are based on components of the Rijndael algorithm. To make collision search sufficiently difficult, this design has the important feature that no low-weight characteristics form collisions, and at the same time it limits access to the state. We propose two concrete hash functions, Grindahl-256 and Grindahl-512, with claimed security levels with respect to collision, preimage and second preimage attacks of 2^128 and 2^256, respectively. Both proposals have lower memory requirements than other hash functions at comparable speeds ...

  12. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke;

    2014-01-01

    The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q = 1 + 1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works ... can be easily made cache-oblivious but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^Ω(b), thus matching the cache-aware bound, if the following two ...

  13. Collision-free speed model for pedestrian dynamics

    CERN Document Server

    Tordeux, Antoine; Seyfried, Armin

    2015-01-01

    We propose in this paper a minimal speed-based pedestrian model for which particle dynamics are intrinsically collision-free. The speed model is an optimal velocity function depending on the agent length (i.e., particle diameter), maximum speed and time gap parameters. The direction model is a weighted sum of exponential repulsion from the neighbors, calibrated by the repulsion rate and distance. The model's main features, such as the reproduction of empirical phenomena, are analysed by simulation. We point out that phenomena of self-organisation observable in force-based models and field studies can be reproduced by the collision-free model with low computational effort.
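
    One plausible reading of the speed model, reconstructed from the parameters named above (spacing s to the predecessor, agent length \ell, time gap T, maximum speed v_0) rather than quoted from the paper, is:

      v(s) = \min\left( v_0, \; \max\left( 0, \; \frac{s - \ell}{T} \right) \right)

    Since the speed drops to zero whenever the spacing shrinks to the agent length, overlaps, and hence collisions, are excluded by construction.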

  14. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2010-01-01

    We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash ...

  15. Cryptographic Hash functions - a review

    Directory of Open Access Journals (Sweden)

    Rajeev Sobti

    2012-03-01

    Cryptographic hash functions are used to achieve a number of security objectives. In this paper, we bring out the importance of hash functions, their various structures, design techniques, attacks, and recent developments in this field.

  16. Hashing on nonlinear manifolds.

    Science.gov (United States)

    Shen, Fumin; Shen, Chunhua; Shi, Qinfeng; van den Hengel, Anton; Tang, Zhenmin; Shen, Heng Tao

    2015-06-01

    Learning-based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes preserving the Euclidean similarity in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexities of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this paper, how to learn compact binary embeddings on their intrinsic manifolds is considered. In order to address the above-mentioned difficulties, an efficient, inductive solution to the out-of-sample data problem, and a process by which nonparametric manifold learning may be used as the basis of a hashing method are proposed. The proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. It is particularly shown that hashing on the basis of t-distributed stochastic neighbor embedding outperforms state-of-the-art hashing methods on large-scale benchmark data sets, and is very effective for image classification with very short code lengths. It is shown that the proposed framework can be further improved, for example, by minimizing the quantization error with learned orthogonal rotations without much computation overhead. In addition, a supervised inductive manifold hashing framework is developed by incorporating the label information, which is shown to greatly advance the semantic retrieval performance.

  17. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2008-01-01

    We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno, and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least $2^{26}$ and $2^{54}$, respectively.

  18. Cuckoo Hashing with Pages

    CERN Document Server

    Dietzfelbinger, Martin; Rink, Michael

    2011-01-01

    Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.
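
    The lookup discipline described above is easy to sketch: k candidate cells on a key's primary page plus one cell on a backup page, so most lookups touch a single page. A toy sketch (parameters and hash choices illustrative, not the authors' code):

      PAGE_SIZE, N_PAGES, K = 8, 64, 3
      pages = [[None] * PAGE_SIZE for _ in range(N_PAGES)]

      def primary_cells(key):
          # k candidate cells, all on the key's primary page
          page = hash(("page", key)) % N_PAGES
          return [(page, hash((i, key)) % PAGE_SIZE) for i in range(K)]

      def backup_cell(key):
          # one extra choice on a second page (costs a second page access)
          return (hash(("backup", key)) % N_PAGES,
                  hash(("cell", key)) % PAGE_SIZE)

      def insert(key):
          for page, cell in primary_cells(key) + [backup_cell(key)]:
              if pages[page][cell] in (None, key):
                  pages[page][cell] = key
                  return True
          return False   # a full implementation would move keys cuckoo-style

      def lookup(key):
          return any(pages[p][c] == key
                     for p, c in primary_cells(key) + [backup_cell(key)])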

  1. Collision-free path planning in multi-dimensional environments

    Directory of Open Access Journals (Sweden)

    Edwin Francis Cárdenas

    2011-05-01

    Reliable path planning and generation of collision-free trajectories have become areas of active research over the past decade, with robotics probably the most active field. This paper's main objective is to analyse the advantages and disadvantages of two of the most popular techniques used in collision-free trajectory generation in n-dimensional spaces. The importance of analysing such techniques within a generalised framework is evident, as path planning is used in a variety of fields such as drug design, computer animation, artificial intelligence and, of course, robotics. The review provided in this paper starts by drawing a historical map of path planning and the techniques used in its early stages. The main concepts involved in artificial potential fields and probabilistic roadmaps will be addressed, as these are the most influential methods and have been widely used in the specialised literature.

  2. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    OpenAIRE

    Marwah Almasri; Khaled Elleithy; Abrar Alajlan

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot ...

  3. Hardware design for Hash functions

    Science.gov (United States)

    Lee, Yong Ki; Knežević, Miroslav; Verbauwhede, Ingrid M. R.

    Due to its key cryptographic and operational features, such as the one-way property, high speed and a fixed output size independent of input size, the hash algorithm is one of the most important cryptographic primitives. A critical drawback of most cryptographic algorithms is the large computational overhead. This becomes more critical as the amount of data to process or communicate keeps increasing. In many cases, proper use of a hash algorithm reduces the computational overhead. Digital signature generation and message authentication are the most common applications of hash algorithms. The increasing data size also motivates hardware designers to design a throughput-optimal architecture for a given hash algorithm. In this chapter, some popular hash algorithms and their cryptanalysis are briefly introduced, and a design methodology for throughput-optimal architectures of MD4-based hash algorithms is described in detail.

  4. Proposals for iterated hash functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2006-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned ...

  5. Proposals for Iterated Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned ...

  6. DEVELOPMENT AND IMPLEMENTATION OF HASH FUNCTION FOR GENERATING HASHED MESSAGE

    Directory of Open Access Journals (Sweden)

    Amir Ghaeedi

    2016-09-01

    Steganography is a method of sending confidential information in a way that the existence of the channel in this communication remains secret. A collaborative approach between steganography and digital signatures provides highly secure hidden data. Unfortunately, there is a wide variety of attacks that affect the quality of image steganography. Two issues that need to be addressed are the large size of the ciphered data in digital signatures and high bandwidth. The aim of the research is to propose a new method for producing a dynamic hashed message algorithm in a digital signature, which is then embedded into an image, enhancing the robustness of image steganography with reduced bandwidth. A digital signature with a smaller hash size than other hash algorithms was developed for authentication purposes. A hash function is used in the digital signature generation. The encoder function encodes the hashed message to generate the digital signature, which is then embedded into an image to form a stego-image. To enhance the robustness of the digital signature, we compress it, encode it, or perform both operations before embedding the data into the image. The algorithm is also computationally efficient: for messages smaller than 1600 bytes, the hashed file is up to 8.51% smaller than the original.
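
    As a rough sketch of the hash-then-embed pipeline described above, with stand-ins for the paper's components: SHA-256 replaces the custom hash, the signing and compression steps are omitted, and plain least-significant-bit substitution plays the embedder; all names here are illustrative:

      import hashlib
      import numpy as np

      def embed_digest(pixels, message):
          # hash the message, then write the 256 digest bits into pixel LSBs
          digest = hashlib.sha256(message).digest()            # 32-byte hash
          bits = np.unpackbits(np.frombuffer(digest, np.uint8))
          flat = pixels.flatten()                              # copy of the image
          flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
          return flat.reshape(pixels.shape)

      def extract_digest(pixels, n_bytes=32):
          bits = pixels.flatten()[:n_bytes * 8] & 1
          return np.packbits(bits).tobytes()

      img = np.zeros((16, 16), dtype=np.uint8)   # dummy cover image (>= 256 px)
      stego = embed_digest(img, b"hello")
      assert extract_digest(stego) == hashlib.sha256(b"hello").digest()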

  7. Hashing, Randomness and Dictionaries

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    ... time and memory space. To some extent we also consider lower bounds, i.e., we attempt to show limitations on how efficient algorithms are possible. A central theme in the thesis is randomness. Randomized algorithms play an important role, in particular through the key technique of hashing. Additionally ... algorithms community. We work (almost) exclusively with a model, a mathematical object that is meant to capture essential aspects of a real computer. The main model considered here (and in most of the literature on dictionaries) is a unit-cost RAM with a word size that allows a set element to be stored in one word. We consider several variants of the dictionary problem, as well as some related problems. The problems are studied mainly from an upper bound perspective, i.e., we try to come up with algorithms that are as efficient as possible with respect to various computing resources, mainly computation ...

  8. On the Security of Multivariate Hash Functions

    Institute of Scientific and Technical Information of China (English)

    LUO Yi-yuan; LAI Xue-jia

    2009-01-01

    Multivariate hash functions are a type of hash function whose compression function is explicitly defined as a sequence of multivariate equations. Billet et al. designed the hash function MQ-HASH and Ding et al. proposed a similar construction. In this paper, we analyze the security of multivariate hash functions and conclude that low-degree multivariate functions such as MQ-HASH are neither pseudo-random nor unpredictable. There may be trivial collisions and fixed-point attacks if the parameters of the compression function are not carefully chosen. They are also not computation-resistant, which makes MAC forgery easy.

  9. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot’s wheels), and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one and two robots, while avoiding obstacles of different shapes and sizes.

  10. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one and two robots, while avoiding obstacles of different shapes and sizes.

  11. The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases

    Directory of Open Access Journals (Sweden)

    A.T. Akinwale

    2009-05-01

    In this work, an attempt is made to select three good hash functions which uniformly distribute hash values, permute their internal states, and allow the input bits to generate different output bits. These functions are used in different levels of hash tables coded in the Java programming language, and a sizeable number of data records serve as primary data for testing the performance. The results show that a two-level hash table with three different hash functions gives superior performance over a one-level hash table with two hash functions or a zero-level hash table with one function, in terms of reducing key conflicts and enabling quick lookup of a particular element. The results help reduce the complexity of the join operation in a query language from O(n^2) to O(1) by placing larger query results, if any, in multilevel hash tables with multiple hash functions, generating shorter query results.
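
    A minimal sketch of the multilevel idea, with a different hash function per level and an overflow list for the rare keys that collide in both (illustrative Python, not the paper's Java code):

      class TwoLevelHashTable:
          def __init__(self, size=101):
              self.levels = [[None] * size, [None] * size]
              self.hashes = [
                  lambda k, m=size: hash(k) % m,
                  lambda k, m=size: hash(("salt", k)) % m,  # a second, different function
              ]
              self.overflow = []

          def put(self, key, value):
              for table, h in zip(self.levels, self.hashes):
                  slot = h(key)
                  if table[slot] is None or table[slot][0] == key:
                      table[slot] = (key, value)
                      return
              self.overflow.append((key, value))   # rare with good hash functions

          def get(self, key):
              for table, h in zip(self.levels, self.hashes):
                  entry = table[h(key)]
                  if entry is not None and entry[0] == key:
                      return entry[1]
              return dict(self.overflow).get(key)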

  12. Cross-Modality Hashing with Partial Correspondence

    OpenAIRE

    Gu, Yun; Xue, Haoyang; Yang, Jie

    2015-01-01

    Learning a hashing function for cross-media search is very desirable due to its low storage cost and fast query speed. However, data crawled from the Internet cannot always guarantee good correspondence among different modalities, which affects the learning of the hashing function. In this paper, we focus on cross-modal hashing with partially corresponded data. The data without full correspondence are put to use to enhance the hashing performance. Experiments on the Wiki and NUS-WIDE datasets demonstrate ...

  13. Cryptographic hash functions. Trends and challenges

    Directory of Open Access Journals (Sweden)

    Rodica Tirtea

    2009-10-01

    Hash functions are important in cryptography due to their use in data integrity and message authentication. Different cryptographic implementations rely on the performance and strength of hash functions to answer the need for integrity and authentication. This paper gives an overview of cryptographic hash functions used or evaluated today. Hash functions selected in the NESSIE and CRYPTREC projects are briefly presented. The SHA-3 selection initiative is also introduced.

  14. Authenticated hash tables

    DEFF Research Database (Denmark)

    Triandopoulos, Nikolaos; Papamanthou, Charalampos; Tamassia, Roberto

    2008-01-01

    Hash tables are fundamental data structures that optimally answer membership queries. Suppose a client stores n elements in a hash table that is outsourced to a remote server so that the client can save space or achieve load balancing. Authenticating the hash table functionality, i.e., verifying ...

  15. Attacks on hash functions and applications

    NARCIS (Netherlands)

    Stevens, Marc Martinus Jacobus

    2012-01-01

    Cryptographic hash functions compute a small fixed-size hash value for any given message. A main application is in digital signatures, which require that it must be hard to find collisions, i.e., two different messages that map to the same hash value. In this thesis we provide an analysis of the security ...

  16. A secured Cryptographic Hashing Algorithm

    CERN Document Server

    Mohanty, Rakesh; Bishi, Sukant kumar

    2010-01-01

    Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for a few decades. This message digest is unique and irreversible, and avoids all types of collisions for any given input string. The message digest calculated by this algorithm is propagated in the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high-level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly being used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm th...
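
    The digest-propagate-recompute flow described above looks like this, with SHA-256 standing in for the authors' algorithm, which is not available as a library:

      import hashlib
      import hmac

      def send(message: bytes):
          # sender: transmit the message together with its digest
          return message, hashlib.sha256(message).hexdigest()

      def verify(message: bytes, digest: str) -> bool:
          # receiver: recompute the digest and compare the two values
          expected = hashlib.sha256(message).hexdigest()
          return hmac.compare_digest(expected, digest)  # constant-time compare

      msg, tag = send(b"payload")
      assert verify(msg, tag)
      assert not verify(msg + b"tampered", tag)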

  17. Spongent: A lightweight hash function

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knežević, Miroslav; Leander, Gregor

    2011-01-01

    This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches ...

  18. Model-based vision using geometric hashing

    Science.gov (United States)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.

  1. An Efficient Cryptographic Hash Algorithm (BSA)

    CERN Document Server

    Mukherjee, Subhabrata; Laha, Anirban

    2012-01-01

    Recent cryptanalytic attacks have exposed the vulnerabilities of some widely used cryptographic hash functions like MD5 and SHA-1. Attacks along the lines of differential attacks have been used to expose the weaknesses of several other hash functions like RIPEMD and HAVAL. In this paper we propose a new efficient hash algorithm that provides a near-random hash output and overcomes some of the earlier weaknesses. Extensive simulations and comparisons with some existing hash functions have been done to prove the effectiveness of BSA, an acronym formed from the names of the three authors.

  2. Supervised Discrete Hashing With Relaxation.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Sun, Zhenan; Tao, Dacheng; Tan, Tieniu

    2016-12-29

    Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called "supervised discrete hashing with relaxation" (SDHR) based on "supervised discrete hashing" (SDH). SDH uses ordinary least squares regression and traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and, therefore, more accurately measures the classification error of the regression model and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR.

  3. Spongent: A lightweight hash function

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knežević, Miroslav; Leander, Gregor

    2011-01-01

    Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To the best of our knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed.

  4. Algorithm Plans Collision-Free Path for Robotic Manipulator

    Science.gov (United States)

    Backes, Paul; Diaz-Calderon, Antonio

    2007-01-01

    An algorithm has been developed to enable a computer aboard a robot to autonomously plan the path of the manipulator arm of the robot to avoid collisions between the arm and any obstacle, which could be another part of the robot or an external object in the vicinity of the robot. In simplified terms, the algorithm generates trial path segments and tests each segment for potential collisions in an iterative process that ends when a sequence of collision-free segments reaches from the starting point to the destination. The main advantage of this algorithm, relative to prior such algorithms, is computational efficiency: the algorithm is designed to make minimal demands upon the limited computational resources available aboard a robot. This path-planning algorithm utilizes a modified version of the collision-detection method described in "Improved Collision-Detection Method for Robotic Manipulator" (NPO-30356), NASA Tech Briefs, Vol. 27, No. 3 (June 2003), page 72. The method involves utilization of mathematical models of the robot constructed prior to operation and similar models of external objects constructed automatically from sensory data acquired during operation. This method incorporates a previously developed method, known in the art as the method of oriented bounding boxes (OBBs), in which an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended in this method to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs). A multiresolution OBB/OBP representation of the robot and its manipulator arm and a multiresolution OBB representation of external objects (including terrain) are constructed and used in a process in which collisions at successively finer resolutions are detected through computational detection of overlaps ...

  5. Discriminative Hash Tracking With Group Sparsity.

    Science.gov (United States)

    Du, Dandan; Zhang, Lihe; Lu, Huchuan; Mei, Xue; Li, Xiaoli

    2016-08-01

    In this paper, we propose a novel tracking framework based on a discriminative supervised hashing algorithm. Different from previous methods, we treat tracking as a problem of object matching in a binary space. Using the hash functions, all target templates and candidates are mapped into compact binary codes, with which the target matching is conducted effectively. To be specific, we make full use of the label information to assign a compact and discriminative binary code to each sample. To deal with the out-of-sample case, multiple hash functions are trained to describe the learned binary codes, and group sparsity is introduced into the hash projection matrix to select representative and discriminative features dynamically, which is crucial for the tracker to adapt to target appearance variations. The whole training problem is formulated as an optimization function where the hash codes and hash function are learned jointly. Extensive experiments on various challenging image sequences demonstrate the effectiveness and robustness of the proposed tracker.

  6. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    ... item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  7. The Power of Simple Tabulation Hashing

    CERN Document Server

    Patrascu, Mihai

    2010-01-01

    Randomized algorithms are often enjoyed for their simplicity, but the hash functions used to yield the desired theoretical guarantees are often neither simple nor practical. Here we show that the simplest possible tabulation hashing provides unexpectedly strong guarantees. The scheme itself dates back to Carter and Wegman (STOC'77). Keys are viewed as consisting of c characters. We initialize c tables T_1, ..., T_c mapping characters to random hash codes. A key x=(x_1, ..., x_c) is hashed to T_1[x_1] xor ... xor T_c[x_c]. While this scheme is not even 4-independent, we show that it provides many of the guarantees that are normally obtained via higher independence, e.g., Chernoff-type concentration, min-wise hashing for estimating set intersection, and cuckoo hashing.
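
    The scheme described is concrete enough to transcribe directly; a minimal sketch for 32-bit keys viewed as c = 4 eight-bit characters (table width and seed are illustrative):

      import random

      random.seed(1)
      C, R = 4, 256        # 4 eight-bit characters cover a 32-bit key
      T = [[random.getrandbits(32) for _ in range(R)] for _ in range(C)]

      def tab_hash(key):
          # h(x) = T_1[x_1] xor ... xor T_c[x_c]
          h = 0
          for i in range(C):
              h ^= T[i][(key >> (8 * i)) & 0xFF]
          return h

      print(tab_hash(0xDEADBEEF))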

  8. Robust video hashing via multilinear subspace projections.

    Science.gov (United States)

    Li, Mu; Monga, Vishal

    2012-10-01

    The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.

  9. Compressing Neural Networks with the Hashing Trick

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to ...
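
    The core trick reads almost directly as code: a large "virtual" weight matrix whose entries live in a small shared parameter vector, addressed by a hash function. A sketch under assumed sizes (the paper additionally uses a sign hash, omitted here):

      import numpy as np

      K = 1000                              # real, shared parameters
      w = np.random.randn(K)                # what actually gets trained

      def virtual_weight(i, j):
          # entry (i, j) of the big matrix, never stored explicitly
          return w[hash((i, j)) % K]

      def layer(x, n_out):
          # forward pass of one hashed fully connected layer
          return np.array([sum(virtual_weight(i, j) * xj
                               for j, xj in enumerate(x))
                           for i in range(n_out)])

      y = layer(np.random.randn(32), n_out=8)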

  10. Hash function based secret sharing scheme designs

    CERN Document Server

    Chum, Chi Sing

    2011-01-01

    Secret sharing schemes create an effective method to safeguard a secret by dividing it among several participants. By using hash functions and the herding hashes technique, we first set up a (t+1, n) threshold scheme which is perfect and ideal, and then extend it to schemes for any general access structure. The schemes can further be set up as proactive or verifiable if necessary. The setup and recovery of the secret are efficient due to the fast calculation of the hash function. The proposed scheme is flexible because of the use of existing hash functions.

  11. On the Insertion Time of Cuckoo Hashing

    CERN Document Server

    Fountoulakis, Nikolaos; Steger, Angelika

    2010-01-01

    Cuckoo hashing is an efficient technique for creating large hash tables with high space utilization and guaranteed constant access times. There, each item can be placed in a location given by any one out of k different hash functions. In this paper we investigate further the random walk heuristic for inserting new items into the hash table in an online fashion. Provided that k > 2 and that the number of items in the table is below (but arbitrarily close to) the theoretically achievable load threshold, we show a polylogarithmic bound for the maximum insertion time that holds with high probability.
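
    As a toy illustration of the random walk heuristic analysed above (parameters and hash choices are ours, not the paper's): if every candidate cell of a key is occupied, evict a randomly chosen occupant and re-insert that key the same way.

      import random

      M, K = 16, 3
      table = [None] * M

      def cells(key):
          # the k candidate locations of a key
          return [hash((i, key)) % M for i in range(K)]

      def insert(key, max_steps=100):
          for _ in range(max_steps):
              free = [c for c in cells(key) if table[c] is None]
              if free:
                  table[random.choice(free)] = key
                  return True
              c = random.choice(cells(key))     # random-walk step: evict
              table[c], key = key, table[c]
          return False   # likely above the load threshold; rehash in practice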

  12. The Concept of Collision-Free Motion Planning Using a Dynamic Collision Map

    Directory of Open Access Journals (Sweden)

    Keum-Bae Cho

    2014-09-01

    In this paper, we address a new method for the collision-free motion planning of a mobile robot in dynamic environments. The motion planner is based on the concept of a conventional collision map (CCM), represented on the L (travel length) versus T (time) plane. We extend the CCM with dynamic information about obstacles, such as linear acceleration and angular velocity, providing useful information for estimating variation in the collision map. We first analyse the effect of the dynamic motion of an obstacle in the collision region. We then define the measure of collision dispersion (MOCD). The dynamic collision map (DCM) is generated by drawing the MOCD on the CCM. To evaluate a collision-free motion planner using the DCM, we extend the DCM with the MOCD, then draw the unreachable and deadlocked regions. Finally, we construct a collision-free motion planner using the information from the extended DCM.

  13. R-Hash: Hash Function Using Random Quadratic Polynomials Over GF(2)

    OpenAIRE

    Dhananjoy Dey; Noopur Shrotriya; Indranath Sengupta

    2013-01-01

    In this paper we describe an improved version of HF-hash [7], viz. R-hash: Hash Function Using Random Quadratic Polynomials Over GF(2). The compression function of HF-hash consists of 32 polynomials with 64 variables over GF(2), which were taken from the first 32 polynomials of HFE Challenge-1 by forcing the last 16 variables to 0. The mode of operation used in computing HF-hash was Merkle-Damgård. We have randomly selected 32 quadratic non-homogeneous polynomials having 64 variables over GF(2) in case o...

  14. Feature Hashing for Large Scale Multitask Learning

    CERN Document Server

    Weinberger, Kilian; Attenberg, Josh; Langford, John; Smola, Alex

    2009-01-01

    Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
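
    A minimal version of the hashing trick the bounds apply to, using an index hash plus a sign hash to keep inner products unbiased (dimension and hash choices illustrative; Python's built-in hash is stable only within one process, so a seeded hash would be used in practice):

      import numpy as np

      D = 2 ** 12                            # hashed feature dimension

      def hash_features(counts):
          # counts: dict mapping feature name -> value
          x = np.zeros(D)
          for feat, value in counts.items():
              idx = hash(("idx", feat)) % D
              sign = 1 if hash(("sign", feat)) % 2 else -1
              x[idx] += sign * value         # colliding features just add up
          return x

      x = hash_features({"word:hash": 3.0, "word:function": 1.0})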

  15. Hash3: Proofs, Analysis and Implementation

    DEFF Research Database (Denmark)

    Gauravaram, Praveen

    2009-01-01

    This report outlines the talks presented at the winter school on Hash3: Proofs, Analysis, and Implementation, an ECRYPT II event on hash functions. In general, speakers may not write down everything they say on their slides, so this report also outlines such findings following the understanding...

  16. ForBild: efficient robust image hashing

    Science.gov (United States)

    Steinebach, Martin; Liu, Huajian; Yannikos, York

    2012-03-01

    Forensic analysis of image sets today is most often done with the help of cryptographic hashes, due to their efficiency, their integration in forensic tools and their excellent reliability in the domain of false detection alarms. A drawback of these hash methods is their fragility to any image processing operation: even a simple re-compression with JPEG results in an image that is no longer detectable. A different approach is to apply image identification methods, allowing illegal images to be identified by, e.g., semantic models or face detection algorithms. Their common drawback is a high computational complexity and significant false alarm rates. Robust hashing is a well-known approach sharing characteristics of both cryptographic hashes and image identification methods. It is fast, robust to common image processing and features low false alarm rates. To verify its usability in forensic evaluation, in this work we discuss and evaluate the behavior of an optimized block-based hash.
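
    A generic block hash in this spirit, not ForBild itself, derives one bit per block by comparing the block mean against the global mean, so mild processing such as recompression flips few bits:

      import numpy as np

      def block_hash(img, grid=8):
          # img: 2-D greyscale array; returns a grid*grid-bit integer
          h, w = img.shape
          bh, bw = h // grid, w // grid
          img = img[:bh * grid, :bw * grid].astype(float)
          means = img.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
          bits = (means > img.mean()).flatten()
          return sum(int(b) << i for i, b in enumerate(bits))

      def distance(h1, h2):
          # small Hamming distance = probably the same image
          return bin(h1 ^ h2).count("1")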

  17. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  18. Uniform Hashing in Constant Time and Optimal Space

    DEFF Research Database (Denmark)

    Pagh, Anna Östlin; Pagh, Rasmus

    2008-01-01

    Many algorithms and data structures employing hashing have been analyzed under the uniform hashing assumption, i.e., the assumption that hash functions behave like truly random functions. Starting with the discovery of universal hash functions, many researchers have studied to what extent this th...

  19. Collision free path generation in 3D with turning and pitch radius constraints for aerial vehicles

    DEFF Research Database (Denmark)

    Schøler, F.; La Cour-Harbo, A.; Bisgaard, M.

    2009-01-01

    ... assumes that most of the aircraft structural and dynamic limitations can be formulated as a turn radius constraint, and that any two consecutive waypoints have line-of-sight. The generated trajectories are collision-free and also satisfy a constraint on the minimum admissible turning radius, while ...

  20. Towards a Collision-Free WLAN: Dynamic Parameter Adjustment in CSMA/E2CA

    Directory of Open Access Journals (Sweden)

    Bellalta Boris

    2011-01-01

    Carrier sense multiple access with enhanced collision avoidance (CSMA/ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA/ECA and the well-known CSMA/CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach collision-free operation. The paper also introduces CSMA/E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA/E2CA converges more quickly to collision-free operation and delivers higher performance than CSMA/ECA, especially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation.

  1. Hash functions and triangular mesh reconstruction*1

    Science.gov (United States)

    Hrádek, Jan; Kuchař, Martin; Skala, Václav

    2003-07-01

    Some applications use data formats (e.g. the STL file format) where a set of triangles is used to represent the surface of a 3D object, and it is necessary to reconstruct the triangular mesh with adjacency information. It is a lengthy process for large data sets, as the time complexity of this process is O(N log N), where N is the number of triangles. Triangular mesh reconstruction is a general problem and relevant algorithms can be used in GIS and DTM systems as well as in CAD/CAM systems. Many algorithms rely on space subdivision techniques, while hash functions offer a more effective solution to the reconstruction problem. Hash data structures are widely used throughout the field of computer science. The hash table can be used to speed up the process of triangular mesh reconstruction, but the speed strongly depends on the hash function's properties. Nevertheless, the design or selection of a hash function for data sets with unknown properties is a serious problem. This paper describes a new hash function, presents the properties obtained for large data sets, and discusses the validity of the reconstructed surface. Experimental results confirmed the theoretical considerations and the advantages of using hash functions for mesh reconstruction.
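
    A minimal sketch of the hash-based reconstruction idea, assuming STL-style input where adjacent triangles repeat vertex coordinates exactly (the paper's point is that the quality of the hash function over such coordinate data dominates performance):

    ```python
    # Sketch: hash vertex coordinates to canonical indices, then recover
    # adjacency by collecting the triangles that share each undirected edge.
    def reconstruct(triangles):
        vertex_ids = {}        # coordinate tuple -> canonical vertex index
        edge_owners = {}       # undirected edge -> triangle indices
        for t, tri in enumerate(triangles):
            ids = [vertex_ids.setdefault(v, len(vertex_ids)) for v in tri]
            for a, b in ((0, 1), (1, 2), (2, 0)):
                edge = tuple(sorted((ids[a], ids[b])))
                edge_owners.setdefault(edge, []).append(t)
        # two triangles are adjacent iff they appear under the same edge
        return vertex_ids, edge_owners

    tris = [((0,0,0), (1,0,0), (0,1,0)), ((1,0,0), (0,1,0), (1,1,0))]
    _, edges = reconstruct(tris)
    print([ts for ts in edges.values() if len(ts) == 2])  # shared edge -> [0, 1]
    ```

    Each triangle is processed with O(1) expected-time lookups, so the whole pass is linear in N when the hash function distributes the coordinate data well.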

  2. Protein sequence classification using feature hashing.

    Science.gov (United States)

    Caragea, Cornelia; Silvescu, Adrian; Mitra, Prasenjit

    2012-06-21

    Recent advances in next-generation sequencing technologies have resulted in an exponential increase in the rate at which protein sequence data are being acquired. The k-gram feature representation, commonly used for protein sequence classification, usually results in prohibitively high dimensional input spaces, for large values of k. Applying data mining algorithms to these input spaces may be intractable due to the large number of dimensions. Hence, using dimensionality reduction techniques can be crucial for the performance and the complexity of the learning algorithms. In this paper, we study the applicability of feature hashing to protein sequence classification, where the original high-dimensional space is "reduced" by hashing the features into a low-dimensional space, using a hash function, i.e., by mapping features into hash keys, where multiple features can be mapped (at random) to the same hash key, and "aggregating" their counts. We compare feature hashing with the "bag of k-grams" approach. Our results show that feature hashing is an effective approach to reducing dimensionality on protein sequence classification tasks.
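
    A minimal sketch of the approach (k, the table size and the underlying hash function are illustrative assumptions):

    ```python
    # Feature-hashing sketch for protein k-grams: the "bag of k-grams" is
    # mapped into a fixed low-dimensional count vector by hashing.
    import hashlib
    from collections import Counter

    def stable_hash(s: str) -> int:
        """Deterministic 64-bit hash (Python's built-in hash is salted)."""
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], 'big')

    def hashed_features(seq: str, k: int = 3, dim: int = 1024):
        vec = [0] * dim
        grams = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
        for gram, count in grams.items():
            vec[stable_hash(gram) % dim] += count   # colliding k-grams aggregate
        return vec

    x = hashed_features("MKVLAAGIVLHHH")
    print(len(x), sum(x))   # dimensionality fixed at 1024, total counts preserved
    ```

    The dimensionality is fixed at dim regardless of k, which is exactly what makes the downstream learning tractable for large k.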

  3. Robust hashing for 3D models

    Science.gov (United States)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); a calculation of the hash collision probability is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.

  4. Secure OFDM communications based on hashing algorithms

    Science.gov (United States)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.

  5. TH*: Scalable Distributed Trie Hashing

    Directory of Open Access Journals (Sweden)

    Aridj Mohamed

    2010-11-01

    Full Text Available In today's world of computers, dealing with huge amounts of data is not unusual. The need to distribute this data in order to increase its availability and the performance of accessing it is more urgent than ever. For these reasons it is necessary to develop scalable distributed data structures. In this paper we propose TH*, a distributed variant of the Trie Hashing data structure. First we propose Thsw, a new version of TH without Nil nodes in the digital tree (trie); this version is then adapted to a multicomputer environment. The simulation results reveal that TH* is scalable in the sense that it grows gracefully, one bucket at a time, to a large number of servers; TH* also offers good storage space utilization and high query efficiency, especially for ordering operations.

  6. A hashing technique using separate binary tree

    Directory of Open Access Journals (Sweden)

    Md Mehedi Masud

    2006-11-01

    Full Text Available It is always a major demand to provide efficient retrieving and storing of data and information in a large database system. For this purpose, many file organization techniques have already been developed, and much additional research is still going on. Hashing is one such technique. In this paper we propose an enhanced hashing technique that uses a hash table combined with a binary tree, searching on the binary representation of a portion of the primary key of the records associated with each index of the hash table. The paper contains numerous examples to describe the technique. The technique shows significant improvements in searching, insertion, and deletion for systems with huge amounts of data. The paper also presents a mathematical analysis of the proposed technique and comparative results.
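
    A minimal sketch of the combined structure, under the assumption of integer primary keys (the paper's exact layout and bit order may differ): the hash selects a slot, and within the slot a binary tree is navigated by the bits of the key:

    ```python
    # Hash table whose slots hold binary trees keyed on the bits of the key.
    class _Node:
        __slots__ = ('key', 'value', 'left', 'right')
        def __init__(self, key, value):
            self.key, self.value, self.left, self.right = key, value, None, None

    class HashTreeTable:
        def __init__(self, slots=256):
            self.slots = [None] * slots

        def _walk(self, key):
            i = key % len(self.slots)            # hash step picks a slot
            node, parent, bit = self.slots[i], None, 0
            while node is not None and node.key != key:
                parent = node
                node = node.left if (key >> bit) & 1 else node.right
                bit += 1                         # tree step follows key bits
            return i, parent, node, bit

        def insert(self, key, value):
            i, parent, node, bit = self._walk(key)
            if node:                             # key present: update in place
                node.value = value
            elif parent is None:
                self.slots[i] = _Node(key, value)
            elif (key >> (bit - 1)) & 1:
                parent.left = _Node(key, value)
            else:
                parent.right = _Node(key, value)

        def lookup(self, key):
            node = self._walk(key)[2]
            return node.value if node else None

    t = HashTreeTable()
    t.insert(42, 'a'); t.insert(298, 'b')   # 298 = 42 mod 256 -> same slot
    print(t.lookup(42), t.lookup(298))      # collisions resolved by the tree
    ```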

  7. Geometric hashing and object recognition

    Science.gov (United States)

    Stiller, Peter F.; Huber, Birkett

    1999-09-01

    We discuss a new geometric hashing method for searching large databases of 2D images (or 3D objects) to match a query built from geometric information presented by a single 3D object (or single 2D image). The goal is to rapidly determine a small subset of the images that potentially contain a view of the given object (or a small set of objects that potentially match the item in the image). Since this must be accomplished independently of the pose of the object, the objects and images, which are characterized by configurations of geometric features such as points, lines and/or conics, must be treated using a viewpoint invariant formulation. We are therefore forced to characterize these configurations in terms of their 3D and 2D geometric invariants. The crucial relationship between the 3D geometry and its 'residual' in 2D is expressible as a correspondence (in the sense of algebraic geometry). Computing a set of generating equations for the ideal of this correspondence gives a complete characterization of the view-independent relationships between an object and all of its possible images. Once a set of generators is in hand, it can be used to devise efficient recognition algorithms and to give an efficient geometric hashing scheme. This requires exploiting the form and symmetry of the equations. The result is a multidimensional access scheme whose efficiency we examine. Several potential directions for improving this scheme are also discussed. Finally, in a brief appendix, we discuss an alternative approach to invariants for generalized perspective that replaces the standard invariants by a subvariety of a Grassmannian. The advantage of this is that one can circumvent many annoying general position assumptions and arrive at invariant equations (in the Plücker coordinates) that are more numerically robust in applications.

  8. Cryptanalysis of Tav-128 hash function

    DEFF Research Database (Denmark)

    Kumar, Ashish; Sanadhya, Somitra Kumar; Gauravaram, Praveen

    2010-01-01

    Many RFID protocols use cryptographic hash functions for their security. The resource constrained nature of RFID systems forces the use of light weight cryptographic algorithms. Tav-128 is one such 128-bit light weight hash function proposed by Peris-Lopez et al. for a low-cost RFID tag...... a 64-bit permutation from 32-bit messages. This could be a useful light weight primitive for future RFID protocols....

  10. Optimal hash arrangement of tentacles in jellyfish

    Science.gov (United States)

    Okabe, Takuya; Yoshimura, Jin

    2016-06-01

    At first glance, the trailing tentacles of a jellyfish appear to be randomly arranged. However, close examination of medusae has revealed that the arrangement and developmental order of the tentacles obey a mathematical rule. Here, we show that medusa jellyfish adopt the best strategy to achieve the most uniform distribution of a variable number of tentacles. The observed order of tentacles is a real-world example of an optimal hashing algorithm known as Fibonacci hashing in computer science.
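
    The computer-science counterpart is easy to state concretely (a minimal sketch; the 64-bit constant is the standard approximation of 2^64 divided by the golden ratio):

    ```python
    # Fibonacci (multiplicative) hashing: multiply by the golden-ratio
    # constant, keep the low 64 bits, and take the top `bits` bits.
    GOLDEN = 0x9E3779B97F4A7C15   # round(2**64 / golden ratio)

    def fib_hash(key: int, bits: int) -> int:
        return ((key * GOLDEN) & 0xFFFFFFFFFFFFFFFF) >> (64 - bits)

    # Consecutive keys land far apart in the table:
    print([fib_hash(k, 4) for k in range(8)])
    ```

    Consecutive keys are spread nearly uniformly, which mirrors how successive tentacles appear at golden-angle spacing around the bell.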

  11. Robust hashing with local models for approximate similarity search.

    Science.gov (United States)

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.

  12. A Dynamic Hashing Algorithm Suitable for Embedded System

    Directory of Open Access Journals (Sweden)

    Li Jianwei

    2013-06-01

    Full Text Available As the amount of data grows, linear hashing produces many overflow blocks under data skew, while the index size of extendible hashing surges and wastes too much memory. Neither of these two typical dynamic hashing algorithms is therefore suitable for embedded systems, which have real-time requirements and very scarce memory resources. To solve this problem, this paper proposes a dynamic hashing algorithm suitable for embedded systems that combines characteristics of extendible hashing and linear hashing: it has no overflow buckets, and its index size is proportional to the number of adjustments.

  13. Instance-Aware Hashing for Multi-Label Image Retrieval.

    Science.gov (United States)

    Lai, Hanjiang; Yan, Pan; Shu, Xiangbo; Wei, Yunchao; Yan, Shuicheng

    2016-06-01

    Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods.

  14. SHP: Smooth Hypocycloidal Paths with Collision-Free and Decoupled Multi-Robot Path Planning

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-06-01

    Full Text Available Generating smooth and continuous paths for robots with collision avoidance, which avoid sharp turns, is an important problem in the context of autonomous robot navigation. This paper presents novel smooth hypocycloidal paths (SHP) for robot motion. It is integrated with collision-free and decoupled multi-robot path planning. An SHP diffuses (i.e., moves points along segments) the points of sharp turns in the global path of the map into nodes, which are used to generate smooth hypocycloidal curves that maintain a safe clearance in relation to the obstacles. These nodes are also used as safe points of retreat to avoid collision with other robots. The novel contributions of this work are as follows: (1) The proposed work is the first use of hypocycloid geometry to produce smooth and continuous paths for robot motion. A mathematical analysis of SHP generation in various scenarios is discussed. (2) The proposed work is also the first to consider the case of smooth and collision-free path generation for a load-carrying robot. (3) Traditionally, path smoothing and collision avoidance have been addressed as separate problems. This work proposes integrated and decoupled collision-free multi-robot path planning. 'Node caching' is proposed to improve efficiency. A decoupled approach with local communication enables the paths of robots to be dynamically changed. (4) A novel 'multi-robot map update' in case of dynamic obstacles in the map is proposed, such that robots update other robots about the positions of dynamic obstacles in the map. A timestamp feature ensures that all the robots have the most updated map. Comparison between SHP and other path smoothing techniques and experimental results in real environments confirm that SHP can generate smooth paths for robots and avoid collision with other robots through local communication.

  15. Exploiting the HASH Planetary Nebula Research Platform

    CERN Document Server

    Parker, Quentin A; Frew, David J

    2016-01-01

    The HASH (Hong Kong/ AAO/ Strasbourg/ Hα) planetary nebula research platform is a unique data repository with a graphical interface and SQL capability that offers the community powerful, new ways to undertake Galactic PN studies. HASH currently contains multi-wavelength images, spectra, positions, sizes, morphologies and other data whenever available for 2401 true, 447 likely, and 692 possible Galactic PNe, for a total of 3540 objects. An additional 620 Galactic post-AGB stars, pre-PNe, and PPN candidates are included. All objects were classified and evaluated following the precepts and procedures established and developed by our group over the last 15 years. The complete database contains over 6,700 Galactic objects including the many mimics and related phenomena previously mistaken or confused with PNe. Curation and updating currently occurs on a weekly basis to keep the repository as up to date as possible until the official release of HASH v1 planned in the near future.

  16. Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.

    Science.gov (United States)

    Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong

    2016-02-01

    Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables using any type of hashing algorithm. Meanwhile, multiple table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve the above problems, in this paper, we first regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into the multiple table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both

  17. Large-Scale Unsupervised Hashing with Shared Structure Learning.

    Science.gov (United States)

    Liu, Xianglong; Mu, Yadong; Zhang, Danchen; Lang, Bo; Li, Xuelong

    2015-09-01

    Hashing methods are effective in generating compact binary signatures for images and videos. This paper addresses an important open issue in the literature, i.e., how to learn compact hash codes by enhancing the complementarity among different hash functions. Most prior studies solve this problem either by adopting time-consuming sequential learning algorithms or by generating hash functions that are subject to deliberately designed constraints (e.g., enforcing hash functions to be orthogonal to one another). We analyze the drawbacks of past works and propose a new solution to this problem. Our idea is to decompose the feature space into a subspace shared by all hash functions and its complementary subspace. On one hand, the shared subspace, corresponding to the common structure across different hash functions, conveys most relevant information for the hashing task. Similar to data de-noising, irrelevant information is explicitly suppressed during hash function generation. On the other hand, in case the complementary subspace also contains useful information for specific hash functions, the final form of our proposed hashing scheme is a compromise between these two kinds of subspaces. To make hash functions not only preserve the local neighborhood structure but also capture the global cluster distribution of the whole data, an objective function incorporating spectral embedding loss, binary quantization loss, and shared subspace contribution is introduced to guide the hash function learning. We propose an efficient alternating optimization method to simultaneously learn both the shared structure and the hash functions. Experimental results on three well-known benchmarks CIFAR-10, NUS-WIDE, and a-TRECVID demonstrate that our approach significantly outperforms state-of-the-art hashing methods.

  18. Security analysis of robust perceptual hashing

    Science.gov (United States)

    Koval, Oleksiy; Voloshynovskiy, Sviatoslav; Beekhof, Fokko; Pun, Thierry

    2008-02-01

    In this paper we considered the problem of security analysis of robust perceptual hashing in authentication application. The main goal of our analysis was to estimate the amount of trial efforts of the attacker, who is acting within the Kerckhoffs security principle, to reveal a secret key. For this purpose, we proposed to use Shannon equivocation that provides an estimate of complexity of the key search performed based on all available prior information and presented its application to security evaluation of particular robust perceptual hashing algorithms.

  19. PROPERTIES AND APPROACH OF CRYPTOGRAPHIC HASH ALGORITHMS

    Directory of Open Access Journals (Sweden)

    T.LALITHA

    2010-06-01

    Full Text Available The importance of hash functions for protecting the authenticity of information is demonstrated. Applications include integrity protection, conventional message authentication and digital signatures. An overview of the basic building blocks of cryptographic hash functions is given, which leads to the study of the cryptographic properties of Boolean functions, and the information theoretic approach to authentication is described. An overview of the complexity theoretic definitions and constructions is also given. New criteria are defined, and functions satisfying new and existing criteria are studied.

  20. A Secure Hash Function MD-192 With Modified Message Expansion

    CERN Document Server

    Tiwari, Harshvardhan

    2010-01-01

    Cryptographic hash functions play a central role in cryptography. Hash functions were introduced in cryptology to provide message integrity and authentication. MD5, SHA1 and RIPEMD are among the most commonly used message digest algorithms. Recently proposed attacks on well-known and widely used hash functions motivate the design of new, stronger hash functions. In this paper a new approach is presented that produces a 192-bit message digest and uses a modified message expansion mechanism which generates more bit difference in each working variable to make the algorithm more secure. This hash function is collision resistant and assures good compression and preimage resistance.

  1. Collision-resistant hash function based on composition of functions

    CERN Document Server

    Ndoundam, Rene

    2011-01-01

    A cryptographic hash function is a deterministic procedure that compresses an arbitrary block of numerical data and returns a fixed-size bit string. There exist many hash functions: MD5, HAVAL, SHA, ... It was reported that these hash functions are no longer secure. Our work is focused on the construction of a new hash function based on composition of functions. The construction uses the NP-completeness of three-dimensional contingency tables and the relaxation of the constraint that a hash function should also be a compression function.

  2. Symmetry implies independence

    CERN Document Server

    Renner, R

    2007-01-01

    Given a quantum system consisting of many parts, we show that symmetry of the system's state, i.e., invariance under swappings of the subsystems, implies that almost all of its parts are virtually identical and independent of each other. This result generalises de Finetti's classical representation theorem for infinitely exchangeable sequences of random variables as well as its quantum-mechanical analogue. It has applications in various areas of physics as well as information theory and cryptography. For example, in experimental physics, one typically collects data by running a certain experiment many times, assuming that the individual runs are mutually independent. Our result can be used to justify this assumption.

  3. Conception and limits of robust perceptual hashing: towards side information assisted hash functions

    Science.gov (United States)

    Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Beekhof, Fokko; Pun, Thierry

    2009-02-01

    In this paper, we consider some basic concepts behind the design of existing robust perceptual hashing techniques for content identification. We show the limits of robust hashing from the communication perspectives as well as propose an approach that is able to overcome these shortcomings in certain setups. The consideration is based on both achievable rate and probability of error. We use the fact that most robust hashing algorithms are based on dimensionality reduction using random projections and quantization. Therefore, we demonstrate the corresponding achievable rate and probability of error based on random projections and compare with the results for the direct domain. The effect of dimensionality reduction is studied and the corresponding approximations are provided based on the Johnson-Lindenstrauss lemma. Side-information assisted robust perceptual hashing is proposed as a solution to the above shortcomings.
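
    A toy sketch of the random-projection-plus-quantization construction analyzed here (the dimensions, noise level and hash length are illustrative):

    ```python
    # Robust hashing by random projection + sign quantization: one bit per
    # projection, so small input perturbations flip only a few bits.
    import random

    random.seed(1)
    D, L = 64, 32                    # input dimension, hash length (illustrative)
    W = [[random.gauss(0, 1) for _ in range(D)] for _ in range(L)]

    def rp_hash(x):
        return [1 if sum(w_i * x_i for w_i, x_i in zip(w, x)) > 0 else 0
                for w in W]

    x = [random.gauss(0, 1) for _ in range(D)]
    noisy = [v + random.gauss(0, 0.05) for v in x]      # mild distortion
    diff = sum(a != b for a, b in zip(rp_hash(x), rp_hash(noisy)))
    print(diff)   # few bits flip, so the hash still identifies the content
    ```

    The Johnson-Lindenstrauss lemma mentioned above is what justifies such projections: distances are approximately preserved in the reduced space, which is why the Hamming distance between hashes tracks the similarity of the underlying content.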

  4. Single-step collision-free trajectory planning of biped climbing robots in spatial trusses.

    Science.gov (United States)

    Zhu, Haifei; Guan, Yisheng; Chen, Shengjun; Su, Manjia; Zhang, Hong

    For a biped climbing robot with dual grippers to climb poles, trusses or trees, feasible collision-free climbing motion is inevitable and essential. In this paper, we utilize the sampling-based algorithm, Bi-RRT, to plan single-step collision-free motion for biped climbing robots in spatial trusses. To deal with the orientation limit of a 5-DoF biped climbing robot, a new state representation along with corresponding operations including sampling, metric calculation and interpolation is presented. A simple but effective model of a biped climbing robot in trusses is proposed, through which the motion planning of one climbing cycle is transformed to that of a manipulator. In addition, the pre- and post-processes are introduced to expedite the convergence of the Bi-RRT algorithm and to ensure the safe motion of the climbing robot near poles as well. The piecewise linear paths are smoothed by utilizing cubic B-spline curve fitting. The effectiveness and efficiency of the presented Bi-RRT algorithm for climbing motion planning are verified by simulations.

  5. Virtual Velocity Vector-based Offline Collision-free Path Planning of Industrial Robotic Manipulator

    Directory of Open Access Journals (Sweden)

    Fan Ouyang

    2015-09-01

    Full Text Available Currently, industrial robotic manipulators are applied in many manufacturing applications. In most cases, an industrial environment is a cluttered and complex one where moving obstacles may exist and hinder the movement of robotic manipulators. Therefore, a robotic manipulator not only has to avoid moving obstacles, but also needs to fulfill the manufacturing requirements of smooth movement in fixed tact time. Thus, this paper proposes a virtual velocity vector-based algorithm of offline collision-free path planning for manipulator arms in a controlled industrial environment. The minimum distance between a manipulator and a moving obstacle can be maintained at an expected value by utilizing our proposed algorithm with established offline collision-free path-planning and trajectory generating systems. Furthermore, both the joint space velocity and the Cartesian space velocity of the generated time-efficient trajectory are continuous and smooth. In addition, the vector of detour velocity in a 3D environment is determined and depicted. Simulation results indicate that detour velocity can shorten the total task time as well as escape local minima effectively. In summary, our approach can fulfill both the safety requirements of collision avoidance of moving obstacles and the manufacturing requirements of smooth movement within fixed tact time in an industrial environment.

  6. Code Specialization for Memory Efficient Hash Tries

    NARCIS (Netherlands)

    Steindorfer, M.; Vinju, J.J.

    2014-01-01

    The hash trie data structure is a common part in standard collection libraries of JVM programming languages such as Clojure and Scala. It enables fast immutable implementations of maps, sets, and vectors, but it requires considerably more memory than an equivalent array-based data structure. This hi

  7. Dynamic External Hashing: The Limit of Buffering

    CERN Document Server

    Wei, Zhewei; Zhang, Qin

    2008-01-01

    Hash tables are one of the most fundamental data structures in computer science, in both theory and practice. They are especially useful in external memory, where their query performance approaches the ideal cost of just one disk access. Knuth gave an elegant analysis showing that with some simple collision resolution strategies such as linear probing or chaining, the expected average number of disk I/Os of a lookup is merely $1+1/2^{\Omega(b)}$, where each I/O can read a disk block containing $b$ items. Inserting a new item into the hash table also costs $1+1/2^{\Omega(b)}$ I/Os, which is again almost the best one can do if the hash table is entirely stored on disk. However, this assumption is unrealistic since any algorithm operating on an external hash table must have some internal memory (at least $\Omega(1)$ blocks) to work with. The availability of a small internal memory buffer can dramatically reduce the amortized insertion cost to $o(1)$ I/Os for many external memory data structures. In this paper we...

  8. Cryptanalysis of the LAKE Hash Family

    DEFF Research Database (Denmark)

    Biryukov, Alex; Gauravaram, Praveen; Guo, Jian

    2009-01-01

    We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attack...

  9. Accelerating read mapping with FastHASH.

    Science.gov (United States)

    Xin, Hongyi; Lee, Donghyuk; Hormozdiari, Farhad; Yedkar, Samihan; Mutlu, Onur; Alkan, Can

    2013-01-01

    With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type of hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
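
    For context, the baseline that FastHASH accelerates looks roughly like this (a simplified sketch; FastHASH's own contributions, Adjacency Filtering and Cheap K-mer Selection, prune the candidate locations this baseline verifies):

    ```python
    # Seed-and-extend read mapping over a k-mer hash index.
    def build_index(genome, k=12):
        index = {}
        for i in range(len(genome) - k + 1):
            index.setdefault(genome[i:i+k], []).append(i)
        return index

    def map_read(read, genome, index, k=12, max_mismatch=2):
        hits = set()
        for off in range(0, len(read) - k + 1, k):          # seed each k-mer
            for pos in index.get(read[off:off+k], ()):
                start = pos - off
                if 0 <= start <= len(genome) - len(read):
                    window = genome[start:start + len(read)]  # extend + verify
                    if sum(a != b for a, b in zip(read, window)) <= max_mismatch:
                        hits.add(start)
        return sorted(hits)

    g = "ACGTACGTTTACGGACGTACGTACAGTT"
    print(map_read("ACGTTTACGGAC", g, build_index(g)))   # -> [4]
    ```

    The expensive step is the verification of each candidate location, which is why filtering out spurious seed hits before extension yields the reported speedups.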

  10. Collision-free coordination of fiber positioners in multi-object spectrographs

    Science.gov (United States)

    Makarem, Laleh; Kneib, Jean-Paul; Gillet, Denis

    2016-07-01

    Many fiber-fed spectroscopic survey projects, such as DESI, PFS and MOONS, will use thousands of fiber positioners packed at a focal plane. To maximize observation time, the positioners need to move simultaneously and reach their targets swiftly. We have previously presented a motion planning method based on a decentralized navigation function for the collision-free coordination of the fiber positioners in DESI. In MOONS, the end effector of each positioner handling the fiber can reach the centre of its neighbours. There is therefore a risk of collision with up to 18 surrounding positioners in the chosen dense hexagonal configuration. Moreover, the length of the second arm of the positioner is almost twice the length of the first one. As a result, the geometry of the potential collision zone between two positioners is not limited to the extremity of their end-effector, but surrounds the second arm. In this paper, we modify the navigation function to take into account the larger collision zone resulting from the extended geometrical shape of the positioners. The proposed navigation function takes into account the configuration of the positioners as well as the constraints on the actuators, such as their maximal velocity and their mechanical clearance. Considering the fact that all the positioners' bases are fixed to the focal plane, collisions can occur locally and the risk of collision is limited to the 18 surrounding positioners. The decentralized motion planning and trajectory generation take advantage of this limited number of positioners and the locality of collisions, and hence significantly reduce the complexity of the algorithm to a linear order. The linear complexity ensures short computation time. In addition, the time needed to move all the positioners to their targets is independent of the number of positioners. These two key advantages of the chosen decentralization approach make this method a promising solution for the collision-free motion

  11. Security Analysis of Randomize-Hash-then-Sign Digital Signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2012-01-01

    At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar...... a variant of the RMX hash function mode and published this standard in the Special Publication (SP) 800-106.In this article, we first discuss a generic online birthday existential forgery attack of Dang and Perlner on the RMX-hash-then-sign schemes. We show that a variant of this attack can be applied...... functions, such as for the Davies-Meyer construction used in the popular hash functions such as MD5 designed by Rivest and the SHA family of hash functions designed by the National Security Agency (NSA), USA and published by NIST in the Federal Information Processing Standards (FIPS). We show an online...

  12. Explicit and Efficient Hash Families Suffice for Cuckoo Hashing with a Stash

    CERN Document Server

    Aumüller, Martin; Woelfel, Philipp

    2012-01-01

    It is shown that for cuckoo hashing with a stash as proposed by Kirsch, Mitzenmacher, and Wieder (2008) families of very simple hash functions can be used, maintaining the favorable performance guarantees: with stash size $s$ the probability of a rehash is $O(1/n^{s+1})$, and the evaluation time is $O(s)$. Instead of the full randomness needed for the analysis of Kirsch et al. and of Kutzelnigg (2010) (resp. $\Theta(\log n)$-wise independence for standard cuckoo hashing) the new approach even works with 2-wise independent hash families as building blocks. Both construction and analysis build upon the work of Dietzfelbinger and Woelfel (2003). The analysis, which can also be applied to the fully random case, utilizes a graph counting argument and is much simpler than previous proofs. As a byproduct, an algorithm for simulating uniform hashing is obtained. While it requires about twice as much space as the most space efficient solutions, it is attractive because of its simple and direct structure.
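
    A minimal sketch of cuckoo hashing with a stash (the multiplicative hash functions here are simple stand-ins, not the 2-wise independent constructions analyzed in the paper; all constants are illustrative):

    ```python
    # Two-table cuckoo hashing: evict along a path; a tiny stash absorbs
    # the rare insertions that fail after max_kicks evictions.
    import random

    class Cuckoo:
        def __init__(self, size=16, stash_size=4, max_kicks=32):
            self.t = [[None] * size, [None] * size]
            self.stash, self.stash_size, self.max_kicks = [], stash_size, max_kicks
            self.seeds = (0x9E3779B9, 0x85EBCA6B)

        def _h(self, i, key):
            return (key * self.seeds[i]) % len(self.t[i])

        def insert(self, key):
            for _ in range(self.max_kicks):
                for i in (0, 1):
                    slot = self._h(i, key)
                    if self.t[i][slot] is None:
                        self.t[i][slot] = key
                        return
                i = random.randrange(2)                     # evict and continue
                slot = self._h(i, key)
                key, self.t[i][slot] = self.t[i][slot], key
            if len(self.stash) < self.stash_size:
                self.stash.append(key)                      # stash absorbs failure
            else:
                raise RuntimeError("rehash needed")

        def lookup(self, key):
            return any(self.t[i][self._h(i, key)] == key for i in (0, 1)) \
                or key in self.stash

    c = Cuckoo()
    for k in (5, 9, 14, 23, 42):
        c.insert(k)
    print(all(c.lookup(k) for k in (5, 9, 14, 23, 42)))   # True
    ```

    Lookups inspect at most two table cells plus the O(s)-sized stash, which is the constant-time guarantee the paper preserves with simple hash families.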

  13. Analysis of a wavelet-based robust hash algorithm

    Science.gov (United States)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  14. Fair Micropayment System Based on Hash Chains

    Institute of Scientific and Technical Information of China (English)

    YANG Zongkai; LANG Weimin; TAN Yunmeng

    2005-01-01

    Micropayment schemes usually do not provide fairness, which means that either the customer or the merchant, or both, can cheat the other and gain a financial advantage by misusing the protocols. This paper proposes an efficient hash chain-based micropayment scheme, which is an offline, prepaid scheme that supports simpler divisibility of digital coins. In the execution of the payment protocol, the customer's disbursement and the merchant's submittal are performed step by step, so that neither party can gain additional profit even by breaking off the transaction. The hash chain can also be used for transactions with different merchants. Unlike other micropayment schemes, e.g., PayWord, no public-key operation is required, which improves efficiency. The scheme also provides restricted anonymity.
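
    The core mechanism is a one-way hash chain, as in PayWord (a minimal sketch; the names and chain length are illustrative):

    ```python
    # PayWord-style hash chain: the customer commits to the root, then pays
    # by revealing successive preimages; the merchant verifies by hashing.
    import hashlib

    H = lambda b: hashlib.sha256(b).digest()

    def make_chain(seed: bytes, n: int):
        """Returns [H^n(seed), H^(n-1)(seed), ..., seed]; element 0 is the root."""
        chain = [seed]
        for _ in range(n):
            chain.append(H(chain[-1]))
        return chain[::-1]

    def verify(root: bytes, coin: bytes, i: int) -> bool:
        """The i-th payment is valid iff hashing it i times yields the root."""
        for _ in range(i):
            coin = H(coin)
        return coin == root

    chain = make_chain(b"customer-secret", 10)
    assert verify(chain[0], chain[3], 3)   # reveal preimages one by one to pay
    ```

    Because H is one-way, the merchant cannot forge the next coin, and the customer cannot repudiate a revealed one; each payment step costs only hash evaluations, no public-key operations.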

  15. Locality-sensitive Hashing without False Negatives

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2016-01-01

    We consider a new construction of locality-sensitive hash functions for Hamming space that is covering in the sense that it is guaranteed to produce a collision for every pair of vectors within a given radius r. The construction is efficient in the sense that the expected number of hash collisions essentially matches the best possible bound when cr = log(n)/k, where n is the number of points in the data set and k ∊ N, and differs from it by at most a factor ln(4) in the exponent for general values of cr. As a consequence, LSH-based similarity search in Hamming space can avoid the problem of false negatives at little or no cost in efficiency.

  16. Hash Functions and Information Theoretic Security

    Science.gov (United States)

    Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  17. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasoor; Knudsen, Lars Ramkilde; Naderi, Majid

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic...... attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant....

  18. Implementation of cryptographic hash function SHA256 in C++

    Science.gov (United States)

    Shrivastava, Akash

    2012-02-01

    This abstract explains the implementation of the Secure Hash Algorithm SHA-256 in C++. SHA-2 is a strong hashing algorithm used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting initialization values to be used in the hash computation. The hash computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation determines the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1. SHA-2 consists of a set of four hash functions with digests of 224, 256, 384 or 512 bits. SHA-256 outputs a 256-bit digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of operations such as And, Or, Xor, Shr and Rot. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
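
    The behaviour described above can be checked against a standard library implementation (shown here with Python's hashlib rather than the abstract's C++ code):

    ```python
    # SHA-256 via a standard library: fixed 256-bit digest regardless of input.
    import hashlib

    digest = hashlib.sha256(b"abc").hexdigest()
    print(digest)              # 64 hex characters = 256 bits
    print(len(digest) * 4)     # 256
    # A one-bit change in the message flips roughly half of the digest bits:
    print(hashlib.sha256(b"abd").hexdigest())
    ```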

  19. Chaos-based hash function (CBHF) for cryptographic applications

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mohamed [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: mamin04@yahoo.com; Faragallah, Osama S. [Dept. of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952 (Egypt)], E-mail: osam_sal@yahoo.com; Abd El-Latif, Ahmed A. [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: ahmed_rahiem@yahoo.com

    2009-10-30

    As the core of cryptography, hashing is a basic technique for information security. Many hash functions generate the message digest through a randomizing process applied to the original message. A chaotic system likewise generates random-looking behavior, yet is completely deterministic. In this paper, an algorithm for one-way hash function construction based on chaos theory is introduced. Theoretical analysis and computer simulation indicate that the algorithm can satisfy all performance requirements of a hash function in an efficient and flexible manner and is secure against birthday attacks and meet-in-the-middle attacks, which makes it a good choice for data integrity or authentication.

  20. Hashing in computer science fifty years of slicing and dicing

    CERN Document Server

    Konheim, Alan G

    2009-01-01

    Written by one of the developers of the technology, Hashing is both a historical document on the development of hashing and an analysis of the applications of hashing in a society increasingly concerned with security. The material in this book is based on courses taught by the author, and key points are reinforced in sample problems and an accompanying instructor's manual. Graduate students and researchers in mathematics, cryptography, and security will benefit from this overview of hashing and the complicated mathematics that it requires.

  1. A MICROPAYMENT SCHEME BASED ON WEIGHTED MULTI DIMENSIONAL HASH CHAIN

    Institute of Scientific and Technical Information of China (English)

    Liu Yining; Hu Lei; Liu Heguo

    2006-01-01

    Hash chains and their generalization, Multi-Dimensional Hash Chains (MDHC), have been widely used in the design of micropayment schemes due to their simplicity and efficiency. In this letter, a more efficient variant of MDHC, called WMDHC, is proposed; it endows each hash value in the MDHC structure with a weight through a well-defined mapping. The average number of hash operations of WMDHC is log(2m/t), which is better than the log(m) of MDHC for the typically suggested parameter t = 7.

  2. Efficient nearest neighbors via robust sparse hashing.

    Science.gov (United States)

    Cherian, Anoop; Sra, Suvrit; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2014-08-01

    This paper presents a new nearest neighbor (NN) retrieval framework: robust sparse hashing (RSH). Our approach is inspired by the success of dictionary learning for sparse coding. Our key idea is to sparse code the data using a learned dictionary, and then to generate hash codes out of these sparse codes for accurate and fast NN retrieval. But, direct application of sparse coding to NN retrieval poses a technical difficulty: when data are noisy or uncertain (which is the case with most real-world data sets), for a query point, an exact match of the hash code generated from the sparse code seldom happens, thereby breaking the NN retrieval. Borrowing ideas from robust optimization theory, we circumvent this difficulty via our novel robust dictionary learning and sparse coding framework called RSH, by learning dictionaries on the robustified counterparts of the perturbed data points. The algorithm is applied to NN retrieval on both simulated and real-world data. Our results demonstrate that RSH holds significant promise for efficient NN retrieval against the state of the art.

  3. Novel Duplicate Address Detection with Hash Function.

    Science.gov (United States)

    Song, GuangJia; Ji, ZhenZhou

    2016-01-01

    Duplicate address detection (DAD) is an important component of the address resolution protocol (ARP) and the neighbor discovery protocol (NDP). DAD determines whether an IP address is in conflict with other nodes. In traditional DAD, the target address to be detected is broadcast through the network, which gives malicious nodes an opportunity to attack. A malicious node can send a spoofing reply to prevent the address configuration of a normal node, and thus, a denial-of-service attack is launched. This study proposes a hash method to hide the target address in DAD, which prevents an attack node from launching destination attacks. If the address of a normal node is identical to the detection address, then its hash value should be the same as the "Hash_64" field in the neighbor solicitation message. Consequently, DAD can be successfully completed. This process is called DAD-h. Simulation results indicate that address configuration using DAD-h has a considerably higher success rate when under attack compared with traditional DAD. Comparative analysis shows that DAD-h does not require third-party devices or considerable computing resources; it also provides a lightweight security resolution.
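
    A minimal sketch of the hiding step (the 'Hash_64' field name comes from the abstract; the hash function and address encoding here are assumptions):

    ```python
    # DAD-h idea: broadcast a 64-bit hash of the target address instead of
    # the address itself; only the true owner can recognize the match.
    import hashlib

    def hash_64(address: str) -> str:
        """Value carried in the neighbor solicitation 'Hash_64' field."""
        return hashlib.sha256(address.encode()).hexdigest()[:16]   # 64 bits

    target = "fe80::1c2a:3bff:fe4d:5e6f"
    ns_field = hash_64(target)          # broadcast the hash, not the address
    # A node owning `addr` answers only if its own hash matches the field:
    addr = "fe80::1c2a:3bff:fe4d:5e6f"
    print(hash_64(addr) == ns_field)    # True -> duplicate detected
    ```

    An attacker observing the solicitation learns only the hash, so it cannot target the address with a spoofed reply without first inverting the hash.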

  4. 9 CFR 319.303 - Corned beef hash.

    Science.gov (United States)

    2010-01-01

    9 CFR 319.303, Corned beef hash: (a) "Corned Beef Hash" is the semi-solid food product in the form of a compact mass which is prepared with beef, potatoes, curing agents, seasonings, and any of the...

  5. Structured trajectory planning of collision-free lane change using the vehicle-driver integration data

    Institute of Scientific and Technical Information of China (English)

    WANG JiangFeng; ZHANG Qian; ZHANG ZhiQ; YAN XueDong

    2016-01-01

    In this paper, the structured trajectory planning of lane changes in a collision-free road environment is studied and validated using vehicle-driver integration data, and a new trajectory planning model for lane changes is proposed based on a linear offset and a sine function to balance driver comfort and vehicle dynamics. The trajectory curvature of the proposed model is continuous without abrupt changes, and the zero curvature at the starting and end points of the lane change keeps the motion direction at the end points parallel to the lane line. Field experiments are designed to collect vehicle-driver integration data such as steering angle, brake pedal angle and accelerator pedal angle. Correlation analysis of the lane-changing maneuver and influencing variables is conducted to obtain the significant variables that can be used to calibrate and test the proposed model. The results demonstrate that vehicle velocity and Y-axis acceleration have significant effects on the lane-changing maneuver, so that the model recalibrated with samples from different velocity ranges and Y-axis accelerations fits better than the model calibrated with the sample trajectory alone. In addition, the fitted MAE of the proposed lane change trajectory decreases as the time span of the calibrating samples at the starting stage increases.

  6. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    Science.gov (United States)

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they are moving anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine the other objects' perceived size, velocity and direction. Based on these assumptions, a traffic priority language is needed for each robot, enabling it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.

  7. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-08-18

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based close-loop control and path planning, is highly promising for their autonomous operation in complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale scenarios.

  8. Collision-free motion planning for fiber positioner robots: discretization of velocity profiles

    CERN Document Server

    Makarem, Laleh; Gillet, Denis; Bleuler, Hannes; Bouri, Mohamed; Hörler, Philipp; Jenni, Laurent; Prada, Francisco; Sanchez, Justo

    2014-01-01

    The next generation of large-scale spectroscopic survey experiments such as DESI, will use thousands of fiber positioner robots packed on a focal plate. In order to maximize the observing time with this robotic system we need to move in parallel the fiber-ends of all positioners from the previous to the next target coordinates. Direct trajectories are not feasible due to collision risks that could undeniably damage the robots and impact the survey operation and performance. We have previously developed a motion planning method based on a novel decentralized navigation function for collision-free coordination of fiber positioners. The navigation function takes into account the configuration of positioners as well as their envelope constraints. The motion planning scheme has linear complexity and short motion duration (~2.5 seconds with the maximum speed of 30 rpm for the positioner), which is independent of the number of positioners. These two key advantages of the decentralization designate the method as a pr...

  9. Continuous Genetic Algorithms for Collision-Free Cartesian Path Planning of Robot Manipulators

    Directory of Open Access Journals (Sweden)

    Za'er S. Abo-Hammour

    2011-12-01

    Full Text Available A novel continuous genetic algorithm (CGA) along with a distance algorithm for solving the collision-free path planning problem for robot manipulators is presented in this paper. Given the desired Cartesian path to be followed by the manipulator, the robot configuration as described by the D-H parameters, and the stationary obstacles present in the workspace of the manipulator, the proposed approach will autonomously select a collision-free path for the manipulator that minimizes the deviation between the generated and the desired Cartesian path, satisfies the joint limits of the manipulator, and maximizes the minimum distance between the manipulator links and the obstacles. One of the main features of the algorithm is that it avoids the manipulator kinematic singularities due to the inclusion of the forward kinematics model in the calculations instead of the inverse kinematics. The new robot path planning approach has been applied to two different robot configurations, 2R and PUMA 560, as non-redundant manipulators. Simulation results show that the proposed CGA will always select the safest path avoiding obstacles within the manipulator workspace, regardless of whether there is a unique feasible solution, in terms of joint limits, or there are multiple feasible solutions. In addition, the generated path in Cartesian space will deviate only minimally from the desired one.

  10. Secure Collision-Free Frequency Hopping for OFDMA-Based Wireless Networks

    Directory of Open Access Journals (Sweden)

    Leonard Lightfoot

    2009-01-01

    Full Text Available This paper considers highly efficient antijamming system design using secure dynamic spectrum access control. First, we propose a collision-free frequency hopping (CFFH system based on the OFDMA framework and an innovative secure subcarrier assignment scheme. The CFFH system is designed to ensure that each user hops to a new set of subcarriers in a pseudorandom manner at the beginning of each hopping period, and different users always transmit on nonoverlapping sets of subcarriers. The CFFH scheme can effectively mitigate the jamming interference, including both random jamming and follower jamming. Moreover, it has the same high spectral efficiency as that of the OFDM system and can relax the complex frequency synchronization problem suffered by conventional FH. Second, we enhance the antijamming property of CFFH by incorporating the space-time coding (STC scheme. The enhanced system is referred to as STC-CFFH. Our analysis indicates that the combination of space-time coding and CFFH is particularly powerful in eliminating channel interference and hostile jamming interference, especially random jamming. Simulation examples are provided to illustrate the performance of the proposed schemes. The proposed scheme provides a promising solution for secure and efficient spectrum sharing among different users and services in cognitive networks.
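
    The collision-free hopping idea, where every user lands on a disjoint pseudorandom subcarrier set in each hopping period, can be sketched with a keyed shuffle. The SHA-256-seeded PRNG below is an illustrative stand-in for the paper's secure subcarrier assignment scheme, not its actual construction.

```python
import hashlib, random

def hop_assignment(shared_key: bytes, period: int, n_subcarriers: int, n_users: int):
    """Derive a pseudorandom permutation of all subcarriers for this
    hopping period and split it into disjoint per-user sets, so no two
    users collide while the hop pattern stays unpredictable to a jammer."""
    seed = hashlib.sha256(shared_key + period.to_bytes(8, "big")).digest()
    rng = random.Random(seed)                 # keyed PRNG (sketch only)
    carriers = list(range(n_subcarriers))
    rng.shuffle(carriers)
    per_user = n_subcarriers // n_users
    return {u: carriers[u * per_user:(u + 1) * per_user] for u in range(n_users)}

# Example: 3 users hopping over 12 subcarriers at hopping period 7.
print(hop_assignment(b"group-secret", 7, 12, 3))
```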

  11. Topological Quantum Hashing with the Icosahedral Group

    Science.gov (United States)

    Burrello, Michele; Xu, Haitan; Mussardo, Giuseppe; Wan, Xin

    2010-04-01

    We study an efficient algorithm to hash any single-qubit gate into a braid of Fibonacci anyons represented by a product of icosahedral group elements. By representing the group elements by braid segments of different lengths, we introduce a series of pseudogroups. Joining these braid segments in a renormalization group fashion, we obtain a Gaussian unitary ensemble of random-matrix representations of braids. With braids of length O(log²(1/ε)), we can approximate all SU(2) matrices to an average error ε with a cost of O(log(1/ε)) in time. The algorithm is applicable to generic quantum compiling.

  12. Distributed hash table theory, platforms and applications

    CERN Document Server

    Zhang, Hao; Xie, Haiyong; Yu, Nenghai

    2013-01-01

    This SpringerBrief summarizes the development of Distributed Hash Table in both academic and industrial fields. It covers the main theory, platforms and applications of this key part in distributed systems and applications, especially in large-scale distributed environments. The authors teach the principles of several popular DHT platforms that can solve practical problems such as load balance, multiple replicas, consistency and latency. They also propose DHT-based applications including multicast, anycast, distributed file systems, search, storage, content delivery network, file sharing and c
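
    As a minimal illustration of the key-to-node mapping at the heart of a DHT, here is a consistent-hashing ring sketch; the node names and the 8-byte SHA-1 truncation are demo choices, and real platforms add routing tables, replication, and failure handling on top.

```python
import bisect, hashlib

class ConsistentHashRing:
    """Minimal DHT-style ring: a key goes to the first node clockwise
    from its hash point, so adding or removing a node only remaps the
    keys in one arc of the ring."""
    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(s):
        return int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")

    def lookup(self, key):
        points = [p for p, _ in self.ring]
        i = bisect.bisect(points, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.lookup("some-file.mp4"))   # node responsible for this object
```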

  13. Side channel analysis of some hash based MACs: A response to SHA-3 requirements

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2008-01-01

    NIST's forthcoming Advanced Hash Standard (AHS) competition to select the SHA-3 hash function requires that each candidate hash function submission have at least one construction supporting the FIPS 198 HMAC application. As part of its evaluation, NIST is aiming to select either a candidate hash...
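
    For reference, FIPS 198 HMAC over an arbitrary underlying hash looks like the following with Python's standard library, with SHA-256 standing in for whichever candidate hash would be plugged in; the constant-time comparison touches on the kind of side-channel concern the paper analyzes.

```python
import hmac, hashlib

# FIPS 198 HMAC over an underlying hash function (SHA-256 here).
key = b"shared-secret-key"
msg = b"message to authenticate"
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison to avoid timing leaks.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
print(tag)
```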

  14. Optic flow-based collision-free strategies: From insects to robots.

    Science.gov (United States)

    Serres, Julien R; Ruffier, Franck

    2017-09-01

    Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate rotation at eye level in roll and yaw, respectively (i.e., they cancel rotational optic flow), in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behavior during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
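
    A classic corridor-centering strategy built on this cue balances the left and right translational optic flows; the one-line controller below is a schematic illustration with an invented gain, not a model taken from the survey.

```python
def centering_command(flow_left: float, flow_right: float, gain: float = 0.5):
    """Steer away from the side with the larger translational optic flow.
    Flow magnitude scales roughly with speed / distance, so a larger left
    flow means the left wall is closer and the command turns right."""
    return gain * (flow_left - flow_right)   # positive -> turn right

# Left wall closer (higher flow) -> positive (rightward) steering command.
print(centering_command(flow_left=2.0, flow_right=1.0))
```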

  15. On preserving robustness-false alarm tradeoff in media hashing

    Science.gov (United States)

    Roy, S.; Zhu, X.; Yuan, J.; Chang, E.-C.

    2007-01-01

    This paper discusses one of the important issues in generating a robust media hash. The robustness of a media hashing algorithm is primarily determined by three factors: (1) the robustness-false alarm tradeoff achieved by the chosen feature representation, (2) the accuracy of the bit extraction step, and (3) the distance measure used to measure similarity (dissimilarity) between two hashes. The robustness-false alarm tradeoff in feature space is measured by a similarity (dissimilarity) measure, and it defines a limit on the performance of the hashing algorithm. The distance measure used to compute the distance between the hashes determines how far this tradeoff in the feature space is preserved through the bit extraction step. Hence the bit extraction step is crucial in defining the robustness of a hashing algorithm. Although this is widely recognized as an important requirement, to our knowledge no work in the existing literature elucidates the efficacy of its algorithm in terms of improving this tradeoff compared with other methods. This paper specifically demonstrates the kind of robustness-false alarm tradeoff achieved by existing methods and proposes a hashing method that clearly improves this tradeoff.
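
    As a concrete example of the third factor, a typical distance measure between extracted hashes is normalized Hamming similarity over the bits, sketched below.

```python
def hamming_similarity(h1: bytes, h2: bytes) -> float:
    """Fraction of matching bits between two equal-length hashes; this is
    the kind of distance measure through which the feature-space tradeoff
    is (or is not) preserved after bit extraction."""
    assert len(h1) == len(h2)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
    return 1.0 - diff / (8 * len(h1))

print(hamming_similarity(b"\xf0\x0f", b"\xf0\xff"))  # 0.75
```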

  16. Secure fingerprint hashes using subsets of local structures

    Science.gov (United States)

    Effland, Tom; Schneggenburger, Mariel; Schuler, Jim; Zhang, Bingsheng; Hartloff, Jesse; Dobler, Jimmy; Tulyakov, Sergey; Rudra, Atri; Govindaraju, Venu

    2014-05-01

    In order to fulfill the potential of fingerprint templates as the basis for authentication schemes, one needs to design a hash function for fingerprints that achieves acceptable matching accuracy and simultaneously has provable security guarantees, especially for the parameter regimes needed to match fingerprints in practice. While existing matching algorithms can achieve impressive matching accuracy, they have no security guarantees. On the other hand, provably secure hash functions have poor matching accuracy and/or do not guarantee security when parameters are set to practical values. In this work, we present a secure hash function that has the best known tradeoff between security guarantees and matching accuracy. At a high level, our hash function is simple: we apply an off-the-shelf hash function on certain collections of minutia points (in particular, triplets of minutiae forming "minutia triangles"). However, to realize the potential of this scheme, we have to overcome certain theoretical and practical hurdles. In addition to the novel idea of combining clustering ideas from matching algorithms with ideas from the provable security of hash functions, we also apply an intermediate translation-invariant but rotation-variant map to the minutia points before applying the hash function. This latter idea helps improve the tradeoff between matching accuracy and matching efficiency.
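
    A minimal sketch of the triplet idea, hashing a canonicalized summary of each minutia triangle with an off-the-shelf hash, might look as follows. The quantization step is invented, and sorted side lengths are rotation-invariant, unlike the paper's deliberately rotation-variant map, so this is a simplification for the demo.

```python
import hashlib, itertools, math

def triplet_hashes(minutiae, quantum=5.0):
    """Hash every triangle of minutia points by its quantized, sorted
    side lengths (a translation-invariant summary), using an
    off-the-shelf hash on each canonical triple."""
    out = set()
    for p1, p2, p3 in itertools.combinations(minutiae, 3):
        sides = sorted([math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)])
        canon = tuple(round(s / quantum) for s in sides)   # tolerate small noise
        out.add(hashlib.sha256(repr(canon).encode()).hexdigest())
    return out

# Two prints are compared by the fraction of shared triangle hashes.
a = triplet_hashes([(0, 0), (10, 0), (0, 10), (12, 13)])
b = triplet_hashes([(1, 0), (11, 0), (1, 10)])             # translated subset
print(len(a & b))                                          # 1 shared triangle
```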

  17. Supervised hashing using graph cuts and boosted decision trees.

    Science.gov (United States)

    Lin, Guosheng; Shen, Chunhua; Hengel, Anton van den

    2015-11-01

    To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data.
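
    The two-step decomposition can be sketched in a few lines, assuming scikit-learn is available: stand-in binary codes take the place of step one's learned bits, and step two trains one standard classifier per bit (a single decision tree here, where the paper uses boosted trees).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))               # training features
codes = (X @ rng.normal(size=(16, 8)) > 0)   # stand-in for step 1's learned bits

# Step 2 of the decomposition: one standard binary classifier per hash bit.
hash_functions = [DecisionTreeClassifier(max_depth=4).fit(X, codes[:, b])
                  for b in range(codes.shape[1])]

def hash_point(x):
    """Evaluate all per-bit classifiers to obtain the binary code."""
    return np.array([h.predict(x.reshape(1, -1))[0] for h in hash_functions],
                    dtype=int)

print(hash_point(X[0]))
```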

  18. Source Coding Using Families of Universal Hash Functions

    OpenAIRE

    Koga, Hiroki

    2007-01-01

    This correspondence is concerned with new connections between source coding and two kinds of families of hash functions known as the families of universal hash functions and N-strongly universal hash functions, where N ≥ 2 is an integer. First, it is pointed out that such families contain classes of well-known source codes such as bin codes and linear codes. Next, performance of a source coding scheme using either of the two kinds of families is evaluated. An upper bound on the expectation ...
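
    A standard example of such a family is the Carter-Wegman construction, a minimal sketch of which follows; the prime and output range are demo choices.

```python
import random

# The classic Carter-Wegman family h_{a,b}(x) = ((a*x + b) mod p) mod m
# is universal: for x != y, Pr[h(x) == h(y)] <= 1/m over the choice of (a, b).
P = 2**61 - 1            # a Mersenne prime larger than any key

def sample_hash(m):
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = sample_hash(m=1024)
print(h(42), h(43))      # distinct keys collide with probability <= 1/1024
```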

  19. Hash-tree Anti-collision Algorithm

    Institute of Scientific and Technical Information of China (English)

    张虹; 韩磊; 马海波

    2007-01-01

    To address the low tag-identification efficiency of the EDFSA algorithm and the need of binary-tree search to detect the exact positions of collisions, a Hash-tree anti-collision algorithm is proposed. The key issues of the algorithm are analyzed, the algorithm strategy is determined, and the algorithm is designed. It is proven that the expected identification efficiency of the Hash-tree anti-collision algorithm lies between 36.8% and 100%, which is superior to the EDFSA algorithm. Simulation results show that the algorithm achieves a new breakthrough in identification efficiency, with especially clear advantages when identifying large numbers of tags.

  20. Distributed Hash Tables: Design and Applications

    Science.gov (United States)

    Chan, C.-F. Michael; Chan, S.-H. Gary

    The tremendous growth of the Internet and large-scale applications such as file sharing and multimedia streaming require the support of efficient search on objects. Peer-to-peer approaches have been proposed to provide this search mechanism scalably. One such approach is the distributed hash table (DHT), a scalable, efficient, robust and self-organizing routing overlay suitable for Internet-size deployment. In this chapter, we discuss how scalable routing is achieved under node dynamics in DHTs. We also present several applications which illustrate the power of DHTs in enabling large-scale peer-to-peer applications. Since wireless networks are becoming increasingly popular, we also discuss the issues of deploying DHTs and various solutions in such networks.

  1. Homomorphic Hashing for Sparse Coefficient Extraction

    CERN Document Server

    Kaski, Petteri; Nederlof, Jesper

    2012-01-01

    We study classes of Dynamic Programming (DP) algorithms which, due to their algebraic definitions, are closely related to coefficient extraction methods. DP algorithms can easily be modified to exploit sparseness in the DP table through memoization. Coefficient extraction techniques, on the other hand, are both space-efficient and parallelisable, but no tools have been available to exploit sparseness. We investigate the systematic use of homomorphic hash functions to combine the best of these methods and obtain improved space-efficient algorithms for problems including LINEAR SAT, SET PARTITION, and SUBSET SUM. Our algorithms run in time proportional to the number of nonzero entries of the last segment of the DP table, which presents a strict improvement over sparse DP. The last property also gives an improved algorithm for CNF SAT with sparse projections.
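
    The sparse-DP baseline that the paper improves on can be seen in a toy SUBSET SUM solver that stores only the achievable sums (the nonzero DP entries):

```python
def subset_sums(weights):
    """Sparse DP for SUBSET SUM: keep only the achievable sums
    instead of a full table over all possible sums."""
    reachable = {0}
    for w in weights:
        reachable |= {s + w for s in reachable}
    return reachable

# Space is proportional to the number of nonzero DP entries, the same
# quantity that bounds the paper's hashing-based running time.
print(17 in subset_sums([3, 7, 10, 25]))   # True: 7 + 10
```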

  2. Handwriting: Feature Correlation Analysis for Biometric Hashes

    Directory of Open Access Journals (Sweden)

    Ralf Steinmetz

    2004-04-01

    Full Text Available In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of the effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

  3. Chameleon Hashes Without Key Exposure Based on Factoring

    Institute of Scientific and Technical Information of China (English)

    Wei Gao; Xue-Li Wang; Dong-Qing Xie

    2007-01-01

    Chameleon hash is the main primitive to construct a chameleon signature scheme which provides non-repudiation and non-transferability simultaneously. However, the initial chameleon hash schemes suffer from the key exposure problem: non-transferability is based on an unsound assumption that the designated receiver is willing to abuse his private key regardless of its exposure. Recently, several key-exposure-free chameleon hashes have been constructed based on the RSA assumption and the SDH (strong Diffie-Hellman) assumption. In this paper, we propose a factoring-based chameleon hash scheme which is proven to enjoy all advantages of the previous schemes. In order to support it, we propose a variant Rabin signature scheme which is proven secure against a new type of attack in the random oracle model.
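
    The paper's scheme is factoring-based, but the trapdoor-collision property shared by all chameleon hashes is easiest to see in the classical discrete-log form CH(m, r) = g^m * h^r mod p. The tiny parameters below are for demonstration only, and the modular inverse via pow requires Python 3.8+.

```python
# Toy chameleon hash in the discrete-log form (illustrative variant,
# not the paper's factoring-based scheme).
q, p, g = 11, 23, 4          # g generates the order-q subgroup mod p
x = 7                        # trapdoor (the receiver's secret key)
h = pow(g, x, p)             # public key

def ch(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m2):
    """With the trapdoor x, find r2 such that ch(m2, r2) == ch(m, r):
    solve m + x*r = m2 + x*r2 (mod q)."""
    return (r + (m - m2) * pow(x, -1, q)) % q

m, r, m2 = 3, 5, 9
r2 = collide(m, r, m2)
assert ch(m, r) == ch(m2, r2)   # a collision, computable only with x
print(ch(m, r), (m2, r2))
```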

  4. Topological quantum gate construction by iterative pseudogroup hashing

    Science.gov (United States)

    Burrello, Michele; Mussardo, Giuseppe; Wan, Xin

    2011-02-01

    We describe the hashing technique for obtaining a fast approximation of a target quantum gate in the unitary group SU(2) represented by a product of the elements of a universal basis. The hashing exploits the structure of the icosahedral group (or other finite subgroups of SU(2)) and its pseudogroup approximations to reduce the search within a small number of elements. One of the main advantages of the pseudogroup hashing is the possibility of iterating to obtain more accurate representations of the targets in the spirit of the renormalization group approach. We describe the iterative pseudogroup hashing algorithm using the universal basis given by the braidings of Fibonacci anyons. An analysis of the efficiency of the iterations based on the random matrix theory indicates that the runtime and braid length scale poly-logarithmically with the final error, comparing favorably to the Solovay-Kitaev algorithm.

  5. Indexing Algorithm Based on Improved Sparse Local Sensitive Hashing

    Directory of Open Access Journals (Sweden)

    Yiwei Zhu

    2014-01-01

    Full Text Available In this article, we propose a new semantic hashing algorithm to address newly emerging problems such as the difficulty of similarity measurement for high-dimensional data. Based on locality-sensitive hashing and spectral hashing, we introduce sparse principal component analysis (SPCA) to reduce the dimension of the data set and exclude redundancy in the parameter list, thus making high-dimensional indexing and retrieval faster and more efficient. Meanwhile, we employ a Boosting algorithm from machine learning to determine the hashing threshold, so as to improve its adaptivity to real data and extend its range of application. According to experiments, this method not only performs satisfyingly on multimedia data sets such as images and texts, but also outperforms common indexing methods.
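
    The locality-sensitive ingredient can be illustrated with random-hyperplane hashing, a common LSH baseline (not the paper's SPCA-enhanced variant): nearby vectors agree on most hash bits.

```python
import numpy as np

rng = np.random.default_rng(1)
planes = rng.normal(size=(16, 64))          # 16 random hyperplanes in R^64

def lsh_code(x):
    """The sign of the projection on each hyperplane gives one hash bit;
    nearby vectors agree on most bits with high probability."""
    return (planes @ x > 0).astype(int)

x = rng.normal(size=64)
y = x + 0.05 * rng.normal(size=64)          # a close neighbor of x
print((lsh_code(x) == lsh_code(y)).mean())  # close to 1.0
```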

  6. Exploring Butane Hash Oil Use: A Research Note.

    Science.gov (United States)

    Miller, Bryan Lee; Stogner, John M; Miller, J Mitchell

    2016-01-01

    The practice of "dabbing" has seen an apparent upswing in popularity in recent months within American drug subcultures. "Dabbing" refers to the use of butane-extracted marijuana products that offer users much higher tetrahydrocannabinol content than flower cannabis through a single dosage process. Though considerably more potent than most marijuana strains in their traditional form, these butane hash oil products and the practice of dabbing are underexplored in the empirical literature, especially in prohibition states. A mixed-methods evaluation of a federally funded treatment program for drug-involved offenders identified a small sample (n = 6) of butane hash oil users and generated focus group interview data on the nature of butane hash oil, the practice of dabbing, and its effects. Findings inform discussion of additional research needed on butane hash oil and its implications for the ongoing marijuana legalization debate, including the diversity of users, routes of administration, and differences between retail/medical and prohibition states.

  7. Topological quantum gate construction by iterative pseudogroup hashing

    Energy Technology Data Exchange (ETDEWEB)

    Burrello, Michele; Mussardo, Giuseppe [International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste (Italy); Wan Xin, E-mail: burrello@sissa.it, E-mail: mussardo@sissa.it [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of)

    2011-02-15

    We describe the hashing technique for obtaining a fast approximation of a target quantum gate in the unitary group SU(2) represented by a product of the elements of a universal basis. The hashing exploits the structure of the icosahedral group (or other finite subgroups of SU(2)) and its pseudogroup approximations to reduce the search within a small number of elements. One of the main advantages of the pseudogroup hashing is the possibility of iterating to obtain more accurate representations of the targets in the spirit of the renormalization group approach. We describe the iterative pseudogroup hashing algorithm using the universal basis given by the braidings of Fibonacci anyons. An analysis of the efficiency of the iterations based on the random matrix theory indicates that the runtime and braid length scale poly-logarithmically with the final error, comparing favorably to the Solovay-Kitaev algorithm.

  8. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

    Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, however advanced, atomic operations have been proposed. These data structures allow for correct...... and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase...... misses respectively, leading to 21% fewer memory stall cycles. Our experiments show that our hash table scales close to linearly with the number of threads and outperforms, in throughput, other lock-free hash tables by 19%...

  9. An online algorithm for generating fractal hash chains applied to digital chains of custody

    CERN Document Server

    Bradford, Phillip G

    2007-01-01

    This paper gives an online algorithm for generating Jakobsson's fractal hash chains. Our new algorithm complements Jakobsson's fractal hash chain algorithm for preimage traversal, since his algorithm assumes the entire hash chain is precomputed and a particular list of Ceiling(log n) hash elements or pebbles is saved. Our online algorithm for hash chain traversal incrementally generates a hash chain of n hash elements without knowledge of n before it starts. For any n, our algorithm stores only the Ceiling(log n) pebbles which are precisely the inputs for Jakobsson's amortized hash chain preimage traversal algorithm. This compact representation is useful for generating, traversing, and storing a number of large digital hash chains on a small and constrained device. We also give an application using both Jakobsson's and our new algorithm applied to digital chains of custody for validating dynamically changing forensics data.
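
    A simplified sketch of the pebble layout (not Jakobsson's exact algorithm or the paper's online variant): generate the chain v_i = H(v_{i-1}) and keep roughly log n evenly spaced restart points.

```python
import hashlib, math

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, n: int):
    """Generate v_0 = seed, v_i = H(v_{i-1}) and keep about ceil(log2 n)
    evenly spaced 'pebbles' as restart points for preimage traversal
    (a simplified take on the fractal layout)."""
    pebbles, v = {}, seed
    stride = max(1, n // max(1, math.ceil(math.log2(n))))
    for i in range(n):
        if i % stride == 0:
            pebbles[i] = v
        v = H(v)
    return v, pebbles      # chain tip (public) and saved pebbles (private)

tip, pebbles = build_chain(b"seed", 1024)
print(len(pebbles), "pebbles stored for a chain of 1024 links")
```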

  10. Chaotic keyed hash function based on feedforward feedback nonlinear digital filter

    Science.gov (United States)

    Zhang, Jiashu; Wang, Xiaomin; Zhang, Wenfang

    2007-03-01

    In this Letter, we first construct an n-dimensional chaotic dynamic system named the feedforward feedback nonlinear filter (FFNF), and then propose a novel chaotic keyed hash algorithm using FFNF. In the hashing process, the original message is modulated into FFNF's chaotic trajectory by chaotic shift keying (CSK) mode, and the final hash value is obtained by coarse-grained quantization of the chaotic trajectory. To expedite the avalanche effect of the hash algorithm, a cipher block chaining (CBC) mode is introduced. Theoretical analysis and numerical simulations show that the proposed hash algorithm satisfies the requirements of a keyed hash function, and it is easy to implement with the filter structure.
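
    To make the ingredients concrete (modulating the message into a chaotic trajectory, iterating, coarse-grained quantization, CBC-style chaining), here is a toy keyed hash built on the logistic map rather than the paper's FFNF; it is an illustration only and has no security value.

```python
def chaotic_hash(message: bytes, key: float = 0.3731, rounds: int = 16) -> int:
    """Toy keyed hash in the spirit of chaos-based designs (NOT secure):
    each byte perturbs the state of a logistic map, blocks are chained
    CBC-style, and the coarse-grained trajectory forms the digest."""
    r, x, digest = 3.99, key, 0
    for block_start in range(0, len(message), 8):
        block = message[block_start:block_start + 8]
        for byte in block:
            x = (x + byte / 255.0) % 1.0 or 0.1    # modulate message into state
            for _ in range(rounds):                # iterate the chaotic map
                x = r * x * (1 - x)
        digest ^= int(x * 2**64)                   # quantize and chain
    return digest

print(hex(chaotic_hash(b"hello world")))
print(hex(chaotic_hash(b"hello worle")))   # tiny change, very different value
```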

  11. The collision-free trajectory planning for the space robot to capture a target based on the wavelet interpolation algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In research on path planning for manipulators with many degrees of freedom (DOF), most traditional methods share a common problem: their computational cost (time and memory space) increases exponentially as the DOF or the resolution of the discrete configuration space increases. This paper therefore presents collision-free trajectory planning for a space robot capturing a target, based on a wavelet interpolation algorithm. We take wavelet samples of the desired trajectory of the manipulator's end-effector and perform trajectory planning with the proposed wavelet interpolation formula, then derive joint vectors from the end-effector trajectory information based on the fixed-attitude-restrained generalized Jacobian matrix of multi-arm coordinated motion, so as to control the manipulator to capture a static body along the desired collision-free trajectory. The method overcomes the shortcomings of typical methods, and the desired end-effector trajectory can be any kind of complex nonlinear curve. The algorithm is simple and highly effective, and the resulting trajectory stays close to the desired one. In simulation, a planar dual-arm three-DOF space robot is used to demonstrate the proposed method and shows that the algorithm is feasible.

  12. Optimal hash functions for approximate closest pairs on the n-cube

    CERN Document Server

    Gordon, Daniel M; Ostapenko, Peter

    2008-01-01

    One way to find closest pairs in large datasets is to use hash functions. In recent years, locality-sensitive hash functions for various metrics have been given: projecting an n-cube onto k bits is a simple hash function that performs well. In this paper we investigate alternatives to projection. For various parameters, hash functions given by complete decoding algorithms for codes work better, and asymptotically, random codes perform better than projection.

  13. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs which ignore the relationship in a higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  14. Internet traffic load balancing using dynamic hashing with flow volume

    Science.gov (United States)

    Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.

    2002-07-01

    Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
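
    The static-hashing baseline that DHFV builds on is a one-liner: hash the flow's 5-tuple to a link index. This keeps every packet of a flow on the same link, preserving order, but by itself cannot balance load, which is what the flow-volume-driven reassignment adds.

```python
import hashlib

def pick_link(flow_5tuple: tuple, n_links: int) -> int:
    """Static hashing baseline: every packet of a flow hashes to the
    same link, preserving packet order within the flow. DHFV keeps this
    property but reassigns selected large flows when the load skews."""
    digest = hashlib.md5(repr(flow_5tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_links

flow = ("10.0.0.1", "10.0.0.2", 5000, 80, "TCP")
print(pick_link(flow, n_links=4))   # stable for every packet of this flow
```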

  15. Maximum Bipartite Matching Size And Application to Cuckoo Hashing

    CERN Document Server

    Kanizo, Yossi; Keslassy, Isaac

    2010-01-01

    Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
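
    A minimal d = 2 cuckoo table with a stash, to make the object of study concrete; the eviction policy below is simplified (production variants alternate eviction slots and bound the stash size):

```python
import hashlib

class CuckooWithStash:
    """d = 2 cuckoo hashing: every key has two candidate slots; inserts
    evict and relocate, and a key still homeless after max_kicks
    evictions goes into a small stash."""
    def __init__(self, size=8, max_kicks=32):
        self.table = [None] * size
        self.size, self.max_kicks, self.stash = size, max_kicks, []

    def _slots(self, key):
        d = hashlib.sha256(repr(key).encode()).digest()
        return (int.from_bytes(d[:4], "big") % self.size,
                int.from_bytes(d[4:8], "big") % self.size)

    def insert(self, key):
        for _ in range(self.max_kicks):
            a, b = self._slots(key)
            if self.table[a] is None:
                self.table[a] = key
                return
            if self.table[b] is None:
                self.table[b] = key
                return
            key, self.table[a] = self.table[a], key   # evict and retry
        self.stash.append(key)                        # overflow -> stash

t = CuckooWithStash()
for k in range(10):                # 10 keys into 8 slots: some must overflow
    t.insert(k)
print("stash holds", len(t.stash), "overflow keys")
```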

  16. 76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256

    Science.gov (United States)

    2011-03-02

    ... Hash Algorithm (SHA)-256 AGENCY: Department of Defense (DoD), General Services Administration (GSA... agencies about ways for the acquisition community to transition to Secure Hash Algorithm SHA-256. SHA-256... Hash Algorithm SHA-256'' in all correspondence related to this public meeting. FOR FURTHER...

  17. Final report for LDRD Project 93633 : new hash function for data protection.

    Energy Technology Data Exchange (ETDEWEB)

    Draelos, Timothy John; Dautenhahn, Nathan; Schroeppel, Richard Crabtree; Tolk, Keith Michael; Orman, Hilarie (PurpleStreak, Inc.); Walker, Andrea Mae; Malone, Sean; Lee, Eric; Neumann, William Douglas; Cordwell, William R.; Torgerson, Mark Dolan; Anderson, Eric; Lanzone, Andrew J.; Collins, Michael Joseph; McDonald, Timothy Scott; Caskey, Susan Adele

    2009-03-01

    The security of the widely-used cryptographic hash function SHA1 has been impugned. We have developed two replacement hash functions. The first, SHA1X, is a drop-in replacement for SHA1. The second, SANDstorm, has been submitted as a candidate to the NIST-sponsored SHA3 Hash Function competition.

  18. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, addressing the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from a single deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks into a unified image representation. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.

  19. XML Data Integrity Based on Concatenated Hash Function

    CERN Document Server

    Liu, Baolong; Yip, Jim

    2009-01-01

    Data integrity is fundamental to data authentication. A major problem for XML data authentication is that signed XML data can be copied to another document while keeping the signature valid. This is caused by the way XML data integrity is protected. Through investigation, the paper discovered that besides data content integrity, XML data integrity should also protect element location information and context referential integrity under fine-grained security situations. The aim of this paper is to propose a model for XML data integrity that considers XML data features. The paper presents an XML data integrity model named CSR (content integrity, structure integrity, context referential integrity) based on a concatenated hash function. XML data content integrity is ensured using an iterative hash process, structure integrity is protected by hashing an absolute path string from the root node, and context referential integrity is ensured by protecting context-related elements. The presented XML data integrity model can satisfy int...
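
    The core of the CSR idea, binding content to its location so that a copied element no longer verifies, can be sketched by hashing the absolute path together with the element value; the separator byte is an illustrative choice, not the paper's exact encoding.

```python
import hashlib

def element_hash(content: str, abs_path: str) -> str:
    """Bind an element's value to its location: hashing the absolute
    path together with the content means a signed element copied to a
    different position in the document no longer verifies."""
    h = hashlib.sha256()
    h.update(abs_path.encode())     # structure integrity: /root/.../node
    h.update(b"\x00")               # domain separator between path and value
    h.update(content.encode())      # content integrity
    return h.hexdigest()

print(element_hash("100.00", "/order/items/item[1]/price"))
print(element_hash("100.00", "/order/total"))   # same value, different hash
```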

  20. One-Time Password System with Infinite Nested Hash Chains

    Science.gov (United States)

    Eldefrawy, Mohamed Hamdy; Khan, Muhammad Khurram; Alghathbar, Khaled

    Hash chains have been used as OTP generators. Lamport hashes have an intensive computation cost and a chain length restriction. A solution for signature chains addressed this by involving public key techniques, which increased the average computation cost. Although a later idea reduced the user computation by sharing it with the host, it couldn't overcome the length limitation. The scheme proposed by Chefranov to eliminate the length restriction had a deficiency in the communication cost overhead. We here present an algorithm that overcomes all of these shortcomings by involving two different nested hash chains: one dedicated to seed updating and the other used for OTP production. Our algorithm provides forward and non-restricted OTP generation. We propose a random challenge-response operation mode. We analyze our proposal from the viewpoint of security and performance compared with the other algorithms.
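
    A sketch of the two-chain idea follows (not the full protocol with its challenge-response mode): one chain evolves the seed forward while the other derives each OTP from the current seed, so OTP generation proceeds forward and is unbounded in length.

```python
import hashlib

def H(tag: bytes, x: bytes) -> bytes:
    return hashlib.sha256(tag + x).digest()

class NestedChainOTP:
    """Two nested chains: one chain updates the seed, the other derives
    each one-time password from the current seed, so generation never
    hits a fixed chain-length limit."""
    def __init__(self, seed: bytes):
        self.seed = seed

    def next_otp(self) -> str:
        otp = H(b"otp", self.seed)            # OTP-production chain
        self.seed = H(b"seed", self.seed)     # seed-updating chain
        return otp.hex()[:12]

gen = NestedChainOTP(b"initial shared secret")
print(gen.next_otp(), gen.next_otp())         # forward, unlimited OTPs
```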

  1. Parallel algorithm for target recognition using a multiclass hash database

    Science.gov (United States)

    Uddin, Mosleh; Myler, Harley R.

    1998-07-01

    A method for recognition of unknown targets using large databases of model targets is discussed. Our approach is based on parallel processing of multi-class hash databases that are generated off-line. A geometric hashing technique is used on feature points of model targets to create each class database. Bit-level coding is then performed to represent the models in an image format. Parallelism is achieved during the recognition phase. Feature points of an unknown target are passed to parallel processors, each accessing an individual class database. Each processor reads a particular class hash database and indexes the feature points of the unknown target. A simple voting technique is applied to determine the model that best matches the unknown. The paper discusses our technique and the results from testing with unknown FLIR targets.
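
    Below is a highly simplified stand-in for the offline indexing and online voting phases, using quantized pairwise distances in place of full geometric-hashing invariants; the models and feature points are invented.

```python
from collections import Counter

def _invariants(pts, quantum=2):
    """Quantized pairwise distances, a crude stand-in for the
    basis-relative invariants of real geometric hashing."""
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = ((pts[i][0] - pts[j][0])**2 + (pts[i][1] - pts[j][1])**2) ** 0.5
            yield round(d / quantum)

def build_class_db(models):
    """Offline phase: index every model under each of its invariants."""
    db = {}
    for name, pts in models.items():
        for inv in _invariants(pts):
            db.setdefault(inv, []).append(name)
    return db

def recognize(db, pts):
    """Online phase: index the unknown's invariants and vote."""
    votes = Counter()
    for inv in _invariants(pts):
        votes.update(db.get(inv, []))
    return votes.most_common(1)

models = {"tank": [(0, 0), (4, 0), (4, 2)], "truck": [(0, 0), (8, 0), (8, 3)]}
db = build_class_db(models)
print(recognize(db, [(1, 1), (5, 1), (5, 3)]))   # translated "tank" features
```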

  2. Perceptual image hashing based on virtual watermark detection.

    Science.gov (United States)

    Khelifi, Fouad; Jiang, Jianmin

    2010-04-01

    This paper proposes a new robust and secure perceptual image hashing technique based on virtual watermark detection. The idea is justified by the fact that the watermark detector responds similarly to perceptually close images using a non-embedded watermark. The hash values are extracted in binary form with perfect control over the probability distribution of the hash bits. Moreover, a key is used to generate pseudo-random noise whose real values contribute to the randomness of the feature vector, with a significantly increased uncertainty of the adversary, measured by mutual information, in comparison with linear correlation. Experimentally, the proposed technique has been shown to outperform related state-of-the-art techniques recently proposed in the literature in terms of robustness with respect to image processing manipulations and geometric attacks.

  3. Multiple hashes of single key with passcode for multiple accounts

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A human's e-life needs multiple offline and online accounts. Setting keys or passwords for these multiple accounts is a balance between usability and security. Password reuse has to be avoided due to the domino effect of malicious administrators and crackers. However, human memorability constrains the number of keys. Single sign-on servers, key hashing, key strengthening, and petname systems are used in the prior art to use only one key for multiple online accounts. The unique site keys are derived from a common master secret and the specific domain name. These methods cannot be applied to offline accounts such as file encryption. We present a new method and system applicable to both offline and online accounts. It does not depend on an HTTP server or domain name, but on a numeric 4-digit passcode, key hashing, key strengthening, and hash truncation. The domain name is only needed to resist spoofing and phishing attacks on online accounts.
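
    A sketch of deriving per-account keys from one master secret plus a short passcode, using the named ingredients (key hashing, key strengthening, hash truncation); PBKDF2 and the iteration count are illustrative substitutions, not the paper's exact construction.

```python
import hashlib

def account_key(master_secret: str, passcode: str, account_id: str) -> str:
    """Derive a unique key per account from one memorized master secret,
    a short numeric passcode, and the account label."""
    dk = hashlib.pbkdf2_hmac(
        "sha256",
        (master_secret + passcode).encode(),  # key hashing + passcode
        account_id.encode(),                  # per-account salt, not a domain
        iterations=100_000)                   # key strengthening
    return dk.hex()[:20]                      # hash truncation -> site key

print(account_key("correct horse battery", "1234", "bank"))
print(account_key("correct horse battery", "1234", "backup.tar.gpg"))
```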

  4. Hash function construction using weighted complex dynamical networks

    Institute of Scientific and Technical Information of China (English)

    Song Yu-Rong; Jiang Guo-Ping

    2013-01-01

    A novel scheme to construct a hash function based on a weighted complex dynamical network (WCDN) generated from an original message is proposed in this paper. First, the original message is divided into blocks. Then, each block is divided into components, and the nodes and weighted edges are well defined from these components and their relations. Namely, the WCDN closely related to the original message is established. Furthermore, the node dynamics of the WCDN are chosen as a chaotic map. After chaotic iterations, quantization and exclusive-or operations, the fixed-length hash value is obtained. This scheme has the property that any tiny change in the message can be diffused rapidly through the WCDN, leading to very different hash values. Analysis and simulation show that the scheme possesses good statistical properties, excellent confusion and diffusion, strong collision resistance and high efficiency.

  5. Almost Universal Hash Families are also Storage Enforcing

    CERN Document Server

    Husain, Mohammad Iftekhar; Rudra, Atri; Uurtamo, Steve

    2012-01-01

    We show that every almost universal hash function also has the storage enforcement property. Almost universal hash functions have found numerous applications and we show that this new storage enforcement property allows the application of almost universal hash functions in a wide range of remote verification tasks: (i) Proof of Secure Erasure (where we want to remotely erase and securely update the code of a compromised machine with memory-bounded adversary), (ii) Proof of Ownership (where a storage server wants to check if a client has the data it claims to have before giving access to deduplicated data) and (iii) Data possession (where the client wants to verify whether the remote storage server is storing its data). Specifically, storage enforcement guarantee in the classical data possession problem removes any practical incentive for the storage server to cheat the client by saving on storage space. The proof of our result relies on a natural combination of Kolmogorov Complexity and List Decoding. To the ...

  6. Message Encryption Using Deceptive Text and Randomized Hashing

    Directory of Open Access Journals (Sweden)

    VAMSIKRISHNA YENIKAPATI,

    2011-02-01

    Full Text Available In this paper, a new approach for message encryption using a concept called deceptive text is proposed. In this scheme we do not need to send encrypted plaintext to the receiver; instead, we send a meaningful deceptive text and an encrypted special index file to the message receiver. The original message is embedded in the meaningful deceptive text. The positions of the characters of the plaintext in the deceptive text are stored in the index file. The receiver decrypts the index file and recovers the original message from the received deceptive text. Authentication is achieved by verifying the hash value of the plaintext created by the message digest algorithm at the receiver side. In order to prevent collision attacks on hashing algorithms that are intended for use with standard digital signature algorithms, we provide an extra layer of security using a randomized hashing method.

  7. A perceptual hashing method based on luminance features

    Science.gov (United States)

    Luo, Siqing

    2011-02-01

    With the rapid development of multimedia technology, content-based search and image authentication have become strong requirements. Image hashing techniques have been proposed to meet them. In this paper, an RST (Rotation, Scaling, and Translation) resistant image hash algorithm is presented. In this method, the geometric distortions are extracted and adjusted by normalization. The features of the image are generated from the high-rank moments of the luminance distribution. With the help of the efficient image representation capability of high-rank moments, the robustness and discrimination of the proposed method are improved. The experimental results show that the proposed method is better than some existing methods in robustness under rotation attacks.

  8. Collision-free Multiple Unmanned Combat Aerial Vehicles Cooperative Trajectory Planning for Time-critical Missions using Differential Flatness Approach

    Directory of Open Access Journals (Sweden)

    Xueqiang Gu

    2014-01-01

    Full Text Available This paper investigates cooperative trajectory planning for multiple unmanned combat aerial vehicles performing autonomous cooperative air-to-ground target attack missions. First, the collision-free cooperative trajectory planning problem for time-critical missions is formulated as a cooperative trajectory optimal control problem (CTP-OCP), which is based on an approximate allowable attack region model, several constraint models, and a multi-criteria objective function. Next, a planning algorithm based on differential flatness, B-spline curves, and nonlinear programming is designed to solve the CTP-OCP. In particular, the notion of virtual time is introduced to deal with the temporal constraints. Finally, the proposed approach is validated in two typical scenarios, and the simulation results show the feasibility and effectiveness of the proposed planning approach. Defence Science Journal, Vol. 64, No. 1, January 2014, DOI:10.14429/dsj.64.2999

  9. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to hash-then-sign digital signature schemes such as DSS and RSA, in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge an RMX-hash-then-sign signature scheme, one has to solve a cryptanalytical task which is related to finding second preimages for the hash function. In this article, we will show how to use Dean's method of finding expandable messages for finding a second preimage in the Merkle-Damgård hash function to existentially forge...
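
    The gist of message randomization can be sketched as salt-then-hash, where a fresh random value forces the attacker to find collisions for an input it cannot predict; RMX itself mixes the randomness into the message blocks, so this is a simplification.

```python
import hashlib, os

def randomized_digest(message: bytes):
    """Salt-then-hash sketch of the randomization idea: the signer picks
    a fresh random r per signature and hashes r || m, so an attacker must
    attack a hash input it could not predict in advance."""
    r = os.urandom(16)
    return r, hashlib.sha256(r + message).digest()

r, d = randomized_digest(b"contract text")
# The signature is computed over d, and r travels with the signature so
# the verifier can recompute the digest.
print(r.hex(), d.hex())
```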

  10. The FPGA realization of the general cellular automata based cryptographic hash functions: Performance and effectiveness

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2014-01-01

    Full Text Available In this paper the author considers hardware implementation of the GRACE-H family of general cellular automata based cryptographic hash functions. VHDL is used as the language and an Altera FPGA as the platform for hardware implementation. Performance and effectiveness of the FPGA implementations of the GRACE-H hash functions were compared with the Keccak (SHA-3), SHA-256, BLAKE, Groestl, JH, and Skein hash functions. According to the performed tests, the performance of the hardware implementation of the GRACE-H family hash functions significantly (up to 12 times) exceeded the performance of the hardware implementations of the previously known hash functions, and the effectiveness of that hardware implementation was also better (up to 4 times).

  11. Block-based image hashing with restricted blocking strategy for rotational robustness

    Science.gov (United States)

    Xiang, Shijun; Yang, Jianquan

    2012-12-01

    Image hashing is a potential solution for image content authentication (a desired image hashing algorithm should be robust to common image processing operations and various geometric distortions). In the literature, researchers pay more attention to block-based image hashing algorithms due to their robustness to common image processing operations (such as lossy compression, low-pass filtering, and additive noise). However, the block-based hashing strategies are sensitive to rotation processing operations. This indicates that the robustness of the block-based hashing methods against rotation operations is an important issue. Towards this direction, in this article we propose a restricted blocking strategy by investigating effect of two rotation operations on an image and its blocks in both theoretical and experimental ways. Furthermore, we apply the proposed blocking strategy for the recently reported non-negative matrix factorization (NMF) hashing. Experimental results have demonstrated the validity of the block-based hashing algorithms with restricted blocking strategy for rotation operations.

  12. Fast and accurate hashing via iterative nearest neighbors expansion.

    Science.gov (United States)

    Jin, Zhongming; Zhang, Debing; Hu, Yao; Lin, Shiding; Cai, Deng; He, Xiaofei

    2014-11-01

    Recently, hashing techniques have been widely applied to the approximate nearest neighbor search problem in many real applications. The basic idea of these approaches is to generate binary codes for data points that preserve the similarity between any two of them. Given a query, instead of performing a linear scan of the entire database, the hashing method can perform a linear scan of the points whose Hamming distance to the query is not greater than r_h, where r_h is a constant. However, in order to find the true nearest neighbors, both the locating time and the linear scan time are proportional to O(∑_{i=0}^{r_h} C(c, i)) (where c is the code length and C(c, i) is the binomial coefficient), which increases exponentially as r_h increases. To address this limitation, we propose a novel algorithm named iterative expanding hashing in this paper, which builds an auxiliary index based on an offline-constructed nearest neighbor table to avoid large r_h. This auxiliary index can easily be combined with all the traditional hashing methods. Extensive experimental results over various real large-scale datasets demonstrate the superiority of the proposed approach.

  13. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    Dijck, van Harry; Heijden, van der Ferdinand

    2003-01-01

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the p

  14. Interframe hierarchical vector quantization using hashing-based reorganized codebook

    Science.gov (United States)

    Choo, Chang Y.; Cheng, Che H.; Nasrabadi, Nasser M.

    1995-12-01

    Real-time multimedia communication over PSTN (Public Switched Telephone Network) or wireless channel requires video signals to be encoded at the bit rate well below 64 kbits/second. Most of the current works on such very low bit rate video coding are based on H.261 or H.263 scheme. The H.263 encoding scheme, for example, consists mainly of motion estimation and compensation, discrete cosine transform, and run and variable/fixed length coding. Vector quantization (VQ) is an efficient and alternative scheme for coding at very low bit rate. One such VQ code applied to video coding is interframe hierarchical vector quantization (IHVQ). One problem of IHVQ, and VQ in general, is the computational complexity due to codebook search. A number of techniques have been proposed to reduce the search time which include tree-structured VQ, finite-state VQ, cache VQ, and hashing based codebook reorganization. In this paper, we present an IHVQ code with a hashing based scheme to reorganize the codebook so that codebook search time, and thus encoding time, can be significantly reduced. We applied the algorithm to the same test environment as in H.263 and evaluated coding performance. It turned out that the performance of the proposed scheme is significantly better than that of IHVQ without hashed codebook. Also, the performance of the proposed scheme was comparable to and often better than that of the H.263, due mainly to hashing based reorganized codebook.

  15. Hash function based on the generalized Henon map

    Institute of Scientific and Technical Information of China (English)

    Zheng Fan; Tian Xiao-Jian; Li Xue-Yan; Wu Bin

    2008-01-01

    A new hash function based on the generalized Henon map is proposed. We have obtained a binary sequence with excellent pseudo-random characteristics by improving the sequence generated by the generalized Henon map, and we use it to construct the hash function. First we divide the message into groups, and then carry out an XOR operation between the ASCII value of each group and the binary sequence; the result is used as the initial values of the next loop. The procedure is repeated until all the groups have been processed, and the final binary sequence is the hash value. In the scheme, the initial values of the generalized Henon map are used as the secret key, and the messages are mapped to hash values of a designated length. Simulation results show that the proposed scheme has strong diffusion and confusion capability, good collision resistance, a large key space, and extreme sensitivity to the message and secret key, and it is easy to realize and extend.

  16. Object recognition with stereo vision and geometric hashing

    NARCIS (Netherlands)

    van Dijck, H.A.L.; van der Heijden, Ferdinand

    In this paper we demonstrate a method to recognize 3D objects and to estimate their pose. For that purpose we use a combination of stereo vision and geometric hashing. Stereo vision is used to generate a large number of 3D low level features, of which many are spurious because at that stage of the

  17. On the Cell Probe Complexity of Membership and Perfect Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2001-01-01

    We study two fundamental static data structure problems, membership and perfect hashing, in Yao's cell probe model. The first space and bit probe optimal worst case upper bound is given for the membership problem. We also give a new efficient membership scheme where the query algorithm makes just...

  18. Practical Attacks on AES-like Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Kölbl, Stefan; Rechberger, Christian

    2015-01-01

    Despite the great interest in rebound attacks on AES-like hash functions since 2009, we report on a rather generic, albeit keyschedule-dependent, algorithmic improvement: A new message modification technique to extend the inbound phase, which even for large internal states makes it possible to dr...

  19. The LabelHash algorithm for substructure matching

    Directory of Open Access Journals (Sweden)

    Bryant Drew H

    2010-11-01

    Full Text Available Background: There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Results: We present LabelHash, a novel algorithm for matching substructural motifs to large collections of protein structures. The algorithm consists of two phases. In the first phase the proteins are preprocessed in a fashion that allows for instant lookup of partial matches to any motif. In the second phase, partial matches for a given motif are expanded to complete matches. The general applicability of the algorithm is demonstrated with three different case studies. First, we show that we can accurately identify members of the enolase superfamily with a single motif. Next, we demonstrate how LabelHash can complement SOIPPA, an algorithm for motif identification and pairwise substructure alignment. Finally, a large collection of Catalytic Site Atlas motifs is used to benchmark the performance of the algorithm. LabelHash runs very efficiently in parallel; matching a motif against all proteins in the 95% sequence identity filtered non-redundant Protein Data Bank typically takes no more than a few minutes. The LabelHash algorithm is available through a web server and as a suite of standalone programs at http://labelhash.kavrakilab.org. The output of the LabelHash algorithm can be further analyzed with Chimera through a plugin that we developed for this purpose. Conclusions: LabelHash is an efficient, versatile algorithm for large-scale substructure matching. When LabelHash is running in parallel, motifs can typically be matched against the entire PDB on the order of minutes. The algorithm is able to identify

  20. Predicting casualties implied by TIPs

    Science.gov (United States)

    Trendafiloski, G.; Wyss, M.; Wyss, B. M.

    2009-12-01

    When an earthquake is predicted, forecast, or expected with a higher than normal probability, losses are implied. We estimated the casualties (fatalities plus injured) that should be expected if earthquakes in TIPs (locations of Temporarily Increased Probability of earthquakes) defined by Kossobokov et al. (2009) should occur. We classified the predictions of losses into the categories red (more than 400 fatalities or more than 1,000 injured), yellow (between 100 and 400 fatalities), green (fewer than 100 fatalities), and gray (undetermined). TIPs in Central Chile, the Philippines, Papua, and Taiwan are in the red class; TIPs in Southern Sumatra, Nicaragua, Vanuatu, and Honshu are in the yellow class; and TIPs in Tonga, the Loyalty Islands, Vanuatu, the S. Sandwich Islands, the Banda Sea, and the Kuriles are classified as green. TIPs where the losses depend moderately on the assumed point of major energy release were classified as yellow; TIPs such as in the Talaud Islands and in Tonga, where the losses depend very strongly on the location of the epicenter, were classified as gray. The accuracy of loss estimates after earthquakes with known hypocenter and magnitude is affected by uncertainties in transmission and soil properties, the composition of the building stock, the population present, and the method by which the numbers of casualties are calculated. In the case of TIPs, uncertainties in magnitude and location are added, thus we calculate losses for a range of these two parameters. Therefore, our calculations can only be considered order-of-magnitude estimates. Nevertheless, our predictions can come to within a factor of two of the observed numbers, as in the case of the M7.6 earthquake of October 2005 in Pakistan that resulted in 85,000 fatalities (Wyss, 2005). In subduction zones, the geometrical relationship between the earthquake source capable of a great earthquake and the population is clear because there is only one major fault plane available, thus the epicentral

  1. A Robust Hash Function Using Cross-Coupled Chaotic Maps with Absolute-Valued Sinusoidal Nonlinearity

    Directory of Open Access Journals (Sweden)

    Wimol San-Um

    2016-01-01

    Full Text Available This paper presents a compact and effective chaos-based keyed hash function implemented by a cross-coupled topology of chaotic maps, which employs the absolute value of a sinusoidal nonlinearity and offers robust chaotic regions over broad parameter spaces with a high degree of randomness, as verified through chaoticity measurements using the Lyapunov exponent. Hash function operation involves an initial stage, when the chaotic map accepts the initial conditions, and a hashing stage, which accepts input messages and generates alterable-length hash values. Hashing performance is evaluated in terms of original message condition changes, statistical analyses, and collision analyses. The results show that the mean changed probabilities are very close to 50%, and the mean number of bit changes is also close to half the hash value length. The collision tests reveal that the mean absolute difference of character values for hash values of 128, 160, and 256 bits is close to the ideal value of 85.43. The proposed keyed hash function enhances collision resistance compared with MD5 and SHA-1, as well as other, more complicated chaos-based approaches. An Android application implementing the hash function is demonstrated.

  2. Study of the similarity function in Indexing-First-One hashing

    Science.gov (United States)

    Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.

    2017-06-01

    The recently proposed Indexing-First-One (IFO) hashing is a technique adopted in particular for iris template protection, i.e. IrisCode. However, the Jaccard similarity (JS) measure that IFO employs, which originates from Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and further propose a mathematical formulation to generalize the usage of JS, which is subsequently verified using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version for measuring two input objects with respect to Min-hashing, where the coefficient of JS is equal to one. With this understanding, IFO hashing can inherit the useful properties of Min-hashing, i.e. similarity preservation, making it favorable for similarity search and recognition in binary space.
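
    The JS/Min-hashing relationship discussed above is easy to check numerically. The sketch below is a minimal illustration assuming the standard random-permutation formulation of MinHash over small integer sets; it is not the IFO construction itself.

        import random

        def jaccard(a: set, b: set) -> float:
            """Exact Jaccard similarity |A intersect B| / |A union B|."""
            return len(a & b) / len(a | b)

        def minhash_estimate(a: set, b: set, n_hashes: int = 200, seed: int = 1) -> float:
            """Estimate Jaccard similarity: Pr[min h(A) == min h(B)] = J(A, B)."""
            rng = random.Random(seed)
            universe = list(a | b)
            matches = 0
            for _ in range(n_hashes):
                perm = {x: rng.random() for x in universe}   # random "permutation"
                if min(perm[x] for x in a) == min(perm[x] for x in b):
                    matches += 1
            return matches / n_hashes

        A, B = {1, 2, 3, 4}, {2, 3, 4, 5, 6}
        # Exact value is 3/6 = 0.5; the estimate converges to it as n_hashes grows.
        print(jaccard(A, B), minhash_estimate(A, B))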

  3. One-way hash function construction based on the spatiotemporal chaotic system

    Institute of Scientific and Technical Information of China (English)

    Luo Yu-Ling; Du Ming-Hui

    2012-01-01

    Based on a spatiotemporal chaotic system, a novel algorithm for constructing a one-way hash function is proposed and analysed. The message is divided into fixed-length blocks. Each message block is processed by the hash compression function in parallel. The hash compression function is constructed from the spatiotemporal chaos. In each message block, the ASCII code and its position in the whole message block chain constitute the initial conditions and the key of the hash compression function. The final hash value is generated by further compressing the mixed result of all the hash compression values. Theoretical analyses and numerical simulations show that the proposed algorithm presents high sensitivity to the message and key, good statistical properties, and strong collision resistance.

  4. Spectral Multimodal Hashing and Its Application to Multimedia Retrieval.

    Science.gov (United States)

    Zhen, Yi; Gao, Yue; Yeung, Dit-Yan; Zha, Hongyuan; Li, Xuelong

    2016-01-01

    In recent years, multimedia retrieval has sparked much research interest in the multimedia, pattern recognition, and data mining communities. Although some attempts have been made along this direction, performing fast multimodal search at very large scale still remains a major challenge in the area. While hashing-based methods have recently achieved promising successes in speeding up large-scale similarity search, most existing methods are only designed for uni-modal data, making them unsuitable for multimodal multimedia retrieval. In this paper, we propose a new hashing-based method for fast multimodal multimedia retrieval. The method is based on spectral analysis of the correlation matrix of different modalities. We also develop an efficient algorithm that learns parameters from the data distribution for obtaining the binary codes. We empirically compare our method with several state-of-the-art methods on two real-world multimedia data sets.
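
    The spectral step can be illustrated with a generic sketch: threshold projections onto the top eigenvectors of a feature correlation matrix to obtain binary codes. This is a spectral-hashing-style baseline under simplifying assumptions (a single modality, plain eigendecomposition), not the authors' multimodal algorithm.

        import numpy as np

        def spectral_binary_codes(X: np.ndarray, n_bits: int = 16) -> np.ndarray:
            """Generic spectral baseline: project centred data onto the top
            eigenvectors of the correlation matrix and take signs."""
            Xc = X - X.mean(axis=0)
            C = Xc.T @ Xc / len(Xc)                 # feature correlation matrix
            vals, vecs = np.linalg.eigh(C)          # ascending eigenvalues
            W = vecs[:, -n_bits:]                   # top n_bits eigenvectors
            return (Xc @ W > 0).astype(np.uint8)    # sign -> {0, 1} hash bits

        rng = np.random.default_rng(0)
        codes = spectral_binary_codes(rng.normal(size=(100, 64)), n_bits=16)
        print(codes.shape)  # (100, 16)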

  5. Perceptual image hashing via feature points: performance evaluation and tradeoffs.

    Science.gov (United States)

    Monga, Vishal; Evans, Brian L

    2006-11-01

    We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.

  6. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.

  7. A Robust Image Hashing Algorithm Resistant Against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2013-12-01

    Full Text Available This paper proposes an image hashing method that is robust against common image processing attacks and geometric distortion attacks. In order to resist geometric attacks, the log-polar mapping (LPM) and contourlet transform are employed to obtain the low-frequency sub-band image. Then the sub-band image is divided into non-overlapping blocks, and low and middle frequency coefficients are selected from each block after the discrete cosine transform. The singular value decomposition (SVD) is applied to each block to obtain the first digit of the maximum singular value. Finally, the features are scrambled and quantized as the secure hash bits. Experimental results show that the algorithm is not only resistant to common image processing attacks and geometric distortion attacks, but also discriminative to content changes.

  8. A Novel Digital Signature Algorithm based on Biometric Hash

    Directory of Open Access Journals (Sweden)

    Shivangi Saxena

    2017-01-01

    Full Text Available A digital signature protects a document's integrity and binds the authenticity of the user who signed it. Present digital signature algorithms confirm authenticity but do not ensure secrecy of the data; techniques like encryption and decryption need to be used for this purpose. Biometric security has been a useful means of authentication and security, as it provides a unique identity for the user. In this paper we discuss the user authentication process and the development of digital signatures. Authentication is based on hash functions that use biometric features. Hash codes are used to maintain the integrity of the document, which is digitally signed. For security purposes, encryption and decryption techniques are used to develop a bio-cryptosystem. User information, when concatenated with the feature vector of the biometric data, justifies the sense of authentication. Various online or offline transactions where authenticity and integrity are the top priority can make use of this development.

  9. Modeling Conservative Updates in Multi-Hash Approximate Count Sketches

    OpenAIRE

    2012-01-01

    Multi-hash-based count sketches are fast and memory efficient probabilistic data structures that are widely used in scalable online traffic monitoring applications. Their accuracy significantly improves with an optimization, called conservative update, which is especially effective when the aim is to discriminate a relatively small number of heavy hitters in a traffic stream consisting of an extremely large number of flows. Despite its widespread application, a thorough u...
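
    Conservative update is simple to state in code. The following is a minimal count-min sketch with the optimization, an illustrative implementation rather than the paper's exact model; Python's built-in hash() is used for brevity even though it is salted per process.

        import random

        class ConservativeCountMin:
            """Count-min sketch with the conservative-update optimization:
            on insert, only the counters equal to the current minimum grow."""
            def __init__(self, width: int = 1024, depth: int = 4, seed: int = 0):
                rng = random.Random(seed)
                self.width = width
                self.seeds = [rng.getrandbits(64) for _ in range(depth)]
                self.table = [[0] * width for _ in range(depth)]

            def _cells(self, key):
                return [(i, hash((s, key)) % self.width) for i, s in enumerate(self.seeds)]

            def add(self, key, count: int = 1):
                cells = self._cells(key)
                new_min = min(self.table[i][j] for i, j in cells) + count
                for i, j in cells:
                    # Conservative update: never raise a counter above the
                    # smallest value that still upper-bounds this key's count.
                    self.table[i][j] = max(self.table[i][j], new_min)

            def query(self, key) -> int:
                return min(self.table[i][j] for i, j in self._cells(key))

        cms = ConservativeCountMin()
        for _ in range(5):
            cms.add("flow-42")
        print(cms.query("flow-42"))  # >= 5, usually exactly 5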

  10. Simultaneous binary hash and features learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have plenty of applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval still remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The presented framework for data-dependent image hashing is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison to other state-of-the-art methods.

  11. One-way hash function based on hyper-chaotic cellular neural network

    Institute of Scientific and Technical Information of China (English)

    Yang Qun-Ting; Gao Tie-Gang

    2008-01-01

    The design of an efficient one-way hash function with good performance is a hot topic in modern cryptography research. In this paper, a hash function construction method based on a cellular neural network with hyper-chaotic characteristics is proposed. First, a chaotic sequence is obtained by iterating the cellular neural network with the Runge-Kutta algorithm, and then the chaotic sequence is iterated with the message. The hash code is obtained through the corresponding transform of the latter chaotic sequence. Simulation and analysis demonstrate that the new method has the merits of convenience, high sensitivity to initial values, good hash performance, and especially strong stability.

  12. The suffix-free-prefix-free hash function construction and its indifferentiability security analysis

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Knudsen, Lars R.

    2012-01-01

    In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) strengthening in the padding functionality of the hash functions. We propose a generic n-bit-iterated hash function framework based on an n-bit compression function called suffix-free-prefix-free (SFPF) that works for arbitrary IVs and does not possess MD strengthening. We formally prove that SFPF...

  13. Locality-Sensitive Hashing for Chi2 distance.

    Science.gov (United States)

    Gorisse, David; Cord, Matthieu; Precioso, Frederic

    2012-02-01

    In the past 10 years, new powerful algorithms based on efficient data structures have been proposed to solve the problem of nearest neighbors search (or approximate nearest neighbors search). While the Euclidean Locality Sensitive Hashing algorithm, which provides approximate nearest neighbors in a Euclidean space with sublinear complexity, is probably the most popular, the Euclidean metric does not always provide results as accurate and relevant as similarity measures such as the Earth Mover's Distance and the χ² distance. In this paper, we present a new LSH scheme adapted to the χ² distance for approximate nearest neighbors search in high-dimensional spaces. We define the specific hashing functions, prove their locality-sensitivity, and compare, through experiments, our method with the Euclidean Locality Sensitive Hashing algorithm in the context of image retrieval on real image databases. The results prove the relevance of such a new LSH scheme, either providing far better accuracy than the Euclidean scheme for an equivalent speed, or providing equivalent accuracy with a large gain in processing speed.
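
    For context, the Euclidean LSH baseline the paper compares against can be sketched with the standard p-stable construction h(v) = floor((a·v + b)/w); the parameters below are illustrative.

        import numpy as np

        class EuclideanLSH:
            """Standard p-stable LSH for L2 distance: h(v) = floor((a.v + b) / w).
            Nearby points collide with higher probability than distant ones."""
            def __init__(self, dim: int, n_hashes: int = 8, w: float = 4.0, seed: int = 0):
                rng = np.random.default_rng(seed)
                self.a = rng.normal(size=(n_hashes, dim))   # Gaussian => 2-stable
                self.b = rng.uniform(0, w, size=n_hashes)
                self.w = w

            def hash(self, v: np.ndarray) -> tuple:
                return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

        lsh = EuclideanLSH(dim=32)
        rng = np.random.default_rng(1)
        x = rng.normal(size=32)
        print(lsh.hash(x) == lsh.hash(x + 0.01))  # tiny perturbation: likely same bucket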

  14. Hash Based Least Significant Bit Technique For Video Steganography

    Directory of Open Access Journals (Sweden)

    Prof. Dr. P. R. Deshmukh

    2014-01-01

    Full Text Available The hash-based least significant bit technique for video steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it includes processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication, or the science of invisible communication. In this paper a hash-based least significant bit (LSB) technique for video steganography is proposed, whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. LSB insertion is used for the steganography, embedding data in the cover video by changing only the lowest bit, so the insertion is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method involves two metrics, the peak signal-to-noise ratio (PSNR) and the mean square error (MSE), measured between the original and steganographic video files over all video frames, where distortion is measured using PSNR. A hash function is used to select the positions for inserting the bits of the secret message into the LSB bits.
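
    A minimal sketch of the central idea, a keyed hash selecting the LSB positions, might look as follows; a flat byte buffer stands in for a video frame, and the helper names are ours, not the paper's.

        import hashlib

        def embed_lsb(frame: bytearray, message: bytes, key: bytes) -> None:
            """Hide message bits in the LSBs of frame bytes; the position of each
            bit is chosen by a keyed hash, so an extractor with the same key can
            regenerate the identical position sequence."""
            used = set()
            counter = 0
            for byte in message:
                for k in range(8):
                    bit = (byte >> (7 - k)) & 1
                    # The keyed hash picks a pseudo-random, not-yet-used position.
                    while True:
                        h = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
                        pos = int.from_bytes(h[:4], "big") % len(frame)
                        counter += 1
                        if pos not in used:
                            used.add(pos)
                            break
                    frame[pos] = (frame[pos] & 0xFE) | bit  # overwrite the LSB

        frame = bytearray(range(256)) * 64          # stand-in for one video frame
        embed_lsb(frame, b"secret", key=b"stego-key")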

  15. An enhanced dynamic hash TRIE algorithm for lexicon search

    Science.gov (United States)

    Yang, Lai; Xu, Lida; Shi, Zhongzhi

    2012-11-01

    Information retrieval (IR) is essential to enterprise systems along with growing orders, customers and materials. In this article, an enhanced dynamic hash TRIE (eDH-TRIE) algorithm is proposed that can be used in a lexicon search in Chinese, Japanese and Korean (CJK) segmentation and in URL identification. In particular, the eDH-TRIE algorithm is suitable for Unicode retrieval. The Auto-Array algorithm and Hash-Array algorithm are proposed to handle the auxiliary memory allocation; the former changes its size on demand without redundant restructuring, and the latter replaces linked lists with arrays, saving the overhead of memory. Comparative experiments show that the Auto-Array algorithm and Hash-Array algorithm have better spatial performance; they can be used in a multitude of situations. The eDH-TRIE is evaluated for both speed and storage and compared with the naïve DH-TRIE algorithms. The experiments show that the eDH-TRIE algorithm performs better. These algorithms reduce memory overheads and speed up IR.

  16. Hetero-manifold Regularisation for Cross-modal Hashing.

    Science.gov (United States)

    Zheng, Feng; Tang, Yi; Shao, Ling

    2016-12-28

    Recently, cross-modal search has attracted considerable attention but remains a very challenging task because of the integration complexity and heterogeneity of multi-modal data. To address both challenges, in this paper we propose a novel method termed hetero-manifold regularisation (HMR) to supervise the learning of hash functions for efficient cross-modal search. A hetero-manifold integrates multiple sub-manifolds defined by homogeneous data with the help of cross-modal supervision information. Taking advantage of the hetero-manifold, the similarity between each pair of heterogeneous data can be naturally measured by three-order random walks on this hetero-manifold. Furthermore, a novel cumulative distance inequality defined on the hetero-manifold is introduced to avoid the computational difficulty induced by the discreteness of hash codes. By using the inequality, cross-modal hashing is transformed into a problem of hetero-manifold regularised support vector learning. Therefore, the performance of cross-modal search can be significantly improved by seamlessly combining the integrated information of the hetero-manifold and the strong generalisation of the support vector machine. Comprehensive experiments show that the proposed HMR achieves advantageous results over the state-of-the-art methods in several challenging cross-modal tasks.

  17. Mining histopathological images via composite hashing and online learning.

    Science.gov (United States)

    Zhang, Xiaofan; Yang, Lin; Liu, Wei; Su, Hai; Zhang, Shaoting

    2014-01-01

    With a continuously growing amount of annotated histopathological images, large-scale and data-driven methods potentially provide the promise of bridging the semantic gap between these images and their diagnoses. The purpose of this paper is to increase the scale at which automated systems can perform analysis of histopathological images in massive databases. Specifically, we propose a principled framework to unify hashing-based image retrieval and supervised learning. Concretely, composite hashing is designed to simultaneously fuse and compress multiple high-dimensional image features into tens of binary hash bits, enabling scalable image retrieval with a very low computational cost. Upon a local data subset that retains the retrieved images, supervised learning methods are applied on-the-fly to model image structures for accurate classification. Our framework is validated thoroughly on 1120 lung microscopic tissue images by differentiating adenocarcinoma and squamous carcinoma. The average accuracy is 87.5% with only 17 ms running time, which compares favorably with other commonly used methods.

  18. Secure Minutiae-Based Fingerprint Templates Using Random Triangle Hashing

    Science.gov (United States)

    Jin, Zhe; Jin Teoh, Andrew Beng; Ong, Thian Song; Tee, Connie

    Due to privacy concerns about the widespread use of biometric authentication systems, biometric template protection has recently gained great attention in biometric research. It is a challenging task to design a biometric template protection scheme which is anonymous, revocable and noninvertible while maintaining acceptable performance. Many methods have been proposed to resolve this problem, and cancelable biometrics is one of them. In this paper, we propose a scheme coined Random Triangle Hashing which follows the concept of cancelable biometrics in the fingerprint domain. In this method, re-alignment of fingerprints is not required, as all the minutiae are translated into a pre-defined two-dimensional space based on a reference minutia. After that, the proposed random triangle hashing method is used to enforce the one-way property (non-invertibility) of the biometric template. The proposed method is resistant to minor translation error and rotation distortion. Finally, the hash vectors are converted into bit-strings to be stored in the database. The proposed method is evaluated using the public database FVC2004 DB1. An EER of less than 1% is achieved by using the proposed method.

  19. Forecasting with Option-Implied Information

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Chang, Bo Young

    2013-01-01

    This chapter surveys the methods available for extracting information from option prices that can be used in forecasting. We consider option-implied volatilities, skewness, kurtosis, and densities. More generally, we discuss how any forecasting object that is a twice differentiable function of the future realization of the underlying risky asset price can utilize option-implied information in a well-defined manner. Going beyond the univariate option-implied density, we also consider results on option-implied covariance, correlation and beta forecasting, as well as the use of option-implied information in cross-sectional forecasting of equity returns. We discuss how option-implied information can be adjusted for risk premia to remove biases in forecasting regressions.

  20. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX...

  1. Cryptanalysis of the 10-Round Hash and Full Compression Function of SHAvite-3-512

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Leurent, Gaëtan; Mendel, Florian;

    2010-01-01

    In this paper, we analyze the SHAvite-3-512 hash function, as proposed for round 2 of the SHA-3 competition. We present cryptanalytic results on 10 out of 14 rounds of the hash function SHAvite-3-512, and on the full 14-round compression function of SHAvite-3-512. We show a second preimage attack on ...

  2. Linear-XOR and Additive Checksums Don't Protect Damgard-Merkle Hashes

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John

    2008-01-01

    We consider the security of Damgård-Merkle variants which compute linear-XOR or additive checksums over message blocks, intermediate hash values, or both, and process these checksums in computing the final hash value. We show that these Damgård-Merkle variants gain almost no security agai...

  3. Cryptanalysis of Lin et al.'s Efficient Block-Cipher-Based Hash Function

    NARCIS (Netherlands)

    Liu, Bozhong; Gong, Zheng; Chen, Xiaohong; Qiu, Weidong; Zheng, Dong

    2010-01-01

    Hash functions are widely used in authentication. In this paper, the security of Lin et al.'s efficient block-cipher-based hash function is reviewed. By using Joux's multicollisions and Kelsey et al.'s expandable message techniques, we find the scheme is vulnerable to collision, preimage and second

  4. Metadata distribution algorithm based on directory hash in mass storage system

    Science.gov (United States)

    Wu, Wei; Luo, Dong-jian; Pei, Can-hao

    2008-12-01

    The distribution of metadata is very important in mass storage systems. Many storage systems use subtree partitioning or hash algorithms to distribute the metadata among a metadata server cluster. Although system access performance is improved, the scalability problem is remarkable in most of these algorithms. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key value, implements concentrated storage of metadata, and takes a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in mass storage systems by hashing on the directory and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as changing a directory name or permission, or adding or removing an MDS from the cluster. The DH algorithm reduces the number of additional requests and the scale of each data migration in scalable operations. It enhances the scalability of mass storage systems remarkably.
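
    The placement rule at the heart of DH can be sketched in a few lines. This is an illustrative reduction (MD5 over the parent directory, modulo the server count); the paper's dynamic load-balancing strategy is not shown.

        import hashlib

        def metadata_server(path: str, n_servers: int) -> int:
            """Directory-hash placement: hash the parent directory, not the file,
            so all metadata in one directory lands on the same server."""
            directory = path.rsplit("/", 1)[0] or "/"
            h = hashlib.md5(directory.encode()).digest()
            return int.from_bytes(h[:4], "big") % n_servers

        # Files in the same directory map to the same metadata server:
        print(metadata_server("/home/alice/a.txt", 8),
              metadata_server("/home/alice/b.txt", 8))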

  5. Fast and Efficient Design of a PCA-Based Hash Function

    Directory of Open Access Journals (Sweden)

    Alaa Eddine Belfedhal

    2015-05-01

    Full Text Available We propose a simple and efficient hash function based on programmable elementary cellular automata. Cryptographic hash functions are important building blocks for many cryptographic protocols, such as authentication and integrity verification. They have recently attracted exceptional research interest, especially after the increasing number of attacks against the widely used functions MD5, SHA-1 and RIPEMD, causing a crucial need to consider new hash function design and conception strategies. The proposed hash function is built using elementary cellular automata, which are very suitable for cryptographic applications due to their chaotic and complex behavior derived from simple rule interactions. The function is evaluated using several statistical tests, and the obtained results demonstrate very admissible cryptographic properties such as confusion, diffusion capability and high sensitivity to input changes. Furthermore, the hashing scheme can be easily implemented in software or hardware, and provides very competitive running performance.
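
    As an illustration of the general approach, here is a toy hash built on an elementary cellular automaton. The rule (30), state width, and round count are our assumptions, not the paper's parameters.

        def eca_step(state: int, rule: int, width: int) -> int:
            """One synchronous update of an elementary CA with periodic boundary."""
            new = 0
            for i in range(width):
                left = (state >> ((i + 1) % width)) & 1
                mid = (state >> i) & 1
                right = (state >> ((i - 1) % width)) & 1
                if (rule >> ((left << 2) | (mid << 1) | right)) & 1:
                    new |= 1 << i
            return new

        def eca_hash(message: bytes, width: int = 128, rounds: int = 8) -> int:
            """Toy CA-based hash: absorb each byte into the state, then iterate
            a chaotic rule (rule 30 here) to diffuse it across all cells."""
            state = 0x5A5A5A5A5A5A5A5A
            for byte in message:
                state ^= byte                                    # absorb
                for _ in range(rounds):
                    state = eca_step(state, rule=30, width=width)  # diffuse
            return state

        print(hex(eca_hash(b"abc")))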

  6. Robust image hashing based on random Gabor filtering and dithered lattice vector quantization.

    Science.gov (United States)

    Li, Yuenan; Lu, Zheming; Zhu, Ce; Niu, Xiamu

    2012-04-01

    In this paper, we propose a robust hash function based on random Gabor filtering and dithered lattice vector quantization (LVQ). In order to enhance the robustness against rotation manipulations, the conventional Gabor filter is adapted to be rotation invariant, and the rotation-invariant filter is randomized to facilitate secure feature extraction. In particular, a novel dithered-LVQ-based quantization scheme is proposed for robust hashing. The dithered-LVQ-based quantization scheme is well suited for robust hashing with several desirable features, including a better tradeoff between robustness and discrimination, higher randomness, and secrecy, which are validated by analytical and experimental results. The performance of the proposed hashing algorithm is evaluated over a test image database under various content-preserving manipulations. The proposed hashing algorithm shows superior robustness and discrimination performance compared with other state-of-the-art algorithms, particularly in the robustness against rotations (of large degrees).

  7. All-optical hash code generation and verification for low latency communications.

    Science.gov (United States)

    Paquot, Yvan; Schröder, Jochen; Pelusi, Mark D; Eggleton, Benjamin J

    2013-10-07

    We introduce an all-optical, format-transparent hash code generator and a hash comparator for data packet verification with low latency at high baud rates. The device is reconfigurable and able to generate hash codes based on arbitrary functions and to perform the comparison directly in the optical domain. Hash codes are calculated with custom interferometric circuits implemented with a Fourier-domain optical processor. A novel nonlinear scheme featuring multiple four-wave mixing processes in a single waveguide is implemented for simultaneous phase and amplitude comparison of the hash codes before and after transmission. We demonstrate the technique with single-polarisation BPSK and QPSK signals up to a data rate of 80 Gb/s.

  8. b-Bit Minwise Hashing for Large-Scale Linear SVM

    CERN Document Server

    Li, Ping; Konig, Christian

    2011-01-01

    In this paper, we propose to (seamlessly) integrate b-bit minwise hashing with linear SVM to substantially improve the training (and testing) efficiency using much smaller memory, with essentially no loss of accuracy. Theoretically, we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit minwise hashing matrix are all positive definite matrices (kernels). Interestingly, our proof for the positive definiteness of the b-bit minwise hashing kernel naturally suggests a simple strategy to integrate b-bit hashing with linear SVM. Our technique is particularly useful when the data can not fit in memory, which is an increasingly critical issue in large-scale machine learning. Our preliminary experimental results on a publicly available webspam dataset (350K samples and 16 million dimensions) verified the effectiveness of our algorithm. For example, the training time was reduced to merely a few seconds. In addition, our technique can be easily extended to many other linear and nonlinear machine...

  9. A novel method for one-way hash function construction based on spatiotemporal chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren Haijun [College of Software Engineering, Chongqing University, Chongqing 400044 (China); State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China)], E-mail: jhren@cqu.edu.cn; Wang Yong; Xie Qing [Key Laboratory of Electronic Commerce and Logistics of Chongqing, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Yang Huaqian [Department of Computer and Modern Education Technology, Chongqing Education of College, Chongqing 400067 (China)

    2009-11-30

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required of practical keyed hash functions.
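
    The CBC-style chaining can be sketched generically; SHA-256 stands in here for the spatiotemporal chaotic compression step, since only the zero-padding and block chaining are being illustrated, not the paper's chaotic system.

        import hashlib

        def cbc_chain_hash(message: bytes, block_size: int = 32) -> bytes:
            """CBC-style chaining: each block is mixed with the previous state
            before compression, so block order matters (illustrative stand-in)."""
            # Zero-pad to a whole number of blocks, as in the paper's first step.
            if len(message) % block_size:
                message += b"\x00" * (block_size - len(message) % block_size)
            state = b"\x00" * 32
            for i in range(0, len(message), block_size):
                block = bytes(a ^ b for a, b in zip(message[i:i + block_size], state))
                state = hashlib.sha256(block).digest()
            return state

        print(cbc_chain_hash(b"hello").hex())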

  10. Option-implied measures of equity risk

    DEFF Research Database (Denmark)

    Chang, Bo-Young; Christoffersen, Peter; Vainberg, Gregory;

    2012-01-01

    Equity risk measured by beta is of great interest to both academics and practitioners. Existing estimates of beta use historical returns. Many studies have found option-implied volatility to be a strong predictor of future realized volatility. We find that option-implied volatility and skewness are ... able to reflect sudden changes in the structure of the underlying company.

  11. Scalable partitioning and exploration of chemical spaces using geometric hashing.

    Science.gov (United States)

    Dutta, Debojyoti; Guha, Rajarshi; Jurs, Peter C; Chen, Ting

    2006-01-01

    Virtual screening (VS) has become a preferred tool to augment high-throughput screening(1) and determine new leads in the drug discovery process. The core of a VS informatics pipeline includes several data mining algorithms that work on huge databases of chemical compounds containing millions of molecular structures and their associated data. Thus, scaling traditional applications such as classification, partitioning, and outlier detection for huge chemical data sets without a significant loss in accuracy is very important. In this paper, we introduce a data mining framework built on top of a recently developed fast approximate nearest-neighbor-finding algorithm(2) called locality-sensitive hashing (LSH) that can be used to mine huge chemical spaces in a scalable fashion using very modest computational resources. The core LSH algorithm hashes chemical descriptors so that points close to each other in the descriptor space are also close to each other in the hashed space. Using this data structure, one can perform approximate nearest-neighbor searches very quickly, in sublinear time. We validate the accuracy and performance of our framework on three real data sets of sizes ranging from 4337 to 249 071 molecules. Results indicate that the identification of nearest neighbors using the LSH algorithm is at least 2 orders of magnitude faster than the traditional k-nearest-neighbor method and is over 94% accurate for most query parameters. Furthermore, when viewed as a data-partitioning procedure, the LSH algorithm lends itself to easy parallelization of nearest-neighbor classification or regression. We also apply our framework to detect outlying (diverse) compounds in a given chemical space; this algorithm is extremely rapid in determining whether a compound is located in a sparse region of chemical space or not, and it is quite accurate when compared to results obtained using principal-component-analysis-based heuristics.

  12. Connected Bit Minwise Hashing

    Institute of Scientific and Technical Information of China (English)

    袁鑫攀; 龙军; 张祖平; 罗跃逸; 张昊; 桂卫华

    2013-01-01

    Minwise hashing has become a standard technique for estimating the similarity of collections (e.g., resemblance), with applications in information retrieval. While traditional minwise hashing methods store each hashed value using 64 bits, b-bit minwise hashing stores only the lowest b bits of each hashed value (e.g., b=1 or 2), gaining substantial advantages in terms of computational efficiency and storage space. Based on the b-bit minwise hashing theory, a connected bit minwise hashing algorithm is proposed. The unbiased estimator of the resemblance and the storage factor of connected bit minwise hashing are derived theoretically. It is proved that connected bit minwise hashing greatly reduces the number of comparisons, and thus improves the efficiency of similarity estimation, without significant loss of accuracy. Several key parameters (e.g., precision, recall and efficiency) are analyzed, along with the availability of several estimators for connected bit minwise hashing. Theoretical analysis and experimental results demonstrate the effectiveness of this method.

  13. Pseudorandom Numbers and Hash Functions from Iterations of Multivariate Polynomials

    CERN Document Server

    Ostafe, Alina

    2009-01-01

    Dynamical systems generated by iterations of multivariate polynomials with slow degree growth have proved to admit good estimates of exponential sums along their orbits which in turn lead to rather stronger bounds on the discrepancy for pseudorandom vectors generated by these iterations. Here we add new arguments to our original approach and also extend some of our recent constructions and results to more general orbits of polynomial iterations which may involve distinct polynomials as well. Using this construction we design a new class of hash functions from iterations of polynomials and use our estimates to motivate their "mixing" properties.

  14. Enhanced and Fast Face Recognition by Hashing Algorithm

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2012-08-01

    Full Text Available This paper presents a face hashing technique for fast face recognition. The proposed technique employs two existing algorithms, i.e., 2-D discrete cosine transformation and K-means clustering. The image has to go through different pre-processing phases, and the two above-mentioned algorithms must be used in order to obtain the hash value of the face image. The searching process is sped up by introducing a modified form of binary search. A new database architecture called Facebases has also been introduced to further speed up the searching process.

  15. Efficient Secured Hash Based Password Authentication in Multiple Websites

    Directory of Open Access Journals (Sweden)

    T. S. Thangavel

    2010-08-01

    Full Text Available Most commercial web sites rely on a relatively weak form of password authentication: the browser simply sends a user's plaintext password to a remote web server, often using the secure sockets layer. Even when used over an encrypted connection, this form of password authentication is vulnerable to attack. In common password attacks, hackers exploit the fact that web users often use the same password at many different sites. This allows hackers to break into a low-security site that simply stores usernames/passwords in the clear and use the retrieved passwords at a high-security site. This work developed an improved secure hash function, whose security is directly related to the syndrome decoding problem from the theory of error-correcting codes. We design and develop a user interface and implement a browser extension, password hash, that strengthens web password authentication. Providing customized passwords can reduce the threat of password attacks with no server changes and little or no change to the user experience. The proposed techniques are designed to transparently provide novice users with the benefits of password practices that are otherwise only feasible for security experts. Experiments were done with Internet Explorer and Firefox implementations.
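
    The browser-extension idea, deriving a per-site password from the master password and the site's domain so that a leak at one site is useless elsewhere, can be sketched as follows. This is an HMAC-based stand-in for illustration, not the extension's actual construction.

        import base64
        import hashlib
        import hmac

        def site_password(master_password: str, domain: str) -> str:
            """Derive a per-site password: a phished or leaked password on one
            site cannot be replayed on another."""
            digest = hmac.new(master_password.encode(), domain.encode(),
                              hashlib.sha256).digest()
            return base64.b64encode(digest)[:16].decode()

        # Same master password, different sites -> unrelated site passwords.
        print(site_password("hunter2", "bank.example.com"))
        print(site_password("hunter2", "forum.example.org"))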

  16. Scalable prediction of compound-protein interactions using minwise hashing.

    Science.gov (United States)

    Tabei, Yasuo; Yamanishi, Yoshihiro

    2013-01-01

    The identification of compound-protein interactions plays key roles in drug development toward the discovery of new drug leads and new therapeutic protein targets. There is therefore a strong incentive to develop new efficient methods for predicting compound-protein interactions on a genome-wide scale. In this paper we develop a novel chemogenomic method to make scalable predictions of compound-protein interactions from heterogeneous biological data using minwise hashing. The proposed method mainly consists of two steps: 1) construction of new compact fingerprints for compound-protein pairs by an improved minwise hashing algorithm, and 2) application of a sparsity-induced classifier to the compact fingerprints. We test the proposed method on its ability to make a large-scale prediction of compound-protein interactions from compound substructure fingerprints and protein domain fingerprints, and show superior performance of the proposed method compared with the previous chemogenomic methods in terms of prediction accuracy, computational efficiency, and interpretability of the predictive model. None of the previously developed methods is computationally feasible for the full dataset consisting of about 200 million compound-protein pairs. The proposed method is expected to be useful for virtual screening of a huge number of compounds against many protein targets.

  17. Similarity Search and Locality Sensitive Hashing using TCAMs

    CERN Document Server

    Shinde, Rajendra; Gupta, Pankaj; Dutta, Debojyoti

    2010-01-01

    Similarity search methods are widely used as kernels in various machine learning applications. Nearest neighbor search (NNS) algorithms are often used to retrieve similar entries, given a query. While there exist efficient techniques for exact query lookup using hashing, similarity search using exact nearest neighbors is known to be a hard problem, and in high dimensions the best known solutions offer little improvement over a linear scan. Fast solutions to the approximate NNS problem include Locality Sensitive Hashing (LSH) based techniques, which need storage polynomial in n with exponent greater than 1, and query time sublinear, but still polynomial, in n, where n is the size of the database. In this work we present a new technique for solving the approximate NNS problem in Euclidean space using a Ternary Content Addressable Memory (TCAM), which needs near-linear space and has O(1) query time. In fact, this method also works around the best known lower bounds in the cell probe model for the query time us...

  18. Sequential Compact Code Learning for Unsupervised Image Hashing.

    Science.gov (United States)

    Liu, Li; Shao, Ling

    2016-12-01

    Effective hashing for large-scale image databases is a popular research area, attracting much attention in computer vision and visual information retrieval. Several recent methods attempt to learn either graph embedding or semantic coding for fast and accurate applications. In this paper, a novel unsupervised framework, termed evolutionary compact embedding (ECE), is introduced to automatically learn the task-specific binary hash codes. It can be regarded as an optimization algorithm that combines the genetic programming (GP) and a boosting trick. In our architecture, each bit of ECE is iteratively computed using a weak binary classification function, which is generated through GP evolving by jointly minimizing its empirical risk with the AdaBoost strategy on a training set. We address this as greedy optimization by embedding high-dimensional data points into a similarity-preserved Hamming space with a low dimension. We systematically evaluate ECE on two data sets, SIFT 1M and GIST 1M, showing the effectiveness and the accuracy of our method for a large-scale similarity search.

  19. MapReduce Parallel Cuckoo Hashing and Oblivious RAM Simulations

    CERN Document Server

    Goodrich, Michael T

    2010-01-01

    We present an efficient algorithm for performing cuckoo hashing in the MapReduce parallel model of computation and we show how this result in turn leads to improved methods for performing data-oblivious RAM simulations. Our contributions involve a number of seemingly unrelated new results, including: (i) a parallel MapReduce cuckoo hashing algorithm that runs in O(log n) time and uses O(n) total work, with very high probability; (ii) a reduction of data-oblivious simulation of sparse-streaming MapReduce algorithms to oblivious sorting; (iii) an external-memory data-oblivious sorting algorithm using O((N/B) log^2_{M/B}(N/B)) I/Os; (iv) a constant-memory data-oblivious RAM simulation with O(log^2 n) amortized time overhead, with very high probability, or with expected O(log^2 n) amortized time overhead and better constant factors; and (v) a sublinear-memory data-oblivious RAM simulation with O(n^nu) private memory and O(log n) amortized time overhead, with very high probability, for constant nu > 0. This last result is, in fact, the main result o...
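
    For readers unfamiliar with the sequential primitive being parallelized here, a minimal (non-MapReduce) cuckoo hash table looks like this:

        class CuckooHashTable:
            """Minimal two-table cuckoo hashing: each key has one slot per table;
            inserts evict residents, which are re-inserted into their other slot."""
            def __init__(self, size: int = 16):
                self.size = size
                self.tables = [[None] * size, [None] * size]

            def _slot(self, key, which: int) -> int:
                return hash((which, key)) % self.size

            def insert(self, key, max_kicks: int = 32) -> bool:
                which = 0
                for _ in range(max_kicks):
                    i = self._slot(key, which)
                    if self.tables[which][i] is None:
                        self.tables[which][i] = key
                        return True
                    self.tables[which][i], key = key, self.tables[which][i]  # evict
                    which ^= 1                # the displaced key tries the other table
                return False                  # a cycle: a rehash would be needed

            def contains(self, key) -> bool:
                return any(self.tables[w][self._slot(key, w)] == key for w in (0, 1))

        t = CuckooHashTable()
        for k in ["a", "b", "c"]:
            t.insert(k)
        print(t.contains("b"), t.contains("z"))   # True False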

  20. Content-based image hashing using wave atoms

    Institute of Scientific and Technical Information of China (English)

    Liu Fang; Leung Hon-Yin; Cheng Lee-Ming; Ji Xiao-Yong

    2012-01-01

    It is well known that robustness, fragility, and security are three important criteria of image hashing; however, how to build a system that strongly meets all three criteria is still a challenge. In this paper, a content-based image hashing scheme using wave atoms is proposed, which satisfies the above criteria. Compared with traditional transforms like the wavelet transform and the discrete cosine transform (DCT), the wave atom transform is adopted for its sparser expansion and better texture-feature extraction, which shows better performance in both robustness and fragility. In addition, multi-frequency detection is presented to provide an application-defined trade-off. To ensure the security of the proposed approach and its resistance to a chosen-plaintext attack, a randomized pixel modulation based on the Rényi chaotic map is employed, combined with the nonlinear wave atom transform. The experimental results reveal that the proposed scheme is robust against content-preserving manipulations and has a good discriminative capability to detect malicious tampering.

  1. Analysis of Hash Functions in IDEA Encryption for Information Record Security

    Directory of Open Access Journals (Sweden)

    Ramen Antonov Purba

    2014-02-01

    Full Text Available Issues of security and confidentiality of data are very important to organizations and individuals, especially if the data reside in a network of computers connected to a public network such as the Internet. Important data can, of course, be viewed or hijacked by unauthorized persons; if this happens, the data may be corrupted or even lost, causing huge material losses. This research discusses a security system for sending messages/data using encryption, with the aim of protecting a message from people who are not authorized to access it. Because such a delivery security system is very extensive in scope, this section is limited to describing the IDEA algorithm with hash functions, including encryption and decryption. By combining IDEA (International Data Encryption Algorithm) encryption of the contents of messages/data with a hash function to detect changes to the content of messages/data, the security level is expected to be better. The result of this study is software that can perform encryption and decryption of messages/data and generate a security key based on the encrypted message/data.

  2. Implied terms in English and Romanian law

    Directory of Open Access Journals (Sweden)

    Stefan Dinu

    2015-12-01

    Full Text Available This study analyses the matter of implied terms from the point of view of both English and Romanian law. First, the introductory section provides a brief overview of implied terms, defining this class of contractual clauses and describing their general features. Second, the English law position is analysed, where it is generally recognised that a term may be implied in one of three manners, which are described in turn. An emphasis is placed on the Privy Council's decision in Attorney General of Belize v Belize Telecom Ltd and its impact. Third, the Romanian law position is described, the starting point of the discussion being the provisions of Article 1272 of the 2009 Civil Code. Fourth, the study ends by mentioning some points of comparison between the two legal systems as concerns the approach to implied terms.

  3. An Ad Hoc Adaptive Hashing Technique for Non-Uniformly Distributed IP Address Lookup in Computer Networks

    Directory of Open Access Journals (Sweden)

    Christopher Martinez

    2007-02-01

    Full Text Available Hashing algorithms have long been widely adopted to design fast address lookup processes, which involve a search through a large database to find a record associated with a given key. Hashing algorithms transform a key inside each target data item into a hash value, hoping that the hashing renders the database uniformly distributed with respect to this new hash value. The closer the final distribution is to uniform, the less search time is required when a query is made. When the database is already key-wise uniformly distributed, any regular hashing algorithm, such as bit extraction or bit-group XOR, easily leads to a statistically perfect uniform distribution after hashing. On the other hand, if records in the database are not uniformly distributed, as in almost all known practical applications, then different regular hash functions lead to very different performance. When the target database has a key with a highly skewed value distribution, the performance delivered by regular hashing algorithms usually becomes far from desirable. This paper aims at designing a hashing algorithm that achieves the highest probability of producing a uniformly distributed hash result from a non-uniformly distributed database. An analytical pre-process on the original database is first performed to extract critical information that significantly benefits the design of a better hashing algorithm. This process includes sorting the bits of the key to prioritize their use in the XOR hashing sequence, in simple bit extraction, or in a combination of both. Such an ad hoc hash design is critical for adapting to real-time situations where there exists a changing (and/or expanding) database with an irregular non-uniform distribution. Significant improvement is obtained in simulation results on randomly generated data as well as real data.
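
    The pre-analysis idea, ranking key bits by how evenly they split the actual database and then XOR-combining the best ones, can be sketched as follows; the balance criterion and the pairing scheme here are simplified assumptions, not the paper's exact procedure.

        def bit_balance(keys: list[int], bit: int) -> float:
            """How evenly a single bit position splits the key set (0.5 is ideal)."""
            ones = sum((k >> bit) & 1 for k in keys)
            return min(ones, len(keys) - ones) / len(keys)

        def adaptive_xor_hash(keys: list[int], n_bits: int, width: int = 32):
            """Pick the 2*n_bits most balanced bit positions from the actual key
            distribution and XOR them pairwise into an n_bits hash value."""
            ranked = sorted(range(width), key=lambda b: -bit_balance(keys, b))
            pairs = list(zip(ranked[:n_bits], ranked[n_bits:2 * n_bits]))
            def h(key: int) -> int:
                out = 0
                for hi, lo in pairs:
                    out = (out << 1) | (((key >> hi) ^ (key >> lo)) & 1)
                return out
            return h

        ips = [0x0A000001 + i * 7919 for i in range(1000)]   # skewed sample keys
        h = adaptive_xor_hash(ips, n_bits=10)
        print(h(ips[0]), h(ips[1]))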

  4. Weighted Hashing with Multiple Cues for Cell-Level Analysis of Histopathological Images.

    Science.gov (United States)

    Zhang, Xiaofan; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Recently, content-based image retrieval has been investigated for histopathological image analysis, focusing on improving the accuracy and scalability. The main motivation is to interpret a new image (i.e., query image) by searching among a potentially large-scale database of training images in real-time. Hashing methods have been employed because of their promising performance. However, most previous works apply hashing algorithms on the whole images, while the important information of histopathological images usually lies in individual cells. In addition, they usually only hash one type of features, even though it is often necessary to inspect multiple cues of cells. Therefore, we propose a probabilistic-based hashing framework to model multiple cues of cells for accurate analysis of histopathological images. Specifically, each cue of a cell is compressed as binary codes by kernelized and supervised hashing, and the importance of each hash entry is determined adaptively according to its discriminativity, which can be represented as probability scores. Given these scores, we also propose several feature fusion and selection schemes to integrate their strengths. The classification of the whole image is conducted by aggregating the results from multiple cues of all cells. We apply our algorithm on differentiating adenocarcinoma and squamous carcinoma, i.e., two types of lung cancers, using a large dataset containing thousands of lung microscopic tissue images. It achieves 90.3% accuracy by hashing and retrieving multiple cues of half a million cells.

  5. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    Directory of Open Access Journals (Sweden)

    Rajeswari Sridhar

    2010-07-01

    Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified version of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution is handled by this hash table differs from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had better recall than modified dual ternary indexing for the sample data considered.

  7. A secure and efficient cryptographic hash function based on NewFORK-256

    Directory of Open Access Journals (Sweden)

    Harshvardhan Tiwari

    2012-11-01

    Full Text Available Cryptographic hash functions serve as a fundamental building block of information security and are used in numerous security applications and protocols, such as digital signature schemes, construction of MACs and random number generation, for ensuring data integrity and data origin authentication. Researchers have noticed serious security flaws and vulnerabilities in the most widely used MD and SHA family hash functions. As a result, hash functions from the FORK family with longer digest values were considered good alternatives for MD5 and SHA-1, but recent attacks against these hash functions have highlighted their weaknesses. In this paper we propose a dedicated hash function MNF-256 based on the design principle of NewFORK-256. It takes 512-bit message blocks and generates a 256-bit hash value. A random sequence is added as an additional input to the compression function of MNF-256. A three-branch parallel structure and a secure compression function make MNF-256 an efficient, fast and secure hash function. Various simulation results indicate that MNF-256 is immune to common cryptanalytic attacks and faster than NewFORK-256.

  8. An Extended Image Hashing Concept: Content-Based Fingerprinting Using FJLT

    Directory of Open Access Journals (Sweden)

    Xudong Lv

    2009-01-01

    Full Text Available Dimension reduction techniques, such as singular value decomposition (SVD) and nonnegative matrix factorization (NMF), have been successfully applied in image hashing by retaining the essential features of the original image matrix. However, a concern of great importance in image hashing is that no single solution is optimal and robust against all types of attacks. The contribution of this paper is threefold. First, we introduce a recently proposed dimension reduction technique, referred to as the Fast Johnson-Lindenstrauss Transform (FJLT), and propose its use for image hashing. FJLT shares the low-distortion characteristics of a random projection but requires much lower computational complexity. Second, we incorporate the Fourier-Mellin transform into FJLT hashing to improve its performance under rotation attacks. Third, we propose a new concept, namely content-based fingerprinting, as an extension of image hashing that combines different hashes. Such a combined approach is capable of tackling all types of attacks and thus can yield a better overall performance in multimedia identification. To demonstrate the superior performance of the proposed schemes, receiver operating characteristics analysis over a large image database and a large class of distortions is performed and compared with the state-of-the-art image hashing using NMF.
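
    The reduce-then-quantize pattern underlying FJLT hashing can be illustrated with a dense Johnson-Lindenstrauss projection; the real FJLT obtains the same low-distortion guarantee faster by using a sparse projection preceded by a randomized Hadamard transform. The sketch below is a simplified stand-in, not the paper's scheme.

        import numpy as np

        def jl_hash(X: np.ndarray, n_bits: int = 64, seed: int = 0) -> np.ndarray:
            """Dense Johnson-Lindenstrauss projection followed by sign quantization."""
            rng = np.random.default_rng(seed)
            P = rng.normal(size=(X.shape[1], n_bits)) / np.sqrt(n_bits)
            return (X @ P > 0).astype(np.uint8)

        rng = np.random.default_rng(1)
        img_features = rng.normal(size=(10, 1024))
        codes = jl_hash(img_features)
        # Hamming distance between codes tracks the angle between feature vectors.
        print(np.sum(codes[0] != codes[1]))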

  9. Quicksort, largest bucket, and min-wise hashing with limited independence

    DEFF Research Database (Denmark)

    Knudsen, Mathias Bæk Tejs; Stöckel, Morten

    2015-01-01

    Randomized algorithms and data structures are often analyzed under the assumption of access to a perfect source of randomness. The most fundamental metric used to measure how "random" a hash function or a random number generator is, is its independence: a sequence of random variables is said ... being more practical. We provide new bounds for randomized quicksort, min-wise hashing and largest bucket size under limited independence. Our results can be summarized as follows. Randomized Quicksort. When pivot elements are computed using a 5-independent hash function, Karloff and Raghavan, J.ACM'93...

  10. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R.

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
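
    A minimal sketch of the disclosed compress-select-hash pipeline. The compressor (zlib), the stride-based selection rule, and SHA-256 are stand-ins, since the record does not fix particular algorithms for the three modules:

```python
# Compress a media signal, select a subset of the compressed stream,
# hash the selection into a keyword. All three stages are stand-ins.
import zlib, hashlib

def keyword_from_media(media: bytes, stride: int = 7) -> bytes:
    compressed = zlib.compress(media)          # data compression module
    selected = compressed[::stride]            # data acquisition module
    return hashlib.sha256(selected).digest()   # hashing module -> keyword
```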

  11. Collision-Free Path Planning for Manipulator Based on the Grid

    Institute of Scientific and Technical Information of China (English)

    鲁守银; 韩佳林

    2014-01-01

    This article describes a method for collision-free trajectory planning of a high-voltage live-working manipulator in three-dimensional space. By partitioning the whole workspace into a grid, a complete description of the free space and the obstacle space is obtained, and each grid cell is assigned a corresponding index. A path search using the A* algorithm over this indexed grid can then quickly and accurately find the best collision-free path, reducing the running time of the system.
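
    A compact sketch of the search stage this describes: A* over an indexed 3D occupancy grid. The 6-connected neighbourhood and Manhattan heuristic are illustrative choices, not taken from the paper:

```python
# A* over a 3D occupancy grid: cells marked truthy are obstacles.
import heapq

def astar_3d(grid, start, goal):
    """grid[x][y][z] truthy = obstacle; start/goal are (x, y, z) tuples."""
    def h(c):  # Manhattan-distance heuristic
        return sum(abs(a - b) for a, b in zip(c, goal))
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy, dz in moves:
            nb = (cur[0]+dx, cur[1]+dy, cur[2]+dz)
            if not (0 <= nb[0] < nx and 0 <= nb[1] < ny and 0 <= nb[2] < nz):
                continue
            if grid[nb[0]][nb[1]][nb[2]]:     # obstacle cell
                continue
            if g + 1 < g_cost.get(nb, float("inf")):
                g_cost[nb] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nb), g + 1, nb, cur))
    return None  # no collision-free path exists
```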

  12. Frame Interpolation Based on Visual Correspondence and Coherency Sensitive Hashing

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2013-01-01

    Full Text Available The technology of frame interpolation can be applied in intelligent monitoring systems to improve the quality of surveillance video. In this paper, a region-guided frame interpolation algorithm is proposed by introducing two innovative improvements. On the one hand, a detection approach is presented based on visual correspondence for detecting the motion regions that correspond to attracted objects in video sequences, which can narrow the prediction range of interpolated frames. On the other hand, spatial and temporal mapping rules are proposed using coherency sensitive hashing, which can obtain more accurate predicted values of interpolated pixels. Experiments show that the proposed method can achieve encouraging performance in terms of visual quality and quantitative measures.

  13. Multiple structural alignment and core detection by geometric hashing.

    Science.gov (United States)

    Leibowitz, N; Fligelman, Z Y; Nussinov, R; Wolfson, H J

    1999-01-01

    A Multiple Structural Alignment algorithm is presented. The algorithm accepts an ensemble of protein structures and finds the largest substructure (core) of C alpha atoms whose geometric configuration appears in all the molecules of the ensemble. Both the detection of this core and the resulting structural alignment are done simultaneously. Other sufficiently large multistructural superimpositions are detected as well. Our method is based on the Geometric Hashing paradigm and a superimposition clustering technique which represents superimpositions by sets of matching atoms. The algorithm proved to be efficient on real data in a series of experiments. The same method can be applied to any ensemble of molecules (not necessarily proteins), since our basic technique is independent of sequence order.

  14. Planetary Nebula Candidates Uncovered with the HASH Research Platform

    CERN Document Server

    Fragkou, Vasiliki; Frew, David; Parker, Quentin

    2016-01-01

    A detailed examination of new high quality radio catalogues (e.g. Cornish) in combination with available mid-infrared (MIR) satellite imagery (e.g. Glimpse) has allowed us to find 70 new planetary nebula (PN) candidates based on existing knowledge of their typical colors and fluxes. To further examine the nature of these sources, multiple diagnostic tools have been applied to these candidates based on published data and on available imagery in the HASH (Hong Kong/AAO/Strasbourg Hα planetary nebula) research platform. Some candidates have previously-missed optical counterparts allowing for spectroscopic follow-up. Indeed, the single object spectroscopically observed so far has turned out to be a bona fide PN.

  15. Encrypted data inquiries using chained perfect hashing (CPH)

    Science.gov (United States)

    Kaabneh, Khalid; Tarawneh, Hassan; Alhadid, Issam

    2017-09-01

    Cryptography is the practice of transforming data so that it is indecipherable to a third party, unless a particular piece of secret information is made available to them. Data encryption has received great attention as a means to protect data. As data sizes grow, so does the need for efficient search over data that remains encrypted to protect it during transmission and storage. This research is based on our previous and continuing work to speed up and enhance global heuristic search over encrypted data. It uses a chained-hashing approach to reduce the search time and decrease the collision rate that most search techniques suffer from. The results were very encouraging and are discussed in the experimental results section.
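
    The abstract leaves the CPH construction itself to the paper's body; as a reference point, this is the plain separate-chaining structure that chained hashing builds on, where a collision costs only a walk down one bucket's chain (names illustrative):

```python
# A generic separate-chaining hash table: collisions land in per-bucket
# chains, so a lookup never rehashes, it just scans one short list.
class ChainedHashTable:
    def __init__(self, nbuckets: int = 1024):
        self.buckets = [[] for _ in range(nbuckets)]

    def _index(self, key) -> int:
        return hash(key) % len(self.buckets)

    def insert(self, key, value) -> None:
        self.buckets[self._index(key)].append((key, value))

    def lookup(self, key):
        # Collisions only cost a walk down one chain.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None
```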

  16. Improved Collision Search for Hash Functions: New Advanced Message Modification

    Science.gov (United States)

    Naito, Yusuke; Ohta, Kazuo; Kunihiro, Noboru

    In this paper, we discuss the collision search for hash functions, mainly in terms of their advanced message modification. The advanced message modification is a collision search tool based on Wang et al.'s attacks. Two advanced message modifications have previously been proposed: cancel modification for MD4 and MD5, and propagation modification for SHA-0. In this paper, we propose a new concept of advanced message modification, submarine modification. As a concrete example combining the ideas underlying these modifications, we apply submarine modification to the collision search for SHA-0. As a result, we show that this can reduce the collision search attack complexity from 2^39 to 2^36 SHA-0 compression operations.

  17. Construction of secure and fast hash functions using nonbinary error-correcting codes

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Preneel, Bart

    2002-01-01

    This paper considers iterated hash functions. It proposes new constructions of fast and secure compression functions with nl-bit outputs for integers n>1 based on error-correcting codes and secure compression functions with l-bit outputs. This leads to simple and practical hash function construct......, some new attacks are presented that essentially match the presented lower bounds. The constructions allow for a large degree of internal parallelism. The limits of this approach are studied in relation to bounds derived in coding theory.

  18. A Novel Block-DCT and PCA Based Image Perceptual Hashing Algorithm

    Directory of Open Access Journals (Sweden)

    Zeng Jie

    2013-01-01

    Full Text Available Image perceptual hashing finds applications in content indexing, large-scale image database management, certification and authentication and digital watermarking. We propose a Block-DCT and PCA based image perceptual hash in this article and explore the algorithm in the application of tamper detection. The main idea of the algorithm is to integrate color histogram and DCT coefficients of image blocks as perceptual feature, then to compress perceptual features as inter-feature with PCA, and to threshold to create a robust hash. The robustness and discrimination properties of the proposed algorithm are evaluated in detail. Experimental results show that the proposed image perceptual hash algorithm can effectively address the tamper detection problem with advantageous robustness and discrimination.
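
    A hedged sketch in the spirit of the Block-DCT stage: hash bits from thresholding each block's low-frequency DCT coefficients against their median. The colour-histogram feature and the PCA compression stage of the full algorithm are omitted here, and the block size and 4x4 coefficient window are assumptions:

```python
# Block-DCT bit extraction: per-block low-frequency DCT coefficients
# thresholded against their median. Omits the colour histogram and PCA
# stages of the full algorithm described above.
import numpy as np
from scipy.fftpack import dct

def block_dct_bits(gray: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = gray.shape
    bits = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = gray[i:i + block, j:j + block].astype(float)
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            low = c[:4, :4].ravel()            # keep low frequencies only
            bits.append(low > np.median(low))  # robust binary feature
    return np.concatenate(bits).astype(np.uint8)
```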

  19. Fast image search with locality-sensitive hashing and homogeneous kernels map.

    Science.gov (United States)

    Li, Jun-yi; Li, Jian-hua

    2015-01-01

    Fast image search with efficient additive kernels and kernel locality-sensitive hashing has been proposed. To support kernel functions, recent work has explored methods for constructing locality-sensitive hashing that preserve linear query time; however, existing locality-sensitive hashing (LSH) methods sacrifice accuracy of the search results in order to allow fast queries. To improve the search accuracy, we show how to apply explicit feature maps of the homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods and makes object classification and content-based retrieval faster and more accurate.

  20. Fast Image Search with Locality-Sensitive Hashing and Homogeneous Kernels Map

    Directory of Open Access Journals (Sweden)

    Jun-yi Li

    2015-01-01

    Full Text Available Fast image search with efficient additive kernels and kernel locality-sensitive hashing has been proposed. To support kernel functions, recent work has explored methods for constructing locality-sensitive hashing that preserve linear query time; however, existing locality-sensitive hashing (LSH) methods sacrifice accuracy of the search results in order to allow fast queries. To improve the search accuracy, we show how to apply explicit feature maps of the homogeneous kernels, which help in feature transformation, and combine them with kernel locality-sensitive hashing. We evaluate our method on several large datasets and show that it improves accuracy relative to commonly used methods and makes object classification and content-based retrieval faster and more accurate.
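
    A minimal random-hyperplane LSH sketch of the kind such kernel-LSH pipelines build on, applied to vectors that have already been through a feature map; the number of bits is illustrative:

```python
# Random-hyperplane LSH: near vectors (by cosine similarity) collide in
# the same bucket with high probability.
import numpy as np

rng = np.random.default_rng(0)

def make_lsh(dim: int, nbits: int = 16):
    planes = rng.normal(size=(nbits, dim))     # random hyperplane normals
    def code(x: np.ndarray) -> int:
        bits = planes @ x > 0
        return int("".join("1" if b else "0" for b in bits), 2)
    return code

# Usage: bucket a database, then probe only the query's bucket, e.g.
# code = make_lsh(dim=512); table.setdefault(code(v), []).append(v)
```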

  1. Study on An Absolute Non-Collision Hash and Jumping Table IP Classification Algorithms

    Institute of Scientific and Technical Information of China (English)

    SHANG Feng-jun; PAN Ying-jun

    2004-01-01

    In order to classify packets, we propose a novel IP classification algorithm based on a non-collision hash and jumping-table Trie-tree (NHJTTT), which builds on the non-collision hash Trie-tree and on the 2-dimensional classification algorithm (LS algorithm) proposed by Lakshman and Stiliadis. The core of the algorithm consists of two parts: constructing the non-collision hash function, which is based mainly on the destination/source port and protocol type fields, so that the hash function can avoid the space explosion problem; and introducing a jumping-table Trie-tree based on the LS algorithm in order to reduce time complexity. The test results show that the classification rate of the NHJTTT algorithm is up to 1 million packets per second and the maximum memory consumed is 9 MB for 10,000 rules.

  2. HashDist: Reproducible, Relocatable, Customizable, Cross-Platform Software Stacks for Open Hydrological Science

    Science.gov (United States)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as cloud, and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and

  3. Bin-Hash Indexing: A Parallel Method for Fast Query Processing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, Edward W; Gosink, Luke J.; Wu, Kesheng; Bethel, Edward Wes; Owens, John D.; Joy, Kenneth I.

    2008-06-27

    This paper presents a new parallel indexing data structure for answering queries. The index, called Bin-Hash, offers extremely high levels of concurrency, and is therefore well-suited for emerging commodity parallel processors, such as multi-cores, cell processors, and general-purpose graphics processing units (GPUs). The Bin-Hash approach first bins the base data, and then partitions and separately stores the values in each bin as a perfect spatial hash table. To answer a query, we first determine whether or not a record satisfies the query conditions based on the bin boundaries. For the bins with records that cannot be resolved, we examine the spatial hash tables. The procedures for examining the bin numbers and the spatial hash tables offer the maximum possible level of concurrency; all records can be evaluated by our procedure independently in parallel. Additionally, our Bin-Hash procedures access much smaller amounts of data than similar parallel methods, such as the projection index. This smaller data footprint is critical for certain parallel processors, like GPUs, where memory resources are limited. To demonstrate the effectiveness of Bin-Hash, we implement it on a GPU using the data-parallel programming language CUDA. The concurrency offered by the Bin-Hash index allows us to fully utilize the GPU's massive parallelism in our work; over 12,000 records can be simultaneously evaluated at any one time. We show that our new query processing method is an order of magnitude faster than current state-of-the-art CPU-based indexing technologies. Additionally, we compare our performance to existing GPU-based projection index strategies.

  4. Practical security and privacy attacks against biometric hashing using sparse recovery

    Science.gov (United States)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

    Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that we can achieve higher level of security threats using compressed sensing recovery techniques. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.

  5. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    CERN Document Server

    Sridhar, Rajeswari; Karthiga, S; T, Geetha; 10.5121/ijaia.2010.1305

    2010-01-01

    In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrase and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution was handled by this hash table is different than the normal hash table approaches. It was observed that multi-key hashing based retrieval had a lesser ...

  6. Fully Integrated Passive UHF RFID Tag for Hash-Based Mutual Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Shugo Mikami

    2015-01-01

    Full Text Available Passive radio-frequency identification (RFID) tags have been used in many applications. While the RFID market is expected to grow, concerns about the security and privacy of the RFID tag must be overcome for future use. To overcome these issues, privacy-preserving authentication protocols based on cryptographic algorithms have been designed. However, to the best of our knowledge, evaluation of the whole tag, which includes an antenna, an analog front end, and a digital processing block, running such authentication protocols has not been studied. In this paper, we present an implementation and evaluation of a fully integrated passive UHF RFID tag that runs a privacy-preserving mutual authentication protocol based on a hash function. We design a single chip including the analog front end and the digital processing block. We select a lightweight hash function supporting 80-bit security strength and a standard hash function supporting 128-bit security strength. We show that when the lightweight hash function is used, the tag completes the protocol with a reader-tag distance of 10 cm. Similarly, when the standard hash function is used, the tag completes the protocol at a distance of 8.5 cm. We discuss the impact of the tag's peak power consumption, due to the hash function, on the communication distance of the tag.

  7. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX......-hash-then-sign signature scheme, one has to solve a cryptanalytical task which is related to finding second preimages for the hash function. In this article, we will show how to use Dean's method of finding expandable messages for finding a second preimage in the Merkle-Damgård hash function to existentially forge...... a signature scheme based on a t-bit RMX-hash function which uses the Davies-Meyer compression functions (e.g., MD4, MD5, SHA family) in 2^{t/2} chosen messages plus 2^{t/2+1} off-line operations of the compression function and similar amount of memory. This forgery attack also works on the signature...

  8. Asset allocation using option-implied moments

    Science.gov (United States)

    Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.

    2017-09-01

    This study uses an option-implied distribution as the input in asset allocation. The computation of risk-neutral densities (RND) are based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RNDs estimation does not incorporate risk premium, the conversion of RND into risk-world density (RWD) is required. The RWD is obtained through parametric calibration using the beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated by using portfolio volatility and Sharpe ratio.

  9. b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning and Using GPUs for Fast Preprocessing with Simple Hash Functions

    CERN Document Server

    Li, Ping; Konig, Arnd Christian

    2012-01-01

    In this paper, we study several critical issues which must be tackled before one can apply b-bit minwise hashing to the volumes of data often used in industrial applications, especially in the context of search. 1. (b-bit) Minwise hashing requires an expensive preprocessing step that computes k (e.g., 500) minimal values after applying the corresponding permutations to each data vector. We developed a parallelization scheme using GPUs and observed that the preprocessing time can be reduced by a factor of 20-80, becoming substantially smaller than the data loading time. 2. One major advantage of b-bit minwise hashing is that it can substantially reduce the amount of memory required for batch learning. However, as online algorithms become increasingly popular for large-scale learning in the context of search, it is not clear whether b-bit minwise hashing yields significant improvements for them. This paper demonstrates that b-bit minwise hashing provides an effective data size/dimension reduction scheme and hence it can d...
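
    A small illustration of the b-bit idea: keep only the lowest b bits of each of k min-hashes and compare signatures by bit-match rate. Elements are assumed to be integer-coded features, k and b are illustrative, and the paper's unbiased resemblance estimator is only gestured at in the closing comment:

```python
# b-bit minwise signatures: k min-hashes, truncated to their lowest b bits.
import random

P = (1 << 61) - 1  # prime modulus for the 2-universal hash family

def bbit_signature(elements, k: int = 500, b: int = 2, seed: int = 0):
    rnd = random.Random(seed)
    params = [(rnd.randrange(1, P), rnd.randrange(P)) for _ in range(k)]
    mask = (1 << b) - 1
    return [min((a * e + c) % P for e in elements) & mask for a, c in params]

def bit_match_rate(s1, s2) -> float:
    return sum(x == y for x, y in zip(s1, s2)) / len(s1)

# A match rate m roughly estimates resemblance R via
# R ~ (m - 1/2**b) / (1 - 1/2**b), correcting for accidental b-bit matches.
```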

  10. Improved Collision Attack on Hash Function MD5

    Institute of Scientific and Technical Information of China (English)

    Jie Liang; Xue-Jia Lai

    2007-01-01

    In this paper, we present a fast attack algorithm to find two-block collisions of the hash function MD5. The algorithm is based on the two-block collision differential path of MD5 presented by Wang et al. at EUROCRYPT 2005. We found that the derived conditions for the desired collision differential path were not sufficient to guarantee the path to hold, and that some conditions could be modified to enlarge the collision set. By using the technique of small-range searching and omitting the computing steps that check the characteristics in the attack algorithm, we can speed up the attack on MD5 efficiently. Compared with the advanced message modification technique presented by Wang et al., the small-range searching technique can correct 4 more conditions for the first iteration differential and 3 more conditions for the second iteration differential, thus improving the probability and the complexity of finding collisions. The whole attack on MD5 can be accomplished within 5 hours using a PC with a Pentium 4 1.70GHz CPU.

  11. Hash-chain-based authentication for IoT

    Directory of Open Access Journals (Sweden)

    Antonio PINTO

    2016-12-01

    Full Text Available The number of everyday interconnected devices continues to increase and constitutes the Internet of Things (IoT). Things are small computers equipped with sensors and wireless communications capabilities that are driven by energy constraints, since they use batteries and may be required to operate over long periods of time. The majority of these devices perform data collection. The collected data is stored on-line using web-services that, sometimes, operate without any special considerations regarding security and privacy. The current work proposes a modified hash-chain authentication mechanism that, with the help of a smartphone, can authenticate each interaction of the devices with a REST web-service using One Time Passwords (OTPs) while using open wireless networks. Moreover, the proposed authentication mechanism adheres to the stateless, HTTP-like behavior expected of REST web-services, even allowing the caching of server authentication replies within a predefined time window. No other known web-service authentication mechanism operates in such a manner.
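
    A minimal sketch of the hash-chain one-time-password mechanism such schemes build on (Lamport-style): the device holds successive hashes of a seed, the server stores only the chain's anchor, and each request reveals the next preimage. Chain length and names are illustrative:

```python
# Lamport-style hash-chain OTPs: each accepted OTP becomes the value the
# next OTP must hash to, so every request spends one chain link.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain            # device keeps the chain; server gets chain[-1]

class Verifier:
    """Server-side check: a valid OTP hashes to the last accepted value."""
    def __init__(self, anchor: bytes):
        self.current = anchor

    def verify(self, otp: bytes) -> bool:
        if h(otp) == self.current:
            self.current = otp   # the next OTP must hash to this one
            return True
        return False

chain = make_chain(b"device-secret", 1000)
server = Verifier(chain[-1])
assert server.verify(chain[-2]) and server.verify(chain[-3])
```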

  12. Perceptual hashing of sheet music based on graphical representation

    Science.gov (United States)

    Kremser, Gert; Schmucker, Martin

    2006-02-01

    For the protection of Intellectual Property Rights (IPR), different passive protection methods have been developed. These watermarking and fingerprinting technologies protect content beyond access control, so that tracing illegal distributions, as well as identifying the people responsible for an illegal distribution, is possible. The public's attention was drawn especially to the second application by the illegal distribution of the so-called 'Hollywood screeners'. The focus of current research is on audio and video content and images. These are the common content types we are faced with every day, and they mostly have a huge commercial value. The illegal distribution of content that has not been officially published in particular shows the potential commercial impact of illegal distributions. Content types, however, are not limited to audio, video and images. There is a range of other content types which also deserve the development of passive protection technologies. For sheet music, for instance, different watermarking technologies have been developed, which up to this point only function within certain limitations. This is the reason why we wanted to find out how to develop a fingerprinting or perceptual hashing method for sheet music. In this article, we describe the development of our algorithm for sheet music, which is based on simple graphical features. We describe the selection of these features and the subsequent processing steps. The resulting compact representation is analyzed and first performance results are reported.

  13. Recent development of perceptual image hashing

    Institute of Scientific and Technical Information of China (English)

    王朔中; 张新鹏

    2007-01-01

    The easy generation, storage, transmission and reproduction of digital images have caused serious abuse and security problems. Assurance of the rightful ownership, integrity, and authenticity is a major concern to the academia as well as the industry. On the other hand, efficient search of the huge amount of images has become a great challenge. Image hashing is a technique suitable for use in image authentication and content based image retrieval (CBIR). In this article, we review some representative image hashing techniques proposed in recent years, with emphasis on how to meet the conflicting requirements of perceptual robustness and security. Following a brief introduction to some earlier methods, we focus on a typical two-stage structure and some geometric-distortion resilient techniques. We then introduce two image hashing approaches developed in our own research, and reveal security problems in some existing methods due to the absence of secret keys in certain stages of the image feature extraction, or the availability of a large quantity of images, keys, or the hash function to the adversary. More research efforts are needed in developing truly robust and secure image hashing techniques.

  14. Clustering Web Documents based on Efficient Multi-Tire Hashing Algorithm for Mining Frequent Termsets

    Directory of Open Access Journals (Sweden)

    Noha Negm

    2013-06-01

    Full Text Available Document clustering is one of the main themes in text mining. It refers to the process of grouping documents with similar contents or topics into clusters to improve both the availability and reliability of text mining applications. Some recent algorithms address the problem of the high dimensionality of text by using frequent termsets for clustering. Despite its drawbacks, the Apriori algorithm is still the basic algorithm for mining frequent termsets. This paper presents an approach for Clustering Web Documents based on a Hashing algorithm for mining Frequent Termsets (CWDHFT). It introduces an efficient Multi-Tire Hashing algorithm for mining Frequent Termsets (MTHFT) instead of the Apriori algorithm. The algorithm uses a new methodology for generating frequent termsets by building the multi-tire hash table during the scanning process of documents only once. To avoid hash collisions, the multi-tire technique is utilized in the proposed hashing algorithm. Based on the generated frequent termsets, the documents are partitioned, and clustering occurs by grouping the partitions through descriptive keywords. By using the MTHFT algorithm, the scanning cost and computational cost are improved, and the performance is considerably increased, speeding up the clustering process. The CWDHFT approach improved accuracy, scalability and efficiency when compared with existing clustering algorithms like Bisecting K-means and FIHC.

  15. Biometric hashing for handwriting: entropy-based feature selection and semantic fusion

    Science.gov (United States)

    Scheidat, Tobias; Vielhauer, Claus

    2008-02-01

    Some biometric algorithms suffer from the problem of using a great number of features extracted from the raw data. This often results in feature vectors of high dimensionality and thus high computational complexity. However, in many cases subsets of features contribute little or nothing to the correct classification of biometric algorithms. The process of choosing more discriminative features from a given set is commonly referred to as feature selection. In this paper we present a study on feature selection for an existing biometric hash generation algorithm for the handwriting modality, based on the strategy of entropy analysis of single components of biometric hash vectors, in order to identify and suppress elements carrying little information. To evaluate the impact of our feature selection scheme on the authentication performance of our biometric algorithm, we present an experimental study based on data from 86 users. Besides discussing common biometric error rates such as Equal Error Rates, we suggest a novel measure to determine the reproduction rate probability for biometric hashes. Our experiments show that, while the feature set size may be significantly reduced by 45% using our scheme, there are only marginal changes both in the results of a verification process and in the reproducibility of biometric hashes. Since multi-biometrics is a recent topic, we additionally carry out a first study on pairwise multi-semantic fusion based on reduced hashes and analyze it by the introduced reproducibility measure.

  16. Improved locality-sensitive hashing method for the approximate nearest neighbor problem

    Science.gov (United States)

    Lu, Ying-Hua; Ma, Ting-Huai; Zhong, Shui-Ming; Cao, Jie; Wang, Xin; Abdullah, Al-Dhelaan

    2014-08-01

    In recent years, the nearest neighbor search (NNS) problem has arisen in many interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for the approximate nearest neighbor problem, has proved to be an efficient method for solving the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces a novel improved algorithm called randomness-based locality-sensitive hashing (RLSH). Our proposed algorithm modifies the query strategy so that it randomly selects a single hash table into which to project the query point, instead of mapping the query point into all hash tables during the nearest neighbor query, and then reconstructs the candidate points for finding the nearest neighbors. This improvement ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall. Moreover, this strategy is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments are executed on a synthetic dataset and an open dataset. The results show that our method requires less time and less space than p-stable LSH while achieving the same recall.

  17. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    Full Text Available The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in point-localization problems. A method of constructing the corresponding hash matrix by a parallel algorithm is considered. An algorithm for parallel geometric hashing using the «pool of processes» development pattern is proposed. The implementation of the algorithm is written in the Python programming language, using the NumPy package for manipulating multidimensional data. To implement the process pool, it is proposed to use the ProcessPoolExecutor class imported from the module concurrent.futures, which has been included in the distribution of the Python interpreter since version 3.2. All the solutions are presented in the paper by corresponding UML class diagrams. The designed GeomNash package includes the classes Data, Result, GeomHash, and Job. The results of the developed program are presented in the corresponding graphs. The article also presents a theoretical justification for applying a process pool to the implementation of parallel algorithms: the condition t2 > (p/(p-1))*t1 for the appropriateness of a process pool is obtained, where t1 is the time to transmit a unit of data between processes, t2 is the time for one processor to process a unit of data, and p is the number of processes.
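
    A condensed sketch of the pattern the paper describes: NumPy for the per-chunk arithmetic and ProcessPoolExecutor as the pool. The grid-quantization bucket rule here stands in for the paper's actual geometric-hash construction:

```python
# Parallel bucket construction: each worker hashes one chunk of points
# with vectorized NumPy, then the partial tables are merged.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

CELL = 0.25  # quantization step of the hash grid (illustrative)

def hash_chunk(points: np.ndarray) -> dict:
    keys = np.floor(points / CELL).astype(int)       # vectorized bucketing
    table = {}
    for idx, key in enumerate(map(tuple, keys)):
        table.setdefault(key, []).append(idx)
    return table

def parallel_geometric_hash(points: np.ndarray, workers: int = 4) -> dict:
    chunks = np.array_split(points, workers)
    offsets = np.cumsum([0] + [len(c) for c in chunks[:-1]])
    merged = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for offset, part in zip(offsets, pool.map(hash_chunk, chunks)):
            for key, idxs in part.items():
                merged.setdefault(key, []).extend(int(i + offset) for i in idxs)
    return merged

if __name__ == "__main__":   # guard required for process pools on spawn platforms
    pts = np.random.default_rng(1).normal(size=(10000, 3))
    table = parallel_geometric_hash(pts)
```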

  18. Internal differential collision attacks on the reduced-round Grøstl-0 hash function

    DEFF Research Database (Denmark)

    Ideguchi, Kota; Tischhauser, Elmar Wolfgang; Preneel, Bart

    2014-01-01

    . This results in collision attacks and semi-free-start collision attacks on the Grøstl-0 hash function and compression function with reduced rounds. Specifically, we show collision attacks on the Grøstl-0-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities 2^48 and 2^112 and on the Grøstl-0-512 hash function reduced to 6 out of 14 rounds with time complexity 2^183. Furthermore, we demonstrate semi-free-start collision attacks on the Grøstl-0-256 compression function reduced to 8 rounds and the Grøstl-0-512 compression function reduced to 9 rounds. Finally, we show improved...

  19. An algorithm for the detection of move repetition without the use of hash-keys

    Directory of Open Access Journals (Sweden)

    Vučković Vladan

    2007-01-01

    Full Text Available This paper addresses the theoretical and practical aspects of an important problem in computer chess programming - the problem of draw detection in cases of position repetition. The standard approach used in the majority of computer chess programs is hash-oriented. This method is sufficient in most cases, as the Zobrist keys are already present due to the systemic positional hashing, so that they need not be computed anew for the purpose of draw detection. The new type of the algorithm that we have developed solves the problem of draw detection in cases when Zobrist keys are not used in the program, i.e. in cases when the memory is not hashed.
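
    For contrast, the hash-oriented baseline the paper departs from is Zobrist keying: each (piece, square) pair gets a fixed random bitstring, and a position's key is the XOR of its occupied pairs. A minimal sketch, with the side-to-move, castling and en passant terms omitted:

```python
# Zobrist position keys: XOR of fixed random 64-bit strings, one per
# (piece, square) pair. Repetition detection compares keys along the
# game history; updates are O(1) per move.
import random

rnd = random.Random(2024)
PIECES = "PNBRQKpnbrqk"
ZOBRIST = {(p, sq): rnd.getrandbits(64) for p in PIECES for sq in range(64)}

def position_key(board: dict) -> int:
    """board maps square index (0..63) -> piece letter."""
    key = 0
    for sq, piece in board.items():
        key ^= ZOBRIST[(piece, sq)]
    return key

# Incremental update for moving piece p from square a to square b:
# key ^= ZOBRIST[(p, a)] ^ ZOBRIST[(p, b)]
```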

  20. Does implied volatility of currency futures option imply volatility of exchange rates?

    Science.gov (United States)

    Wang, Alan T.

    2007-02-01

    By investigating currency futures options, this paper provides an alternative economic implication for the result reported by Stein [Overreactions in the options market, Journal of Finance 44 (1989) 1011-1023] that long-maturity options tend to overreact to changes in the implied volatility of short-maturity options. When a GARCH process is assumed for exchange rates, a continuous-time relationship is developed. We provide evidence that implied volatilities may not be the simple average of future expected volatilities. By comparing the term-structure relationship of implied volatilities with the process of the underlying exchange rates, we find that long-maturity options are more consistent with the exchange rates process. In sum, short-maturity options overreact to the dynamics of underlying assets rather than long-maturity options overreacting to short-maturity options.

  1. On Randomizing Hash Functions to Strengthen the Security of Digital Signatures

    DEFF Research Database (Denmark)

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX...... that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack....

  2. Dakota- Hashing from a Combination of Modular Arithmetic and Symmetric Cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars; Thomsen, Søren S

    2008-01-01

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assum......

  3. Dakota – Hashing from a Combination of Modular Arithmetic and Symmetric Cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assum......

  4. Dakota - hashing from a combination of modular arithmetic and symmetric cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assum......

  5. Secondary Hash + Binsearch Maximal Match Algorithm for Chinese Word Segmentation

    Institute of Scientific and Technical Information of China (English)

    杨安生

    2009-01-01

    Through an analysis of existing word-segmentation algorithms, especially fast segmentation algorithms, a new dictionary structure for segmentation is proposed. On this basis, a fast word-segmentation algorithm combining a secondary hash with binary-search maximal matching is presented. The algorithm achieves a relatively high segmentation speed.
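
    An illustrative forward-maximum-match segmenter over a hash-indexed dictionary: a dict lookup on the first character plays the role of the hashing stage, and bisect provides the binary search within each bucket. This does not reproduce the paper's exact two-level dictionary layout:

```python
# Forward maximum matching over a first-character-hashed, sorted dictionary.
import bisect
from collections import defaultdict

def build_index(words):
    index = defaultdict(list)
    for w in words:
        index[w[0]].append(w)        # hash stage: bucket by first character
    for bucket in index.values():
        bucket.sort()                # enables binary search per bucket
    return index

def contains(index, w) -> bool:
    bucket = index.get(w[0], [])
    i = bisect.bisect_left(bucket, w)
    return i < len(bucket) and bucket[i] == w

def segment(text, index, max_len: int = 6):
    out, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            cand = text[i:i + L]
            if L == 1 or contains(index, cand):  # longest match wins;
                out.append(cand)                 # single chars pass as OOV
                i += L
                break
    return out
```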

  6. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption.

    Science.gov (United States)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-29

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  7. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    Science.gov (United States)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  8. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    Science.gov (United States)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196

  9. Design and Implementation of Hash Interface Based on COS

    Institute of Scientific and Technical Information of China (English)

    郑斌; 李峥; 王瑞蛟

    2011-01-01

    To solve the problem of poor extensibility of hash algorithms based on a Chip Operating System (COS), a flexible hash interface is designed. The interface takes an object-oriented approach and is made up of a hash algorithm interface and a hash algorithm setting interface. The hash algorithm interface is instantiated by the hash algorithm setting interface, which is stored in EEPROM, so that it can provide cryptographic services. Experimental results show that the hash interface has good extensibility for adding other algorithms and achieves its design goals.

  10. Prevention of Cross-Site Scripting Vulnerabilities using Dynamic Hash Generation Technique on the Server Side

    Directory of Open Access Journals (Sweden)

    Shashank Gupta

    2012-09-01

    Full Text Available Cookies are a means to provide stateful communication over HTTP. In the World Wide Web (WWW), once the user using a web browser has been successfully authenticated by the web server of the web application, the web server generates and transfers a cookie to the web browser. Each time the user then wants to send a request to the web server as part of the active connection, the user has to include the corresponding cookie in the request, so that the web server associates the cookie with the corresponding user. Cookies are the mechanism that maintains an authentication state between the user and the web application, and they are therefore possible targets for attackers. Cross-Site Scripting (XSS) is one such attack against web applications, in which a user's browser resources (e.g. cookies) are compromised. In this paper, a novel technique called the Dynamic Hash Generation Technique is introduced, whose aim is to make cookies worthless for attackers. This technique is implemented on the server side, whose main task is to generate a hash of the value of the name attribute in the cookie and send this hash value to the web browser. With this technique, the hash value of the name attribute in the cookie, which is stored in the browser's database, is not valid for attackers to exploit the vulnerabilities of XSS attacks.
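
    A hedged sketch of the server-side idea: the browser only ever holds a keyed hash of the session value, and the server keeps the mapping, so a cookie stolen through XSS is useless away from the server. HMAC-SHA256 and all names here are assumptions, not the paper's exact protocol:

```python
# Server-side keyed hashing of cookie values: the browser stores only the
# digest; the server resolves it back to the real session state.
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # never leaves the server
sessions = {}                          # digest -> real session state

def issue_cookie(session_id: str) -> str:
    digest = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    sessions[digest] = session_id      # server-side association
    return digest                      # value sent in the Set-Cookie header

def resolve_cookie(cookie_value: str):
    return sessions.get(cookie_value)  # forged values resolve to nothing
```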

  11. Fortification of Transport Layer Security Protocol with Hashed Fingerprint Identity Parameter

    Directory of Open Access Journals (Sweden)

    Kuljeet Kaur

    2012-03-01

    Full Text Available Identity over public links becomes quite complex, as client and server need proper access rights with authentication. For determining a client's identity with a password, the Secure Shell Protocol or a Public Key Infrastructure is deployed by various organizations. For end-to-end transport security, SSL (Secure Socket Layer) is the de facto standard, with its Record and Handshake protocols dealing with data integrity and data security respectively. It seems secure, but many risks lurk in its use, so the focus of this paper is formulating the steps to be used for the enhancement of SSL. In this research paper, one more tier of security is added to the transport layer security protocol by using fingerprints for identity authentication along with passwords. Bio-hashing, performed with the help of minutiae points of the fingerprints, is used for mutual authentication. A new hash algorithm, RNA-FINNT, is generated in this research paper for converting minutiae points into a hashed code. The value of the hashed code is stored in the database in the multi-server environment of an organization. The paper performs mutual authentication in the multi-server environment of an organization with the use of both fingerprint and password as identity authentication parameters. This strengthens the Record and Handshake protocols, which enhances SSL, and further enhancement of SSL results in the fortification of the Transport Layer Security protocol.

  12. Security analysis of a one-way hash function based on spatiotemporal chaos

    Institute of Scientific and Technical Information of China (English)

    Wang Shi-Hong; Shan Peng-Yang

    2011-01-01

    The collision and statistical properties of a one-way hash function based on spatiotemporal chaos are investigated. Analysis and simulation results indicate that collisions exist in the original algorithm and that it is therefore insecure and vulnerable. An improved algorithm is proposed to avoid the collisions.

  13. Broadcast authentication for wireless sensor networks using nested hashing and the Chinese remainder theorem.

    Science.gov (United States)

    Eldefrawy, Mohamed Hamdy; Khan, Muhammad Khurram; Alghathbar, Khaled; Cho, Eun-Suk

    2010-01-01

    Secure broadcasting is an essential feature for critical operations in wireless sensor networks (WSNs). However, due to the limited resources of sensor networks, verifying the authenticity of broadcast messages is a very difficult issue. μTESLA is a broadcast authentication protocol which uses network-wide loose time synchronization with one-way hashed keys to provide authenticity verification. However, it suffers from several flaws with respect to delay tolerance and the chain length restriction. In this paper, we propose a protocol which provides broadcast authentication for wireless sensor networks. This protocol uses a nested hash chain of two different hash functions and the Chinese Remainder Theorem (CRT). The two different nested hash functions are employed for seed updating and key generation. Each sensor node is challenged independently with a common broadcast message using the CRT. Our algorithm provides forward and non-restricted key generation and, in addition, no time synchronization is required. Furthermore, receivers can instantly authenticate packets in real time. Moreover, comprehensive analysis shows that this scheme is efficient and practical, and can achieve better performance than the μTESLA system.

  14. Effects of whey and molasses as silage additives on potato hash ...

    African Journals Online (AJOL)

    Effects of whey and molasses as silage additives on potato hash silage ... by higher concentrations of butyric acid, ammonia-N and pH compared to the other silages. ... inclusion level of 20% without any adverse effect on animal performance.

  15. Rebound Attacks on the Reduced Grøstl Hash Function

    DEFF Research Database (Denmark)

    Mendel, Florian; Rechberger, C.; Schlaffer, Martin;

    2010-01-01

    Grøstl is one of 14 second round candidates of the NIST SHA-3 competition. Cryptanalytic results on the wide-pipe compression function of Grøstl-256 have already been published. However, little is known about the hash function, arguably a much more interesting cryptanalytic setting. Also, Grøstl-...

  16. A reliable power management scheme for consistent hashing based distributed key value storage systems

    Institute of Scientific and Technical Information of China (English)

    Nan-nan ZHAO; Ji-guang WAN; Jun WANG; Chang-sheng XIE

    2016-01-01

    Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. Nowadays, enterprise data centers are facing growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent hashing based distributed key value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on nonoverlapping tiers of nodes in the ring. This allows the system to fall into various power modes by powering down subsets of servers while not violating data availability. The predictive PMS predicts workloads and adapts to load fluctuation. It cooperates with the multi-tier replication strategy to provide power proportionality for the system. To ensure that the reliability of the system is maintained when replicas are powered down, we redirect writes intended for standby replicas to active servers, which ensures the failure tolerance of the system. GreenCHT is implemented based on Sheepdog, a distributed key value storage system that uses consistent hashing as an underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, the evaluation results show that GreenCHT can provide significant power savings while maintaining the desired performance. We observe that GreenCHT can reduce power consumption by 35%–61%.
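
    A minimal consistent-hashing ring of the kind GreenCHT layers its tiered replica placement on: keys and (virtual) nodes share one hash space, and a key is served by the first node clockwise from its hash. The tiering itself is not reproduced here, and the virtual-node count is an illustrative choice:

```python
# Basic consistent hash ring with virtual nodes.
import bisect, hashlib

def ring_hash(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class ConsistentHashRing:
    def __init__(self, nodes, vnodes: int = 64):
        self.ring = sorted(
            (ring_hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping around).
        i = bisect.bisect(self.points, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("object-123")   # stable under node joins/leaves
```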

  17. Broadcast Authentication for Wireless Sensor Networks Using Nested Hashing and the Chinese Remainder Theorem

    Directory of Open Access Journals (Sweden)

    Eun-Suk Cho

    2010-09-01

    Full Text Available Secure broadcasting is an essential feature for critical operations in wireless sensor networks (WSNs). However, due to the limited resources of sensor networks, verifying the authenticity of broadcast messages is a very difficult issue. μTESLA is a broadcast authentication protocol which uses network-wide loose time synchronization with one-way hashed keys to provide authenticity verification. However, it suffers from several flaws with respect to delay tolerance and the chain length restriction. In this paper, we propose a protocol which provides broadcast authentication for wireless sensor networks. This protocol uses a nested hash chain of two different hash functions and the Chinese Remainder Theorem (CRT). The two different nested hash functions are employed for seed updating and key generation. Each sensor node is challenged independently with a common broadcast message using the CRT. Our algorithm provides forward and non-restricted key generation and, in addition, no time synchronization is required. Furthermore, receivers can instantly authenticate packets in real time. Moreover, comprehensive analysis shows that this scheme is efficient and practical, and can achieve better performance than the μTESLA system.

  18. Evaluating Locality Sensitive Hashing for Matching Partial Image Patches in a Social Media Setting

    Directory of Open Access Journals (Sweden)

    Shaun Bangay

    2014-01-01

    Full Text Available Images posted to a social media site can employ image completion techniques to efficiently and seamlessly remove sensitive content and safeguard privacy. Image completion algorithms typically employ a time-consuming patch-matching stage derived from nearest-neighbour search algorithms. Typical patch-matching processes perform poorly in the social media context, which involves once-off edits on a range of high-resolution images with plentiful exemplar material. We make use of hash tables to accelerate the matching stage. Our refinement is the development of a set of perceptually inspired hash functions that can exploit locality and provide a categorization consistent across any exemplar image. Descriptors derived from principal component analysis (PCA), after training on an exemplar database, are used for comparison. Aggregation of descriptors improves accuracy, and we adapt a probabilistic approach using randomly oriented hyperplanes to employ multiple descriptors in a single hash table. Hash table strategies demonstrate a substantial improvement in performance over a brute force strategy, and perceptually inspired features provide levels of accuracy comparable with those trained on the data using PCA descriptors. The aggregation strategies further improve accuracy, although measurement of this is confounded by the non-uniform distribution of the aggregated keys. Evaluation with increasing levels of missing data demonstrates that the use of hashing continues to perform well relative to the Euclidean metric benchmark. The patch-matching process using aggregated perceptually inspired descriptors produces comparable results with a substantial reduction in matching time when used for image completion in photographic images. While sensitivity to structural elements is identified as an issue, the complexity of the resulting process is well suited to bulk manipulation of high-resolution images for use in social media.

  19. Collision Free MAC Protocol for Multi-Hop Wireless Networks

    Institute of Scientific and Technical Information of China (English)

    张克旺; 张德运; 杜君

    2009-01-01

    Collision rate increases significantly with the appearance of hidden nodes in multi-hop wireless networks, which results in unsuccessful transmissions and degrades network performance. The RTS/CTS (ready-to-send/clear-to-send) handshake of 802.11 DCF cannot eliminate hidden nodes, and the situation becomes worse when nodes are equipped with higher-speed wireless devices, which require a higher signal-to-interference-and-noise ratio (SINR) for successful reception. This paper analyzes the prevalence of hidden nodes in multi-hop wireless networks, taking accumulated interference and ambient noise into account, and proposes a MAC (media access control) protocol named DCCFMA (double channel collision free media access) to solve the hidden node problem. DCCFMA is a dual-channel MAC protocol: the receiver adjusts the transmitting power of the control channel according to the signal power received from the transmitter so as to cover all hidden nodes around the receiver. Simulation results show that DCCFMA solves the hidden node problem better than the RTS/CTS handshake and achieves 24% additional network throughput compared with 802.11 DCF.

  20. Entropy-based implied volatility and its information content

    NARCIS (Netherlands)

    X. Xiao (Xiao); C. Zhou (Chen)

    2016-01-01

    This paper investigates the maximum entropy approach to estimating implied volatility. The entropy approach also allows one to measure option-implied skewness and kurtosis nonparametrically, and to construct confidence intervals. Simulations show that the entropy approach outperforms t

  1. Constructing a one-way hash function based on the unified chaotic system

    Institute of Scientific and Technical Information of China (English)

    Long Min; Peng Fei; Chen Guan-Rong

    2008-01-01

    A new one-way hash function based on the unified chaotic system is constructed. With different values of a key parameter, the unified chaotic system represents different chaotic systems, based on which the one-way hash function algorithm is constructed with three round operations and an initial vector on an input message. In each round operation, the parameters are processed by three different chaotic systems generated from the unified chaotic system. Feed-forwards are used at the end of each round operation and at the end of each element of the message processing. Meanwhile, in each round operation, parameter-exchanging operations are implemented. Then, the hash value of length 160 bits is obtained from the last six parameters. Simulation and analysis both demonstrate that the algorithm has great flexibility, satisfactory hash performance, weak collision property, and high security.
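
    As a toy illustration of this general style of construction (not the authors' algorithm, and not cryptographically secure), the sketch below keys a logistic map with the message bytes, applies feed-forward after each input element, and quantizes the final state into a 160-bit value:

        def chaotic_toy_hash(message: bytes, rounds: int = 3) -> int:
            x = 0.5                                           # initial vector
            for _ in range(rounds):                           # three round operations
                for byte in message:
                    r = 3.6 + 0.39 * ((byte + 1) / 256.0)     # message byte keys the map parameter
                    x = r * x * (1.0 - x)                     # iterate the logistic map
                    x = (x + byte / 255.0) % 1.0 or 0.3       # feed-forward; avoid sticking at 0
            return int(x * (1 << 160)) & ((1 << 160) - 1)     # quantize the state to 160 bits

        digest = chaotic_toy_hash(b"hello world")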

  2. 76 FR 7817 - Announcing Draft Federal Information Processing Standard 180-4, Secure Hash Standard, and Request...

    Science.gov (United States)

    2011-02-11

    ... National Institute of Standards and Technology Announcing Draft Federal Information Processing Standard 180... Draft Federal Information Processing Standard (FIPS) 180-4, Secure Hash Standard (SHS), for public... sent to: Chief, Computer Security Division, Information Technology Laboratory, Attention: Comments...

  3. Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification.

    Science.gov (United States)

    Zhang, Ruimao; Lin, Liang; Zhang, Rui; Zuo, Wangmeng; Zhang, Lei

    2015-12-01

    Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce adjacency consistency, i.e., images of similar appearance should have similar codes. A deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code length by truncating the insignificant bits. Our framework outperforms the state of the art on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes preserve their discriminative power well at shorter code lengths.
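
    The core triplet objective can be sketched as a hinge loss over a Hamming-distance surrogate; the margin value and the relaxed real-valued codes below are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        def triplet_hash_loss(anchor, positive, negative, margin=2.0):
            """Hinge loss pushing matched pairs closer than mismatched pairs."""
            d_pos = np.sum((anchor - positive) ** 2)   # surrogate for Hamming distance on {-1,+1} codes
            d_neg = np.sum((anchor - negative) ** 2)
            return max(0.0, margin + d_pos - d_neg)

        codes = np.sign(np.random.randn(3, 48))        # three 48-bit codes in {-1, +1}
        loss = triplet_hash_loss(codes[0], codes[1], codes[2])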

  4. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    Science.gov (United States)

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only the GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the registration system of study subjects. Results There are 127,700 error-planted subjects, of which 114,464 (89.64%) can still be identified as the previous subject and the remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of non-identified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII fields. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of incorrect PII fields. The best situation is to precisely find the exact incorrect PII fields, and the worst situation is to narrow the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can
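
    The matching principle can be sketched as follows: only one-way hashes of PII field combinations are stored, and two registrations are compared by the hash codes they share. The field combinations below are hypothetical, not the system's actual patterns:

        import hashlib

        def pii_hashes(fields: dict) -> set:
            combos = [("first", "last", "dob"), ("last", "dob", "zip"), ("first", "dob", "zip")]
            codes = set()
            for combo in combos:
                material = "|".join(fields[k].strip().lower() for k in combo)
                codes.add(hashlib.sha256(material.encode()).hexdigest())
            return codes

        a = pii_hashes({"first": "Ada", "last": "Lovelace", "dob": "1815-12-10", "zip": "10001"})
        b = pii_hashes({"first": "Ada", "last": "Lovelace", "dob": "1815-12-10", "zip": "10002"})
        print(len(a & b))  # one combination still matches, so the subject can be identified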

  5. Two Methods of Building Hash-tree for Mining Frequent Itemsets

    Institute of Scientific and Technical Information of China (English)

    杜孝平; 罗宪; 唐世渭

    2002-01-01

    Hash-tree is an important data structure used in Apriori-like algorithms for mining frequent itemsets. However, there has been no study so far that guarantees the hash-tree can be built successfully every time. In this paper, we propose a static method and a dynamic one for building the hash-tree. With the two methods, it is easy to decide the size of the hash-table, the hash function and the number of itemsets stored in each leaf node of the hash-tree, and the methods ensure that the hash-tree is built successfully in all cases.

  6. A Hashing-Based Search Algorithm for Coding Digital Images by Vector Quantization

    Science.gov (United States)

    Chu, Chen-Chau

    1989-11-01

    This paper describes a fast algorithm to compress digital images by vector quantization. Vector quantization relies heavily on searching to build codebooks and to classify blocks of pixels into code indices. The proposed algorithm uses hashing, localized search, and multi-stage search to accelerate the searching process. The average of pixel values in a block is used as the feature for hashing and intermediate screening. Experimental results using monochrome images are presented. This algorithm compares favorably with other methods with regard to processing time, and has comparable or better mean square error measurements than some of them. The major advantages of the proposed algorithm are its speed, good quality of the reconstructed images, and flexibility.
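
    The hashing step the abstract describes, using the block mean as the feature, can be sketched as follows (bucket width and codebook sizes are illustrative assumptions):

        import numpy as np

        def build_mean_index(codebook: np.ndarray, width: float = 4.0) -> dict:
            """Bucket codebook entries by the quantized mean of their components."""
            index = {}
            for i, vec in enumerate(codebook):
                index.setdefault(int(vec.mean() // width), []).append(i)
            return index

        def nearest_code(block: np.ndarray, codebook: np.ndarray, index: dict, width: float = 4.0) -> int:
            """Localized search: only entries in the same or adjacent mean buckets."""
            key = int(block.mean() // width)
            candidates = [i for k in (key - 1, key, key + 1) for i in index.get(k, [])]
            candidates = candidates or range(len(codebook))   # fall back to full search
            return min(candidates, key=lambda i: float(np.sum((codebook[i] - block) ** 2)))

        codebook = np.random.randint(0, 256, (256, 16)).astype(float)
        index = build_mean_index(codebook)
        best = nearest_code(np.random.randint(0, 256, 16).astype(float), codebook, index)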

  7. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    Science.gov (United States)

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
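
    The substring indexing idea can be sketched directly: split each code into m disjoint substrings and give each its own hash table. By the pigeonhole principle, any code within r bits of the query must match the query exactly in at least one substring whenever r < m (the simplest case, shown here):

        from collections import defaultdict

        def build_tables(codes, m):
            tables = [defaultdict(list) for _ in range(m)]
            step = len(codes[0]) // m
            for idx, code in enumerate(codes):   # code: bit string such as '0110...'
                for t in range(m):
                    tables[t][code[t * step:(t + 1) * step]].append(idx)
            return tables

        def query(code, tables):
            step = len(code) // len(tables)
            hits = set()
            for t, table in enumerate(tables):
                hits.update(table.get(code[t * step:(t + 1) * step], []))
            return hits                          # candidates, verified afterwards by full Hamming distance

        db = ["0110101101001101", "0110101101001111", "1010001110110100"]
        tables = build_tables(db, m=4)
        print(query("0110101101001110", tables)) # {0, 1}: both nearby codes collide on a substring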

  8. Classifying sets of attributed scattering centers using a hash coded database

    Science.gov (United States)

    Dungan, Kerry E.; Potter, Lee C.

    2010-04-01

    We present a fast, scalable method to simultaneously register and classify vehicles in circular synthetic aperture radar imagery. The method is robust to clutter, occlusions, and partial matches. Images are represented as a set of attributed scattering centers that are mapped to local sets, which are invariant to rigid transformations. Similarity between local sets is measured using a method called pyramid match hashing, which applies a pyramid match kernel to compare sets and a Hamming distance to compare hash codes generated from those sets. By preprocessing a database into a Hamming space, we are able to quickly find the nearest neighbor of a query among a large number of records. To demonstrate the algorithm, we simulated X-band scattering from ten civilian vehicles placed throughout a large scene, varying elevation angles in the 35 to 59 degree range. We achieved better than 98 percent classification performance. We also classified seven vehicles in a 2006 public release data collection with 100% success.

  9. Hash-and-Forward Relaying for Two-Way Relay Channel

    CERN Document Server

    Yilmaz, Erhan

    2011-01-01

    This paper considers a communication network comprised of two nodes, which have no mutual direct communication links, communicating two-way with the aid of a common relay node (RN), a configuration also known as the separated two-way relay (TWR) channel. We first recall a cut-set outer bound for the set of rates in the context of this network topology assuming full-duplex transmission capabilities. Then, we derive a new achievable rate region based on hash-and-forward (HF) relaying, where the RN does not attempt to decode but instead hashes its received signal, and show that under certain channel conditions it coincides with Shannon's inner bound for the two-way channel [1]. Moreover, for the binary adder TWR channel with additive noise at the nodes and the RN, we provide a detailed capacity-achieving coding scheme based on structured codes.

  10. MapReduce Based Personalized Locality Sensitive Hashing for Similarity Joins on Large Scale Data.

    Science.gov (United States)

    Wang, Jingjing; Lin, Chen

    2015-01-01

    Locality Sensitive Hashing (LSH) has been proposed as an efficient technique for similarity joins for high dimensional data. The efficiency and approximation rate of LSH depend on the number of generated false positive instances and false negative instances. In many domains, reducing the number of false positives is crucial. Furthermore, in some application scenarios, balancing false positives and false negatives is favored. To address these problems, in this paper we propose Personalized Locality Sensitive Hashing (PLSH), where a new banding scheme is embedded to tailor the number of false positives, false negatives, and the sum of both. PLSH is implemented in parallel using MapReduce framework to deal with similarity joins on large scale data. Experimental studies on real and simulated data verify the efficiency and effectiveness of our proposed PLSH technique, compared with state-of-the-art methods.
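
    The knob that a personalized banding scheme tunes is the standard LSH trade-off: with b bands of r rows each, a pair with similarity s becomes a join candidate with probability 1 - (1 - s^r)^b, so the choice of b and r controls false positives and false negatives. A worked example (values illustrative):

        def candidate_probability(s: float, b: int, r: int) -> float:
            return 1.0 - (1.0 - s ** r) ** b

        b, r = 20, 5    # 100 hash functions split into 20 bands of 5 rows each
        for s in (0.2, 0.5, 0.8):
            print(f"similarity {s}: P(candidate) = {candidate_probability(s, b, r):.3f}")
        # low-similarity pairs (false-positive risk) rarely collide; high-similarity pairs almost always do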

  11. HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database

    Science.gov (United States)

    Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.

    2016-07-01

    By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.

  12. HASH: the Hong Kong/AAO/Strasbourg H-alpha planetary nebula database

    CERN Document Server

    Parker, Quentin A; Frew, David J

    2016-01-01

    By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic PNe. We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.

  13. Assembling large genomes with single-molecule sequencing and locality-sensitive hashing.

    Science.gov (United States)

    Berlin, Konstantin; Koren, Sergey; Chin, Chen-Shan; Drake, James P; Landolin, Jane M; Phillippy, Adam M

    2015-06-01

    Long-read, single-molecule real-time (SMRT) sequencing is routinely used to finish microbial genomes, but available assembly methods have not scaled well to larger genomes. We introduce the MinHash Alignment Process (MHAP) for overlapping noisy, long reads using probabilistic, locality-sensitive hashing. Integrating MHAP with the Celera Assembler enabled reference-grade de novo assemblies of Saccharomyces cerevisiae, Arabidopsis thaliana, Drosophila melanogaster and a human hydatidiform mole cell line (CHM1) from SMRT sequencing. The resulting assemblies are highly continuous, include fully resolved chromosome arms and close persistent gaps in these reference genomes. Our assembly of D. melanogaster revealed previously unknown heterochromatic and telomeric transition sequences, and we assembled low-complexity sequences from CHM1 that fill gaps in the human GRCh38 reference. Using MHAP and the Celera Assembler, single-molecule sequencing can produce de novo near-complete eukaryotic assemblies that are 99.99% accurate when compared with available reference genomes.
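
    The MinHash idea underlying MHAP can be sketched compactly: each read is summarized by the minimum hash of its k-mers under several hash seeds, and the fraction of matching minima estimates the Jaccard similarity of the k-mer sets, flagging likely overlaps. The values of k and the sketch size below are illustrative:

        import hashlib

        def minhash_sketch(seq: str, k: int = 8, n_hashes: int = 32) -> list:
            kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
            return [min(int(hashlib.sha256(f"{seed}:{kmer}".encode()).hexdigest(), 16)
                        for kmer in kmers)
                    for seed in range(n_hashes)]

        def estimated_jaccard(a: list, b: list) -> float:
            return sum(x == y for x, y in zip(a, b)) / len(a)

        read1 = "ACGTACGTTAGCAGCTTAGGCATCG"
        read2 = "ACGTACGTTAGCAGCTTAGGCATGG"   # the same read with one substitution
        print(estimated_jaccard(minhash_sketch(read1), minhash_sketch(read2)))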

  14. Side channel analysis of some hash based MACs:A response to SHA-3 requirements

    DEFF Research Database (Denmark)

    function which is more resistant to known side channel attacks (SCA) when plugged into HMAC, or that has an alternative MAC mode which is more resistant to known SCA than the other submitted alternatives. In response to this, we perform differential power analysis (DPA) on the possible smart card...... implementations of some of the recently proposed MAC alternatives to NMAC (a fully analyzed variant of HMAC) and HMAC algorithms and NMAC/HMAC versions of some recently proposed hash and compression function modes. We show that the recently proposed BNMAC and KMDP MAC schemes are even weaker than NMAC....../HMAC against the DPA attacks, whereas multi-lane NMAC, EMD MAC and the keyed wide-pipe hash have similar security to NMAC against the DPA attacks. Our DPA attacks do not work on the NMAC setting of MDC-2, Grindahl and MAME compression functions. This talk outlines our results....

  15. ID-based authentication scheme combined with identity-based encryption with fingerprint hashing

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Current identity-based (ID) cryptosystems lack mechanisms for two-party authentication and user private key distribution. Some ID-based signcryption schemes and ID-based authenticated key agreement protocols have been presented, but they cannot solve the problem completely. A novel ID-based authentication scheme based on ID-based encryption (IBE) and a fingerprint hashing method is proposed to solve the difficulties in the IBE scheme, which include the message receiver authenticating the sender, and the trusted authority (TA) authenticating the users and transmitting the private key to them. Furthermore, the scheme extends the application of fingerprint authentication from the terminal to the network and protects against fingerprint data fabrication. The fingerprint authentication method consists of two factors: it combines a token key, for example a USB key, with the user's fingerprint hash by mixing a pseudo-random number with the fingerprint feature. The security and experimental efficiency meet the requirements of practical applications.

  16. Automated Techniques for Hash Function and Block Cipher Cryptanalysis (Automatische technieken voor hashfunctie- en blokcijfercryptanalyse)

    OpenAIRE

    2012-01-01

    Cryptography is the study of mathematical techniques that ensure the confidentiality and integrity of information. This relatively new field started out as classified military technology, but has now become commonplace in our daily lives. Cryptography is not only used in banking cards, secure websites and electronic signatures, but also in public transport cards, car keys and garage door openers.Two building blocks in the domain of cryptography are block ciphers and (cryptographic) hash funct...

  17. ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2015-01-01

    primarily user-friendly web tool allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which users can download the source code, and as a web tool. It was developed using the hash table approach to extract perfect and imperfect repetitive regions in a (multi)FASTA file, while keeping a linear time complexity.

  18. Matching of structural motifs using hashing on residue labels and geometric filtering for protein function prediction.

    Science.gov (United States)

    Moll, Mark; Kavraki, Lydia E

    2008-01-01

    There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Our focus is on methods that determine binding site similarity. Although several such methods exist, it still remains a challenging problem to quickly find all functionally-related matches for structural motifs in large data sets with high specificity. In this context, a structural motif is a set of 3D points annotated with physicochemical information that characterize a molecular function. We propose a new method called LabelHash that creates hash tables of n-tuples of residues for a set of targets. Using these hash tables, we can quickly look up partial matches to a motif and expand those matches to complete matches. We show that by applying only very mild geometric constraints we can find statistically significant matches with extremely high specificity in very large data sets and for very general structural motifs. We demonstrate that our method requires a reasonable amount of storage when employing a simple geometric filter and further improves on the specificity of our previous work while maintaining very high sensitivity. Our algorithm is evaluated on 20 homolog classes and a non-redundant version of the Protein Data Bank as our background data set. We use cluster analysis to analyze why certain classes of homologs are more difficult to classify than others. The LabelHash algorithm is implemented on a web server at http://kavrakilab.org/labelhash/.
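
    The n-tuple indexing at the heart of the method can be sketched as follows (the data layout and tuple size are illustrative assumptions):

        from collections import defaultdict
        from itertools import combinations

        def build_labelhash(targets: dict, n: int = 3) -> dict:
            """Hash table from sorted n-tuples of residue labels to partial matches."""
            table = defaultdict(list)
            for target_id, residues in targets.items():   # residues: (label, position) pairs
                for tup in combinations(residues, n):
                    key = tuple(sorted(label for label, _ in tup))
                    table[key].append((target_id, tup))   # candidate to expand and filter geometrically
            return table

        targets = {"1abc": [("HIS", 57), ("ASP", 102), ("SER", 195), ("GLY", 193)]}
        table = build_labelhash(targets)
        print(table[("ASP", "HIS", "SER")])               # partial matches for a catalytic-triad motif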

  19. Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing Algorithms For Music Information Retrieval

    OpenAIRE

    2010-01-01

    In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrase and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is ...

  20. ANALYSIS AND ESTIMATION OF THE TRIE MINIMUM LEVEL IN NON-HASH DEDUPLICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    M. A. Zhukov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a method of restricting the minimum trie level in a non-hash data deduplication system. Method. The essence of the method lies in forcibly completing the trie to a specific minimum level. The proposed method makes it possible to increase the performance of the process by reducing the number of collisions at the lower levels of the trie. The maximum theoretical performance growth corresponds to the share of collisions in the total number of data read operations from the storage medium. Applying the method increases the metadata size by the amount of the new structures containing one element. Main results. The results of the work have been confirmed by a computational experiment with non-hash deduplication on a 528 GB data set. Analysis of the process has shown that 99% of the execution time is spent on head positioning of hard drives, the reason being the random distribution of blocks on the storage medium. On the experimental data set, applying the minimum-trie-level restriction in a non-hash data deduplication system increases performance by up to 16%, while the metadata size grows by 49%. The total amount of metadata is 34% less than with hash-based deduplication using the MD5 algorithm, and 17% less than with the Tiger192 algorithm. These results confirm the effectiveness of the proposed method. Practical relevance. The proposed method increases the performance of the deduplication process by reducing the number of collisions during trie construction. The results are of practical importance for professionals involved in the development of non-hash data deduplication methods.

  1. A Survey of RFID Authentication Protocols Based on Hash-Chain Method

    CERN Document Server

    Syamsuddin, Irfan; Chang, Elizabeth; Han, Song; 10.1109/ICCIT.2008.314

    2010-01-01

    Security and privacy are inherent problems in RFID communications. Several protocols have been proposed to overcome those problems, and hash chains are commonly employed by these protocols to improve the security and privacy of RFID authentication. Although the protocols are able to provide specific solutions for RFID security and privacy problems, they fail to provide an integrated solution. This article is a survey that closely examines those protocols in terms of their focus and limitations.

  2. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

    Full Text Available A predictive-hashing based Denial-of-Service (DoS) resistant multicast authentication protocol was proposed based upon predictive hashing, one-way key chains, erasure codes, and distillation codes techniques [4, 5]. It was claimed that this new scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters including CPU times for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented an attack simulator which is platform-independent. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, the resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.

  3. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    Science.gov (United States)

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and shows great promise for use in real-world applications.

  4. Analysis and Implementation of Cryptographic Hash Functions in Programmable Logic Devices

    Directory of Open Access Journals (Sweden)

    Tautvydas Brukštus

    2016-06-01

    Full Text Available Today's world is more and more focused on data protection, for which cryptographic science is used. The safe storage of passwords is also important, and a cryptographic hash function is used for this. In this article the SHA-256 cryptographic hash function has been selected for implementation and exploration, based on the fact that it is currently popular and considered safe: no theoretical gaps or conflict situations have been found for SHA-256. The SHA-256 hash function is also used by cryptographic currencies, which are currently popular and of high value. For the measurements, programmable logic integrated circuits were chosen, as they are less efficient than ASICs. We chose programmable logic integrated circuits produced by the Altera Corporation. Computation speed was investigated on three programmable logic integrated circuits belonging to the same family but of different generations, each made using a different dimension technology: EP3C16, EP4CE115 and 5CSEMA5F31. Parameters for comparing calculation performance are provided in tables and graphs. The research shows the calculation speed and stability of the different programmable logic circuits.

  5. Using hardware-assisted geometric hashing for high-speed target acquisition and guidance

    Science.gov (United States)

    Pears, Arnold N.; Pissaloux, Edwige E.

    1997-06-01

    Geometric hashing provides a reliable and transformation independent representation of a target. The characterization of a target object is obtained by establishing a vector basis relative to a number of interest points unique to the target. The number of basis points required is a function of the dimensionality of the environment in which the technique is being used. This basis is used to encode the other points in the object constructing a highly general (transformation independent) representation of the target. The representation is invariant under both affine and geometric transformations of the target interest points. Once a representation of the target has been constructed a simple voting algorithm can be used to examine sets of interest points extracted from subsequent image in order to determine the possible presence and location of that target. Once an instance of the object has been located further computation can be undertaken to determine its scale, orientation, and deformation due to changes in the parameters related to the viewpoint. This information can be further analyzed to provide guidance. This paper discusses the complexity measures associated with task division and target image processing using geometric hashing. These measures are used to determine the areas which will most benefit from hardware assistance, and possible parallelism. These issues are discussed in the context of an architecture design, and a high speed (hardware assisted) geometric hashing approach to target recognition is proposed.
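
    A compact 2D sketch of the indexing stage: choose an ordered pair of interest points as a basis, express the remaining points in that frame, and quantize the coordinates into hash keys; at query time the same encoding casts votes for (model, basis) pairs. The quantization step q is an illustrative choice:

        import numpy as np
        from collections import defaultdict
        from itertools import permutations

        def basis_coords(p, b0, b1):
            """Coordinates of p in the frame defined by the ordered basis pair (b0, b1)."""
            u = b1 - b0
            v = np.array([-u[1], u[0]])      # perpendicular axis completes the frame
            return np.linalg.solve(np.column_stack([u, v]), p - b0)

        def index_model(points, model_id, table, q=0.25):
            pts = [np.asarray(p, dtype=float) for p in points]
            for i, j in permutations(range(len(pts)), 2):
                for k, p in enumerate(pts):
                    if k not in (i, j):
                        key = tuple(np.round(basis_coords(p, pts[i], pts[j]) / q).astype(int))
                        table[key].append((model_id, (i, j)))

        table = defaultdict(list)
        index_model([(0, 0), (1, 0), (0.5, 1.0), (1.5, 1.5)], "target-A", table)
        # Querying re-encodes scene points the same way; a high vote count for
        # ("target-A", basis) signals the target's presence, scale and orientation.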

  6. Wave-atoms-based multipurpose scheme via perceptual image hashing and watermarking.

    Science.gov (United States)

    Liu, Fang; Fu, Qi-Kai; Cheng, Lee-Ming

    2012-09-20

    This paper presents a novel multipurpose scheme for content-based image authentication and copyright protection using a perceptual image hashing and watermarking strategy based on a wave atom transform. The wave atom transform is expected to outperform other transforms because it gains sparser expansion and better representation for texture than other traditional transforms, such as wavelet and curvelet transforms. Images are decomposed into multiscale bands with a number of tilings using the wave atom transform. Perceptual hashes are then extracted from the features of tiling in the third scale band for the purpose of content-based authentication; simultaneously, part of the selected hashes are designed as watermarks, which are embedded into the original images for the purpose of copyright protection. The experimental results demonstrate that the proposed scheme shows great performance in content-based authentication by distinguishing the maliciously attacked images from the nonmaliciously attacked images. Moreover, watermarks extracted from the proposed scheme also achieve high robustness against common malicious and nonmalicious image-processing attacks, which provides excellent copyright protection for images.

  7. Hash sorter - firmware implementation and an application for the Fermilab BTeV level 1 trigger system

    Energy Technology Data Exchange (ETDEWEB)

    Jinyuan Wu et al.

    2003-11-05

    A hardware hash sorter for the Fermilab BTeV Level 1 trigger system will be presented. The hash sorter examines track-segment data before the data are sent to a system comprised of 2500 Level 1 processors, and rearranges the data into bins based on the slope of the track segments. They have found that by using the rearranged data, processing time is significantly reduced, allowing the total number of processors required for the Level 1 trigger system to be reduced. The hash sorter can be implemented in an FPGA that is already included as part of the design of the trigger system. Hash sorting has potential applications in a broad range of trigger and DAQ systems. It is a simple O(n) process and is suitable for FPGA implementation. Several implementation strategies will also be discussed in this document.

  8. Electrodermal responses to implied versus actual violence on television.

    Science.gov (United States)

    Kalamas, A D; Gruber, M L

    1998-01-01

    The electrodermal response (EDR) of children watching a violent show was measured. Particular attention was paid to the type of violence (actual or implied) that prompted an EDR. In addition, the impact of the auditory component (sounds associated with violence) of the show was evaluated. Implied violent stimuli, such as the villain's face, elicited the strongest EDR. The elements that elicited the weakest responses were the actual violent stimuli, such as stabbing. The background noise and voices of the sound track enhanced the total number of EDRs. The results suggest that implied violence may elicit more fear (as measured by EDRs) than actual violence does and that sounds alone contribute significantly to the emotional response to television violence. One should not, therefore, categorically assume that a show with mostly actual violence evokes less fear than one with mostly implied violence.

  9. Quantum mechanics, by itself, implies perception of a classical world

    CERN Document Server

    Blood, Casey

    2010-01-01

    Quantum mechanics, although highly successful, has two peculiarities. First, in many situations it gives more than one potential version of reality. And second, the wave function for a macroscopic object such as a baseball can be spread out over a macroscopic distance. In the first, quantum mechanics seems to imply that the observer will perceive more than one version of reality and in the second it seems to imply we should see spread-out, blurred objects instead of sharply delineated baseballs. But neither implication is true. Quantum mechanics, by itself, implies more than one version of reality will never be reportably perceived, and it implies the perceived position of a baseball will always be sharply defined. Further, two observers will never disagree on what they perceive. Thus quantum mechanics, by itself, with no assumption of particles or collapse, always leads to the perception of a classical-appearing universe.

  10. An Improved Estimator For Black-Scholes-Merton Implied Volatility

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried)

    2004-01-01

    We derive an estimator for Black-Scholes-Merton implied volatility that, when compared to the familiar Corrado & Miller [JBaF, 1996] estimator, has substantially higher approximation accuracy and extends over a wider region of moneyness.

  11. Accelerating SPARQL queries by exploiting hash-based locality and adaptive partitioning

    KAUST Repository

    Al-Harbi, Razen

    2016-02-08

    State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.
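
    The lightweight initial partitioning can be sketched in a few lines: triples are distributed by hashing their subjects, so all triples of one subject land on the same worker and subject-star joins run without communication (the worker count is an illustrative assumption):

        def partition_triples(triples, n_workers: int = 4):
            partitions = [[] for _ in range(n_workers)]
            for s, p, o in triples:
                partitions[hash(s) % n_workers].append((s, p, o))
            return partitions

        triples = [("alice", "knows", "bob"),
                   ("alice", "worksAt", "kaust"),
                   ("bob", "knows", "carol")]
        parts = partition_triples(triples)
        # both "alice" triples sit on one worker, so join patterns on that subject are local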

  12. Implied volatility transmissions between Thai and selected advanced stock markets

    OpenAIRE

    Thakolsri, Supachok; Sethapramote, Yuthana; Jiranyakul, Komain

    2015-01-01

    This paper investigates the impacts of changes in the U. S. implied volatility on the changes in implied volatilities of the Euro and Thai stock markets. For that purpose, volatilities implicit in stock index option prices from the U. S., Euro and Thai stock markets are analyzed using the standard Granger causality test, impulse response analysis, and variance decompositions. The results found in this study suggest that the U. S. stock market is the leading source of volatility transmissions ...

  13. Robust hash-based image watermarking with resistance to geometric distortions and watermark-estimation attack

    Science.gov (United States)

    Lu, Chun-Shien; Sun, Shih-Wei; Chang, Pao-Chi

    2005-03-01

    Digital watermarking provides a feasible way for copyright protection of multimedia. The major disadvantage of the existing methods is their limited resistance to both extensive geometric distortions and watermark-estimation attack (WEA). In view of this fact, this paper aims to propose a robust image watermarking scheme that can withstand geometric distortions and WEA simultaneously. Our scheme is mainly composed of two components: (i) mesh generation and embedding for resisting geometric distortions; and (ii) construction of hash-based content-dependent watermark (CDW) for resisting WEA. Extensive experimental results obtained from standard benchmark confirm the ability of our method in improving robustness.

  14. AMJoin: An Advanced Join Algorithm for Multiple Data Streams Using a Bit-Vector Hash Table

    Science.gov (United States)

    Kwon, Tae-Hyung; Kim, Hyeon-Gyu; Kim, Myoung-Ho; Son, Jin-Hyun

    A multiple stream join is one of the most important but high-cost operations in ubiquitous streaming services. In this paper, we propose a newly improved and practical algorithm for joining multiple streams called AMJoin, which improves multiple-join performance by guaranteeing the detection of join failures in constant time. To achieve this goal, we first design a new data structure called BiHT (Bit-vector Hash Table) and present the overall behavior of AMJoin in detail. In addition, we show various experimental results and their analyses for clarifying its efficiency and practicability.

  15. A motif extraction algorithm based on hashing and modulo-4 arithmetic.

    Science.gov (United States)

    Sheng, Huitao; Mehrotra, Kishan; Mohan, Chilukuri; Raina, Ramesh

    2008-01-01

    We develop an algorithm to identify cis-elements in the promoter regions of coregulated genes. The algorithm searches for subsequences of a desired length whose frequency of occurrence is relatively high, while accounting for slightly perturbed variants using a hash table and modulo arithmetic. Motifs are evaluated using profile matrices and a higher-order Markov background model. Simulation results show that our algorithm discovers more of the motifs present in the test sequences when compared with two well-known motif-discovery tools (MDScan and AlignACE). The algorithm produces very promising results on a real data set; its output contained many known motifs.
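
    The modulo-4 arithmetic suggests a base-4 packing of nucleotides, so every length-k subsequence becomes one integer hash key; the sketch below illustrates that reading (our assumption, not the paper's exact algorithm):

        BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

        def kmer_key(kmer: str) -> int:
            """Pack a k-mer into an integer, one base-4 digit per nucleotide."""
            key = 0
            for ch in kmer:
                key = key * 4 + BASE[ch]
            return key

        def count_kmers(seq: str, k: int = 6) -> dict:
            counts = {}
            for i in range(len(seq) - k + 1):
                key = kmer_key(seq[i:i + k])
                counts[key] = counts.get(key, 0) + 1
            return counts

        promoter = "ACGTACGTTGACGTACGT"
        frequent = {key: n for key, n in count_kmers(promoter).items() if n > 1}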

  16. Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application

    Science.gov (United States)

    Farisa Chaerul Haviana, Sam; Taufik, Muhammad

    2017-04-01

    One of the main issues in Content-Based Image Retrieval (CBIR) is the similarity measure for the resulting image hashes. The key challenge is to find the most beneficial distance or similarity measure for calculating similarity in terms of speed and computing cost, especially on devices with limited computing capabilities such as mobile phones. In this study we utilize the twelve most common and popular distance or similarity measure techniques, implemented in a mobile phone application, to be compared and studied. The results show that all similarity measures implemented in this study performed equally well in the mobile phone application. This opens more possibilities for method combinations to be implemented for image retrieval.
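
    For reference, the average image hash such studies compare can be computed and matched as follows (the block size and the Hamming-based measure are the usual choices, shown here as one example):

        import numpy as np

        def average_hash(gray: np.ndarray, size: int = 8) -> int:
            """Reduce to size x size block means; one bit per block vs the global mean."""
            h, w = gray.shape
            gray = gray[:h - h % size, :w - w % size].astype(float)   # crop to whole blocks
            blocks = gray.reshape(size, gray.shape[0] // size, size, gray.shape[1] // size)
            means = blocks.mean(axis=(1, 3))
            bits = (means > means.mean()).flatten()
            return sum(1 << i for i, b in enumerate(bits) if b)

        def hamming_similarity(h1: int, h2: int, n_bits: int = 64) -> float:
            return 1.0 - bin(h1 ^ h2).count("1") / n_bits

        img = np.random.randint(0, 256, (64, 64))
        print(hamming_similarity(average_hash(img), average_hash(255 - img)))  # inverted image scores near 0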

  17. UnoHop: Efficient Distributed Hash Table with O(1) Lookup Performance

    Directory of Open Access Journals (Sweden)

    Herry Sitepu

    2008-05-01

    Full Text Available Distributed Hash Tables (DHTs) with O(1) lookup performance strive to minimize the maintenance traffic required for propagating membership change information (events). Distributing these events allows each node in the peer-to-peer network to maintain an accurate routing table with complete membership information. We present UnoHop, a novel DHT protocol with O(1) lookup performance. The protocol uses an efficient mechanism to distribute events through a dissemination tree constructed dynamically and rooted at the node that detects the event. Our protocol produces symmetric bandwidth usage at all nodes while decreasing the event propagation delay.

  18. A Survey Paper on Deduplication by Using Genetic Algorithm Alongwith Hash-Based Algorithm

    Directory of Open Access Journals (Sweden)

    Miss. J. R. Waykole

    2014-01-01

    Full Text Available In today's world, with the increasing volume of information available in digital libraries, many systems may be affected by the existence of replicas in their warehouses. This matters because a clean, replica-free warehouse not only allows the retrieval of higher-quality information but also leads to more concise data, reducing the computational time and resources needed to process it. Here, we propose a genetic programming approach combined with hash-based similarity, i.e., with the MD5 and SHA-1 algorithms. This approach removes replica data and finds an optimized solution for record deduplication.

  19. Research of Integrity and Authentication in OPC UA Communication Using Whirlpool Hash Function

    Directory of Open Access Journals (Sweden)

    Kehe Wu

    2015-08-01

    Full Text Available Currently, the demand for information security in industrial control systems is becoming more and more urgent, but the security model proposed by OPC UA cannot meet their practical requirements. For this reason, this paper proposes a new secure communication model to provide integrity and authentication in OPC UA. This model uses the Whirlpool hash function to check integrity and generates digital signatures together with RSA for message transmission. Compared to SHA-1, Whirlpool has a higher calculation speed and a lower collision rate. Through this model, terminals in the upper layer can communicate with field devices via a channel with high security and efficiency.

  20. A hash based mutual RFID tag authentication protocol in telecare medicine information system.

    Science.gov (United States)

    Srivastava, Keerti; Awasthi, Amit K; Kaul, Sonam D; Mittal, R C

    2015-01-01

    Radio Frequency Identification (RFID) is a technology with multidimensional applications that reduce the complexity of daily life. RFID is in enormous use everywhere: access control, transportation, real-time inventory, asset management, automated payment systems, etc. Recently, this technology has been spreading into healthcare environments, where potential applications include patient monitoring, object traceability and drug administration systems. In this paper, we propose a secure RFID-based protocol for the medical sector. This protocol is based on hash operations with a synchronized secret. The protocol is safe against active and passive attacks such as forgery, traceability, replay and de-synchronization attacks.

  1. MULTIMEDIA DATA TRANSMISSION THROUGH TCP/IP USING HASH BASED FEC WITH AUTO-XOR SCHEME

    Directory of Open Access Journals (Sweden)

    R. Shalin

    2012-09-01

    Full Text Available The most preferred mode for communication of multimedia data is through the TCP/IP protocol. On the other hand, the TCP/IP protocol suffers heavy, unavoidable packet loss due to network traffic and congestion. In order to provide efficient communication, it is necessary to recover lost packets. The proposed scheme implements hash-based FEC with an auto-XOR scheme for this purpose. The scheme is implemented through forward error correction, MD5 and XOR to provide efficient transmission of multimedia data. The proposed scheme provides high transmission accuracy and throughput with low latency and loss.
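
    The recovery principle can be sketched as follows: each group of equal-length packets carries one XOR parity packet, a single lost packet is rebuilt by XOR-ing the survivors with the parity, and an MD5 digest confirms the reconstruction. Group size and packet layout are illustrative assumptions:

        import hashlib
        from functools import reduce

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        packets = [b"pkt-one!", b"pkt-two!", b"pkt-tre!"]        # equal-length data packets
        parity = reduce(xor_bytes, packets)                      # redundant FEC packet
        digests = [hashlib.md5(p).hexdigest() for p in packets]  # per-packet integrity hashes

        received = [packets[0], None, packets[2]]                # middle packet lost in transit
        recovered = reduce(xor_bytes, [p for p in received if p] + [parity])
        assert hashlib.md5(recovered).hexdigest() == digests[1]  # hash verifies the recovery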

  2. A Chaotic Communication Scheme Based on Generalized Synchronization and Hash Functions

    Institute of Scientific and Technical Information of China (English)

    XU Jiang-Feng; MIN Le-Quan; CHEN Guan-Rong

    2004-01-01

    A new chaotic communication scheme based on generalized chaotic synchronization (GCS) and hash function transpositions is presented. The communication scheme has nonsymmetric secret keys and its capability is similar to that of traditional digital signatures, i.e. a receiver can convince himself whether or not the sender's message contents have been modified. As a direct application of the scheme, a GCS system is designed by using Chen's chaotic circuit and is studied in some detail. The numerical simulation shows that this Chen GCS system has high security and is fast and reliable for secure Internet communications.

  3. A pattern recognition scheme for large curvature circular tracks and an FPGA implementation using hash sorter

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jin-Yuan; Shi, Z.; /Fermilab

    2004-12-01

    The strong magnetic field in today's colliding detectors makes track recognition more difficult due to large track curvatures. In this document, we present a global track recognition scheme based on track angle measurements for circular tracks passing through the collision point. It uses no approximations in the track equation and is therefore suitable for both large and small curvature tracks. The scheme can be implemented in hardware for a lower-level trigger or in software for a higher-level trigger or offline analysis code. We discuss an example of an FPGA implementation using a "hash sorter".

  4. A Class of Hash Functions Based on Quasigroups

    Institute of Scientific and Technical Information of China (English)

    池相会; 徐允庆

    2012-01-01

    Hash functions are encryption algorithms used in information security. In this paper, using the theory of finite fields and residue class rings, a class of hash functions based on quasigroups with good collision resistance is given, and an analysis of its security is also presented.

  5. Optimal lower bounds for locality sensitive hashing (except when q is tiny)

    CERN Document Server

    O'Donnell, Ryan; Zhou, Yuan

    2009-01-01

    We study lower bounds for Locality Sensitive Hashing (LSH) in the strongest setting: point sets in {0,1}^d under the Hamming distance. Recall that H is said to be an (r, cr, p, q)-sensitive hash family if all pairs x, y in {0,1}^d with dist(x,y) at most r have probability at least p of collision under a randomly chosen h in H, whereas all pairs x, y in {0,1}^d with dist(x,y) at least cr have probability at most q of collision. Typically, one considers d tending to infinity, with c fixed and q bounded away from 0. For its applications to approximate nearest neighbor search in high dimensions, the quality of an LSH family H is governed by how small its "rho parameter" rho = ln(1/p)/ln(1/q) is as a function of the parameter c. The seminal paper of Indyk and Motwani showed that for each c, the extremely simple family H = {x -> x_i : i in [d]} achieves rho at most 1/c. The only known lower bound, due to Motwani, Naor, and Panigrahy, is that rho must be at least .46/c (minus o_d(1)). In this paper we show an opt...
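
    A quick worked example of the rho parameter for the bit-sampling family mentioned above, where p = 1 - r/d and q = 1 - cr/d:

        import math

        def rho(p: float, q: float) -> float:
            return math.log(1 / p) / math.log(1 / q)

        d, r, c = 128, 8, 2
        p = 1 - r / d            # collision probability for pairs at distance <= r
        q = 1 - c * r / d        # collision probability for pairs at distance >= cr
        print(rho(p, q), 1 / c)  # about 0.48, below the 1/c = 0.5 upper bound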

  6. DATA INTEGRITY IN THE AUTOMATED SYSTEM BASED ON LINEAR SYSTEMS HASHES

    Directory of Open Access Journals (Sweden)

    Savin S. V.

    2015-12-01

    Full Text Available To protect data integrity in automated systems, we provide a solution to the problem of reducing the redundancy of control information (hash codes, electronic signatures). We impose restrictions on the maximum number of integrity violations of the records in a data block. It is known that as data protection increases, the amount of control information (the redundancy coefficient) also increases. We introduce the concept of linear systems of hash codes (LSHC). On the basis of the mathematical apparatus of the theory of vector systems, we have developed an algorithm for constructing an LSHC which, for a given level of data protection (i.e. integrity), reduces the redundancy of the control information. The construction rules (principles) of an LSHC comply with the construction rules of coding theory (Hamming codes). The article provides an algorithm for ensuring data integrity with an LSHC. The use of these algorithms ensures the necessary level of data protection and meets customers' requirement specifications.

  7. Fast Structural Alignment of Biomolecules Using a Hash Table, N-Grams and String Descriptors

    Directory of Open Access Journals (Sweden)

    Robert Preissner

    2009-04-01

    Full Text Available This work presents a generalized approach for the fast structural alignment of thousands of macromolecular structures. The method uses string representations of a macromolecular structure and a hash table that stores n-grams of a certain size for searching. To this end, macromolecular structure-to-string translators were implemented for protein and RNA structures. A query against the index is performed in two hierarchical steps to unite speed and precision. In the first step the query structure is translated into n-grams, and all target structures containing these n-grams are retrieved from the hash table. In the second step all corresponding n-grams of the query and each target structure are subsequently aligned, and after each alignment a score is calculated based on the matching n-grams of query and target. The extendable framework enables the user to query and structurally align thousands of protein and RNA structures on a commodity machine and is available as open source from http://lajolla.sf.net.

  8. Provable Data Possession Scheme based on Homomorphic Hash Function in Cloud Storage

    Directory of Open Access Journals (Sweden)

    Li Yu

    2016-01-01

    Full Text Available Cloud storage can satisfy the demand of accessing data at any time and any place. In cloud storage, users will feel comfortable using the service only when they can verify that the cloud storage server possesses their data correctly. Provable data possession (PDP) makes it easy for a third party to verify whether the data are intact on the cloud storage server. We analyze the existing PDP schemes and find that they have some drawbacks, such as being computationally expensive or supporting only a limited number of possession verifications. This paper proposes a provable data possession scheme based on a homomorphic hash function to address the problems in the existing algorithms. The advantage of a homomorphic hash function is that it provides provable data possession together with data integrity protection. The scheme is a good way to ensure the integrity of remote data and to reduce redundant storage space and bandwidth consumption, without requiring users to retrieve the data. The main cost of the scheme falls on the server side, so it is suitable for mobile devices in cloud storage environments. We prove that the scheme is feasible by analyzing its security and performance.
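
    The property that makes a homomorphic hash useful here can be shown with the classic exponential construction H(m) = g^m mod p: hashes of data blocks combine exactly as the blocks do, so a verifier can check an aggregated response without holding the data. The parameters below are tiny and purely illustrative, not a secure choice:

        p = 2_147_483_647   # a prime modulus (toy size)
        g = 5               # public base

        def H(m: int) -> int:
            return pow(g, m, p)

        m1, m2 = 123456, 654321
        assert H(m1) * H(m2) % p == H(m1 + m2)   # homomorphic: H(m1 + m2) = H(m1) * H(m2) mod p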

  9. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of EOPs of a single image. For feature extraction, we proposed two types of matching cues, edged corner points representing the saliency of building corner points with associated edges and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both 3D building and a single airborne image. A set of matched corners are found with given proximity measure through geometric hashing and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least square method based on co-linearity equations. The result shows that acceptable accuracy of single image's EOP can be achievable by the proposed registration approach as an alternative to labour-intensive manual registration process.

  10. Implication of Secure Micropayment System Using Process Oriented Structural Design by Hash chaining in Mobile Network

    Directory of Open Access Journals (Sweden)

    Chitra Kiran N.

    2012-01-01

    Full Text Available The proposed system presents a novel approach to designing a highly secure and robust process-oriented architecture for a micropayment system in wireless ad hoc networks. Deploying any confidential transaction over the dynamic topology of a wireless ad hoc network raises a large number of security challenges that are very difficult to identify, which poses great difficulty in designing effective countermeasures. The current work designs the security process using hash chains and a Simple Public Key Infrastructure, implemented on a newly designed digital broker agreement, and paves a new secure routing scheme for secure m-transactions as an efficient alternative to digital coins. The system stimulates intermediate nodes to cooperate in facilitating secure and reliable transactions from source to destination nodes. The system uses high-end encryption based on hash functions and is independent of any Trusted Third Party even when the network topology changes frequently; it is thereby flexible, lightweight and reliable for secure micropayment systems. The analysis shows the system is highly robust and secure, ensuring anonymity, privacy and non-repudiation for an offline payment system over wireless ad hoc networks.
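
    For background, the hash-chain payment idea such schemes build on (PayWord-style, not this paper's exact protocol) works as follows: the payer commits to the end of a hash chain and spends by revealing successive pre-images, each verifiable by re-hashing back to the commitment:

        import hashlib

        def h(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def make_chain(seed: bytes, n: int) -> list:
            chain = [seed]
            for _ in range(n):
                chain.append(h(chain[-1]))
            return chain                  # chain[-1] is the public commitment

        chain = make_chain(b"payer-secret", 10)
        commitment = chain[-1]
        coin_3 = chain[-4]                # the third payment reveals the 3rd pre-image

        x = coin_3
        for _ in range(3):
            x = h(x)
        assert x == commitment            # any verifier can validate the coin offline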

  11. Hashing hyperplane queries to near points with applications to large-scale active learning.

    Science.gov (United States)

    Vijayanarasimhan, Sudheendra; Jain, Prateek; Grauman, Kristen

    2014-02-01

    We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the entire database. For this problem, we propose two hashing-based solutions. Our first approach maps the data to 2-bit binary keys that are locality sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the data into a vector space where the Euclidean norm reflects the desired distance between the original points and hyperplane query. Both use hashing to retrieve near points in sublinear time. Our first method's preprocessing stage is more efficient, while the second has stronger accuracy guarantees. We apply both to pool-based active learning: Taking the current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection criterion. We empirically demonstrate our methods' tradeoffs and show that they make it practical to perform active selection with millions of unlabeled points.

  12. Matching Aerial Images to 3d Building Models Based on Context-Based Geometric Hashing

    Science.gov (United States)

    Jung, J.; Bang, K.; Sohn, G.; Armenakis, C.

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and the single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least square method based on co-linearity equations. The result shows that acceptable accuracy of a single image's EOPs is achievable by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  13. Hash-Based Line-by-Line Template Matching for Lossless Screen Image Coding.

    Science.gov (United States)

    Xiulian Peng; Jizheng Xu

    2016-12-01

    Template matching (TM) was proposed in the literature a decade ago to efficiently remove non-local redundancies within an image without transmitting any overhead of displacement vectors. However, the large computational complexity introduced at both the encoder and the decoder, especially for a large search range, limits its widespread use. This paper proposes a hash-based line-by-line template matching (hLTM) for lossless screen image coding, where non-local redundancy commonly exists in text and graphics parts. By hash-based search, it can largely reduce the search complexity of template matching without degrading accuracy. In addition, line-by-line template matching increases prediction accuracy through its fine granularity. Experimental results show that hLTM can significantly reduce the encoding and decoding complexities by 68 and 23 times, respectively, compared with traditional TM with a search radius of 128. Moreover, when compared with the High Efficiency Video Coding screen content coding test model SCM-1.0, it can largely improve coding efficiency, with up to 12.68% bit savings on screen content rich in text and graphics.

  14. Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation

    CERN Document Server

    Arbitman, Yuriy; Segev, Gil

    2009-01-01

    The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. In this paper we settle two fundamental open problems: - We construct the first dynamic dictionary that enjoys the best of both worlds: we present a two-level variant of cuckoo hashing that stores n elements using (1 + epsilon)n memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any epsilon = Omega((log log n / log n)^{1/2}) and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of epsilon. The construction is based on augmenting cuckoo hashing with a "ba...
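
    For readers unfamiliar with the underlying primitive, here is a minimal sketch of plain two-table cuckoo hashing (not the paper's two-level "backyard" construction); the table size, hash construction, and eviction limit are illustrative assumptions.

```python
import hashlib

# Two tables, one candidate slot per key in each; an insertion that finds
# both slots occupied evicts the occupant and bounces it to its other slot.
SIZE = 11

def h(i: int, key: str) -> int:
    d = hashlib.sha256(f"{i}:{key}".encode()).digest()
    return int.from_bytes(d[:4], "big") % SIZE

tables = [[None] * SIZE, [None] * SIZE]

def insert(key: str, max_kicks: int = 50) -> bool:
    t = 0
    for _ in range(max_kicks):
        slot = h(t, key)
        if tables[t][slot] is None:
            tables[t][slot] = key
            return True
        tables[t][slot], key = key, tables[t][slot]  # evict the occupant
        t = 1 - t                                    # reinsert it in the other table
    return False  # probable cycle: a real implementation would rehash

def lookup(key: str) -> bool:
    return tables[0][h(0, key)] == key or tables[1][h(1, key)] == key

for k in ["alice", "bob", "carol"]:
    insert(k)
assert all(lookup(k) for k in ["alice", "bob", "carol"])
```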

  15. Two-factor authentication system based on optical interference and one-way hash function

    Science.gov (United States)

    He, Wenqi; Peng, Xiang; Meng, Xiangfeng; Liu, Xiaoli

    2012-10-01

    We present a two-factor authentication method to verify the identity of a person who tries to access an optoelectronic system. This method is based on the optical interference principle and a traditional one-way hash function (e.g. MD5). The authentication process is straightforward: the phase key and the password-controlled phase lock of a user are loaded onto two Spatial Light Modulators (SLMs) in advance, by which two coherent beams are modulated and then interfere with each other at the output plane, producing an output image. By comparing the output image with all the standard certification images in the database, the system can verify the user's identity. The system design process, however, involves an iterative Modified Phase Retrieval Algorithm (MPRA). For an authorized user, a phase lock is first created based on a "Digital Fingerprint (DF)", which is the result of a hash function applied to a preselected user password. The corresponding phase key can then be determined by use of the phase lock and a designated standard certification image. Note that the encoding/design process can only be realized by digital means, while the authentication process could be achieved digitally or optically. Computer simulations were also given to validate the proposed approach.

  16. Research of RFID Certification Security Protocol based on Hash Function and DES Algorithm

    Directory of Open Access Journals (Sweden)

    bin Xu

    2013-10-01

    Full Text Available RFID has received more and more attention and application, but its security and privacy problems deserve serious concern. Based on an analysis of the certification processes of several typical existing RFID authentication protocols, an improved bidirectional authentication algorithm is proposed; the use of a one-way hash function can solve the security problems of RFID. The protocol resists replay, impedance analysis, forgery and tracking attacks, and is suitable for distributed systems. With computers and the Internet widely used across industries and information exchanged in high-speed transfer processes, information security is a growing concern. The paper first surveys the algorithms built on hash functions, which act as a solid safety lock on information: MD5 and SHA-1 for file verification, encryption, digital signatures, and PKI construction provide security for all kinds of information. The scheme can effectively prevent attacks, ensuring the authenticity of information and that it is neither modified nor leaked.

  17. Deeply learnt hashing forests for content based image retrieval in prostate MR images

    Science.gov (United States)

    Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-03-01

    The deluge in the size and heterogeneity of medical image databases necessitates content based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity preserving feature descriptor called Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization like image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities and anatomies.

  18. Self-Organized Hash Based Secure Multicast Routing Over Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Amit Chopra

    2016-02-01

    Full Text Available Multicast group communication over mobile ad hoc networks faces various challenges related to secure data transmission. To achieve this goal, there is a need to authenticate group members, and it is essential to protect the application data, routing information, and other network resources. Multicast-AODV (MAODV) is the extension of the AODV protocol, and several issues are related to each multicast network operation. With dynamic group behavior, it becomes more challenging to protect the resources of a particular group. Researchers have developed different solutions to secure multicast group communication, and their proposed solutions can be used for resource protection at different layers, i.e. the application layer, physical layer, network layer, etc. Each security solution can guard against a particular security threat. This research paper introduces a self-organized hash based secure routing scheme for multicast ad hoc networks. It uses the group Diffie-Hellman method for key distribution. Route authentication and integrity are both ensured by generating local flag codes and global hash values. In the case of any violation, the route log is monitored to identify the malicious activities.

  19. Protocol of Secure Key Distribution Using Hash Functions and Quantum Authenticated Channels (KDP-6DP)

    Directory of Open Access Journals (Sweden)

    Mohammed M.A. Majeed

    2010-01-01

    Full Text Available Problem statement: In previous research, we investigated the security of communication channels utilizing authentication, key distribution between two parties, error correction and cost establishment. In the present work, we studied new concepts of Quantum Authentication (QA) and key sharing along those lines. Approach: This study presented a new protocol concept that allows session and key generation on-site by independently applying a cascade of two hash functions on a random string of bits at the sender and receiver sides. This protocol, however, requires a reliable method of authentication. It employed an out-of-band authentication methodology based on quantum theory, which uses entangled pairs of photons. Results: The proposed quantum-authenticated channel is secure in the presence of an eavesdropper who has access to both the classical and the quantum channels. Conclusion/Recommendations: The key distribution process using cascaded hash functions provides better security. The concepts presented by this protocol represent a valid approach to the communication security problem.
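
    The on-site cascaded-hash derivation can be illustrated as follows; the concrete hash pair (SHA-256 followed by SHA-512) and the string length are assumptions made for the sketch, since the abstract does not fix them.

```python
import hashlib, secrets

# A minimal sketch of the cascaded-hash idea: both sides hold the same
# authenticated random string and derive session material by applying
# two different hash functions in sequence.
random_string = secrets.token_bytes(32)   # agreed over the quantum-authenticated channel

def derive_key(r: bytes) -> bytes:
    inner = hashlib.sha256(r).digest()     # first hash in the cascade
    return hashlib.sha512(inner).digest()  # second hash in the cascade

sender_key = derive_key(random_string)
receiver_key = derive_key(random_string)   # computed independently on-site
assert sender_key == receiver_key
```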

  20. A Dual Chaotic Hash Function Based on a Cellular Neural Network

    Institute of Scientific and Technical Information of China (English)

    刘慧; 赵耿; 白健

    2014-01-01

    Efficient and fast one-way hash functions are a current focus of security research. This paper constructs a hash function using a cellular neural network whose parameters are produced by a dual chaotic system combining the Logistic map with the Chebyshev map. The plaintext is processed block by block, and a 128-bit hash value is finally produced by XORing the per-block hash values. Experimental data and simulation analysis show that the proposed scheme satisfies the confusion and diffusion properties required of a one-way hash function, has good weak-collision resistance and sensitivity to initial values, and is simple in structure and easy to implement.
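
    A toy illustration of the general pattern, block-wise hashing with chaotic-map-derived parameters and an XOR of the block digests, is sketched below; the map constants, block size, and use of MD5 as the inner mixer are assumptions for the sketch, not the paper's construction.

```python
import hashlib
from math import cos, acos

def logistic(x: float) -> float:
    return 3.99 * x * (1.0 - x)          # logistic map, chaotic regime

def chebyshev(x: float, k: int = 4) -> float:
    return cos(k * acos(x))              # Chebyshev map on [-1, 1]

def chaotic_hash(msg: bytes, x0: float = 0.654321) -> bytes:
    digest = bytes(16)                   # 128-bit accumulator
    x = x0
    for i in range(0, len(msg), 16):
        block = msg[i:i + 16]
        x = logistic(x)                  # iterate the first map
        y = chebyshev(2.0 * x - 1.0)     # couple it into the second map
        param = int((y + 1.0) * 2**31) & 0xFFFFFFFF
        h = hashlib.md5(param.to_bytes(4, "big") + block).digest()
        digest = bytes(a ^ b for a, b in zip(digest, h))  # XOR block digests
    return digest

print(chaotic_hash(b"hello world").hex())
```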

  1. A Lightweight Hash Function Based on a Coupled Dynamic Integer Tent Map Lattice Model

    Institute of Scientific and Technical Information of China (English)

    张啸; 刘建东; 商凯; 胡辉辉

    2016-01-01

    We design a lightweight hash function based on a coupled dynamic integer tent map lattice model, suitable for the authentication process between tags and readers in RFID systems. The algorithm can output hash values of any byte length and is defined over the integer set, overcoming the floating-point operations required by most mainstream chaotic cryptographic algorithms and making it suitable for systems with limited hardware resources. Experiments and simulation analysis show that the proposed hash function offers a high level of security and satisfies the security requirements of RFID authentication systems.

  2. A Study of Login Security for an Academic Information System Using SMS-Based One Time Password Authentication with MD5 Hash

    Directory of Open Access Journals (Sweden)

    Kartika Imam Santoso

    2016-01-01

    Full Text Available Login security for accessing a web-based Academic Information System (SIAKAD) is provided by a One Time Password (OTP) generated with the MD5 hash, producing a code sent by SMS for authentication. The OTP application builds the MD5 input from the student table fields NIM (student ID), phone number, and access time. The hash function yields 32 hexadecimal digits; any letters in the digest are replaced with digits, and six digits are then taken from the result. These six digits are sent as the OTP through the Gammu SMS service and also stored in a table. The OTP sent to the user is matched against the stored value to check its validity; only if they match can the user access SIAKAD. The generated OTP secures the SIAKAD user account after login with username and password. The OTP is valid for three minutes, a restriction that narrows the window in which a hacker could intercept and intrude, and one that proved workable in trials with several mobile network operators in Indonesia. Keywords: Academic Information System; Login; MD5 Hash; One Time Password; SMS; Gammu
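
    The described generation rule can be sketched directly; the letter-to-digit mapping and the field names below are illustrative assumptions where the abstract leaves details open.

```python
import hashlib, time

def generate_otp(nim: str, phone: str) -> str:
    """MD5 over student ID, phone number and access time; letters in the
    hex digest are mapped to digits; the first six digits are kept."""
    access_time = str(int(time.time()))
    digest = hashlib.md5((nim + phone + access_time).encode()).hexdigest()
    # replace hex letters a-f with digits (here: their alphabet position 1-6)
    digits = "".join(str("abcdef".index(c) + 1) if c in "abcdef" else c
                     for c in digest)
    return digits[:6]

otp = generate_otp("M3114001", "081234567890")
print(otp)  # sent by SMS via Gammu and stored for a three-minute validity window
```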

  3. Design and Analysis of a Multivariate Hash Function

    Institute of Scientific and Technical Information of China (English)

    王后珍; 张焕国; 杨飚

    2011-01-01

    A novel hash algorithm whose security is based on the difficulty of solving systems of nonlinear multivariate polynomial equations over a finite field is designed and implemented. We propose building a secure hash by using higher-degree multivariate polynomials as the compression function. Compared with the hash algorithms in widespread use, the new algorithm has the following advantages: its security rests on a recognized hard mathematical problem; the hash length can be changed freely according to the user's needs; randomness is introduced globally, by randomly selecting the hash function from a family of hash functions rather than by randomizing the message itself; and the design is automated, so users can construct hash functions meeting their specific requirements. We analyze the security properties and practical feasibility of the new algorithm, where the compression functions are randomly chosen third-degree polynomials. The experimental results show that the new algorithm has good efficiency and performance, comparable with other hash functions.

  4. The Forecast Performance of Competing Implied Volatility Measures

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas

    Corridor implied volatility (CIV) measures are explored. For all pair-wise comparisons, it is found that a CIV measure that is closely related to the model-free implied volatility nearly always delivers the most accurate forecasts for the majority of the firms. This finding remains consistent for different forecast horizons...

  5. Temporal characteristics of neuronal sources for implied motion perception

    NARCIS (Netherlands)

    Lorteije, J.A.M.; Kenemans, J.L.; Jellema, T.; Lubbe, R.H.J. van der; Heer, F. de; Wezel, R.J.A. van

    2004-01-01

    Viewing photographs of objects in motion evokes higher fMRI activation in human MT+ than similar photographs without this implied motion. MT+ is traditionally considered to be involved in motion perception. Therefore, this finding suggests feedback from object-recognition areas to MT+. To investigat

  6. Reverse Hash Chain Traversal Based on a Binary Tree

    Institute of Scientific and Technical Information of China (English)

    傅建庆; 吴春明; 吴吉义; 平玲娣

    2012-01-01

    An algorithm improving the time and space complexity of reverse hash chain traversal is proposed. It implements efficient reverse traversal using stack operations and maps the traversal of a reverse hash chain onto the postorder traversal of a binary tree, whose properties are used for a theoretical analysis and proof of the storage and computation costs. By this mapping, the proposed algorithm reduces the product of the total number of hash operations and the required storage space to O(n(lb n)^2), where n is the length of the chain. The analysis and proof show that traversing a reverse hash chain of length n requires storing only ⌊lb n⌋ + 1 node values at a time and performing no more than (⌊lb n⌋/2 + 1)n hash operations in total. Compared with other algorithms of this kind, the proposed one applies to hash chains of any length, eliminating the requirement that the chain length be an integer power of 2. An extended algorithm maps the traversal onto the postorder traversal of a k-ary tree, where k is an integer no less than 3; the storage requirement drops to ⌈log_k[(k-1)n+1]⌉, but the total number of hash operations rises to [(⌈log_k[(k-1)n+1]⌉ - 1)k/2 + 1]n. Finally, splitting the hash chain into p segments (p ≥ 2) before the algorithm runs reduces the total number of hash operations to (⌊lb(n/p)⌋/2 + 1)n, while raising the required storage to (⌊lb(n/p)⌋ + 1)p.
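
    To see what is being optimized, the naive reverse traversal below recomputes each released value from the seed at O(n) hashes apiece, the O(n^2) baseline that the binary-tree algorithm improves on; this sketch shows the baseline only, not the paper's algorithm.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def naive_reverse_traversal(seed: bytes, n: int):
    """Release H^n(seed), H^(n-1)(seed), ..., seed, recomputing each
    value from the seed: O(n) hashes per value, O(n^2) in total."""
    for i in range(n, -1, -1):
        v = seed
        for _ in range(i):
            v = H(v)
        yield v

for value in naive_reverse_traversal(b"seed", 4):
    print(value.hex()[:16])
```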

  7. Implied Movement in Static Images Reveals Biological Timing Processing

    Directory of Open Access Journals (Sweden)

    Francisco Carlos Nather

    2015-08-01

    Full Text Available Visual perception is adapted toward a better understanding of our own movements than those of non-conspecifics. The present study determined whether time perception is affected by pictures of different species by considering the evolutionary scale. Static (“S”) and implied movement (“M”) images of a dog, cheetah, chimpanzee, and man were presented to undergraduate students. S and M images of the same species were presented in random order or one after the other (S-M or M-S) for two groups of participants. Movement, Velocity, and Arousal semantic scales were used to characterize some properties of the images. Implied movement affected time perception, in which M images were overestimated. The results are discussed in terms of visual motion perception related to biological timing processing that could be established early in terms of the adaptation of humankind to the environment.

  8. The Universal Kolyvagin Recursion Implies the Kolyvagin Recursion

    Institute of Scientific and Technical Information of China (English)

    Yi OUYANG

    2007-01-01

    Let U_z be the universal norm distribution and M a fixed power of the prime p. By using the double complex method employed by Anderson, we study the universal Kolyvagin recursion occurring in the canonical basis in the cohomology group H^0(G_z, U_z/M U_z). We furthermore show that the universal Kolyvagin recursion implies the Kolyvagin recursion in the theory of Euler systems. One certainly hopes this could lead to a new way to find new Euler systems.

  9. A hybrid cloud read aligner based on MinHash and kmer voting that preserves privacy

    Science.gov (United States)

    Popic, Victoria; Batzoglou, Serafim

    2017-05-01

    Low-cost clouds can alleviate the compute and storage burden of the genome sequencing data explosion. However, moving personal genome data analysis to the cloud can raise serious privacy concerns. Here, we devise a method named Balaur, a privacy preserving read mapper for hybrid clouds based on locality sensitive hashing and kmer voting. Balaur can securely outsource a substantial fraction of the computation to the public cloud, while being highly competitive in accuracy and speed with non-private state-of-the-art read aligners on short read data. We also show that the method is significantly faster than the state of the art in long read mapping. Therefore, Balaur can enable institutions handling massive genomic data sets to shift part of their analysis to the cloud without sacrificing accuracy or exposing sensitive information to an untrusted third party.
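
    A minimal MinHash-over-k-mers sketch, far simpler than Balaur but in the same locality-sensitive spirit, looks like this; k, the number of hash functions, and the SHA-256-based hashing are illustrative choices.

```python
import hashlib

def kmers(seq: str, k: int = 8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(seq: str, num_hashes: int = 16) -> list:
    """One minimum per salted hash function: reads whose k-mer sets
    overlap heavily tend to share minimum hash values."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{salt}:{km}".encode()).digest()[:8], "big")
            for km in kmers(seq)
        ))
    return sig

a = minhash_signature("ACGTACGTACGTTTACGGA")
b = minhash_signature("ACGTACGTACGTTTACGGT")   # one mismatch
similarity = sum(x == y for x, y in zip(a, b)) / len(a)
print(similarity)   # estimates the Jaccard overlap of the k-mer sets
```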

  10. DistHash: A robust P2P DHT-based system for replicated objects

    CERN Document Server

    Dobre, Ciprian; Cristea, Valentin

    2011-01-01

    Over the Internet today, computing and communications environments are significantly more complex and chaotic than classical distributed systems, lacking any centralized organization or hierarchical control. There has been much interest in emerging Peer-to-Peer (P2P) network overlays because they provide a good substrate for creating large-scale data sharing, content distribution and application-level multicast applications. In this paper we present DistHash, a P2P overlay network designed to share large sets of replicated distributed objects in the context of large-scale highly dynamic infrastructures. We present original solutions to achieve optimal message routing in hop-count and throughput, provide an adequate consistency approach among replicas, as well as provide a fault-tolerant substrate.

  11. Efficient Query-by-Content Audio Retrieval by Locality Sensitive Hashing and Partial Sequence Comparison

    Science.gov (United States)

    Yu, Yi; Joe, Kazuki; Downie, J. Stephen

    This paper investigates suitable indexing techniques to enable efficient content-based audio retrieval in large acoustic databases. To make an index-based retrieval mechanism applicable to audio content, we investigate the design of Locality Sensitive Hashing (LSH) and partial sequence comparison. We propose a fast and efficient query-by-content audio retrieval framework and develop an audio retrieval system. Based on this framework, four different audio retrieval schemes, LSH-Dynamic Programming (DP), LSH-Sparse DP (SDP), Exact Euclidean LSH (E2LSH)-DP, and E2LSH-SDP, are introduced and evaluated in order to better understand the performance of audio retrieval algorithms. The experimental results indicate that, compared with traditional DP and the other three competitive schemes, E2LSH-SDP exhibits the best tradeoff in terms of response time, retrieval accuracy and computation cost.
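
    A minimal sketch of E2LSH-style hashing for audio feature frames is given below; the dimensionality, bucket width, and number of hash functions are illustrative assumptions, not the paper's settings.

```python
import random

# Each hash is floor((a . v + b) / w) with Gaussian a and uniform b,
# so nearby feature vectors fall into the same bucket with high probability.
DIM, W = 13, 4.0          # e.g. 13 MFCC-like coefficients per frame
random.seed(0)

def make_hash():
    a = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    b = random.uniform(0.0, W)
    def h(v):
        return int((sum(ai * vi for ai, vi in zip(a, v)) + b) // W)
    return h

hashes = [make_hash() for _ in range(4)]
frame = [0.1 * i for i in range(DIM)]
near = [0.1 * i + 0.01 for i in range(DIM)]   # a slightly perturbed frame
print([h(frame) for h in hashes])
print([h(near) for h in hashes])   # mostly identical bucket indices
```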

  12. Tree and Hashing Data Structures to Speed up Chemical Searches: Analysis and Experiments.

    Science.gov (United States)

    Nasr, Ramzi; Kristensen, Thomas; Baldi, Pierre

    2011-09-01

    In many large chemoinformatics database systems, molecules are represented by long binary fingerprint vectors whose components record the presence or absence of particular functional groups or combinatorial features. For a given query molecule, one is interested in retrieving all the molecules in the database with a similarity to the query above a certain threshold. Here we describe a method for speeding up chemical searches in these large databases of small molecules by combining previously developed tree and hashing data structures to prune the search space without any false negatives. More importantly, we provide a mathematical analysis that allows one to predict the level of pruning, and validate the quality of the predictions of the method through simulation experiments.
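
    One widely used count-based bound of this kind, shown here as an illustration rather than as the paper's exact method, prunes candidates from their bit counts alone without false negatives.

```python
# For binary fingerprints with a and b set bits, the Tanimoto similarity
# T = |A & B| / |A | B| can never exceed min(a, b) / max(a, b), so most
# candidates can be discarded before the full comparison.
def tanimoto(x: set, y: set) -> float:
    return len(x & y) / len(x | y)

def search(query: set, database: list, threshold: float) -> list:
    qa = len(query)
    hits = []
    for fp in database:
        a, b = qa, len(fp)
        if min(a, b) / max(a, b) < threshold:   # cheap upper bound
            continue                            # safe to prune
        if tanimoto(query, fp) >= threshold:    # full comparison
            hits.append(fp)
    return hits

db = [{1, 2, 3, 4}, {1, 2, 3, 4, 5, 6, 7, 8}, {3, 4, 5}]
print(search({1, 2, 3, 5}, db, 0.5))   # -> [{1, 2, 3, 4}]
```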

  13. A Novel Approach for Verifiable Secret Sharing by using a One Way Hash Function

    CERN Document Server

    Parmar, Keyur

    2012-01-01

    Threshold secret sharing schemes do not prevent malicious behavior by the dealer or the shareholders, so we need verifiable secret sharing to detect and identify cheaters and achieve fair reconstruction of a secret. The problem of verifiable secret sharing is to verify the shares distributed by the dealer. A novel approach for verifiable secret sharing is presented in this paper in which neither the dealer nor the shareholders are assumed to be honest. We extend the term verifiable secret sharing to cover verifying the shares distributed by the dealer, verifying the shares submitted by shareholders for secret reconstruction, and verifying the reconstructed secret. Our proposed scheme uses a one way hash function and a probabilistic homomorphic encryption function to provide verifiability and fair reconstruction of a secret.

  14. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing.

    Science.gov (United States)

    Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin

    2016-06-22

    A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least square method based on collinearity equations. The result shows that acceptable accuracy of EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  15. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least square method based on collinearity equations. The result shows that acceptable accuracy of EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  16. HCAA: A Dynamic Algorithm for Resolving Excessive Hash Collisions

    Institute of Scientific and Technical Information of China (English)

    谢云; 柳厅文; 乔登科; 孙永; 刘金刚

    2011-01-01

    As a data structure for rapid lookup, the hash table is widely used in network security applications such as firewalls. However, attackers may launch hash attacks against these applications to make them stop responding, so that malicious data flows can escape the management and control of the network security application. This paper introduces HCAA (Hash Collision-Acceptable Algorithm), a dynamic algorithm for resolving excessive hash collisions. When hash collisions become too concentrated, the algorithm handles the colliding data flows by dynamically allocating additional hash tables that use different hash functions, confining collisions within an acceptable range. Experimental results show that, compared with existing methods, HCAA achieves a more balanced hash distribution while using fewer hash table entries, so that data flows can be hashed faster.
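
    The overflow idea can be sketched as follows; the chain-length threshold, salted SHA-256 hashing, and level structure are illustrative assumptions, not the paper's exact algorithm.

```python
import hashlib

# When one bucket chain grows past a threshold, further entries are
# rehashed into a fresh table that uses a differently salted hash
# function, capping the worst-case chain length.
SIZE, THRESHOLD = 8, 4

def h(salt: int, key: str) -> int:
    return int.from_bytes(hashlib.sha256(f"{salt}:{key}".encode()).digest()[:4],
                          "big") % SIZE

class CollisionAwareTable:
    def __init__(self):
        self.levels = [dict()]        # level i uses salt i

    def insert(self, key: str, value):
        for salt, level in enumerate(self.levels):
            bucket = level.setdefault(h(salt, key), [])
            if len(bucket) < THRESHOLD:
                bucket.append((key, value))
                return
        # every level's bucket is saturated: open a new level with a new salt
        self.levels.append({h(len(self.levels), key): [(key, value)]})

    def lookup(self, key: str):
        for salt, level in enumerate(self.levels):
            for k, v in level.get(h(salt, key), []):
                if k == key:
                    return v
        return None

t = CollisionAwareTable()
for i in range(40):
    t.insert(f"flow{i}", i)
assert t.lookup("flow7") == 7
```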

  17. Faster than light motion does not imply time travel

    CERN Document Server

    Andréka, H; Németi, I; Stannett, M; Székely, G

    2014-01-01

    Seeing the many examples in the literature of causality violations based on faster-than-light (FTL) signals, one naturally thinks that FTL motion leads inevitably to the possibility of time travel. We show that this logical inference is invalid by demonstrating a model, based on (3+1)-dimensional Minkowski spacetime, in which FTL motion is permitted (in every direction without any limitation on speed) yet which does not admit time travel. Moreover, the Principle of Relativity is true in this model in the sense that all observers are equivalent. In short, FTL motion does not imply time travel after all.

  18. Spectroscopic determination of masses (and implied ages) for red giants

    CERN Document Server

    Ness, M; Rix, H-W; Martig, M; Pinsonneault, Marc H; Ho, A Y Q

    2015-01-01

    The mass of a star is arguably its most fundamental parameter. For red giant stars, tracers luminous enough to be observed across the Galaxy, mass implies a stellar evolution age. It has proven to be extremely difficult to infer ages and masses directly from red giant spectra using existing methods. From the KEPLER and APOGEE surveys, samples of several thousand stars exist with high-quality spectra and asteroseismic masses. Here we show that from these data we can build a data-driven spectral model using The Cannon, which can determine stellar masses to ~0.07 dex from APOGEE DR12 spectra of red giants; these imply age estimates accurate to ~0.2 dex (40 percent). We show that The Cannon constrains these ages foremost from spectral regions with CN absorption lines, elements whose surface abundances reflect mass-dependent dredge-up. We deliver an unprecedented catalog of 80,000 giants (including 20,000 red-clump stars) with mass and age estimates, spanning the entire disk (from the Galactic center t...

  19. A Hash Based Remote User Authentication and Authenticated Key Agreement Scheme for the Integrated EPR Information System.

    Science.gov (United States)

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng

    2015-11-01

    To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.

  20. An Image Hash Algorithm Based on Chaos Theory

    Institute of Scientific and Technical Information of China (English)

    肖潇; 胡春强; 邓绍江

    2011-01-01

    To meet the needs of image authentication, an image hash algorithm based on chaos theory is proposed. First, the original image is scrambled with the Logistic map to obtain an encrypted image; the difference matrix is then modulated and quantized to obtain a hash sequence of fixed length. The effects of image scaling and JPEG compression on the robustness of the hash sequence are discussed: with a threshold of 0.1, experiments on these attacks show that the image hash has a degree of robustness against both, making the method effective for studying image authentication.

  1. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-03-15

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  2. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    Science.gov (United States)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.

  3. Does China's Huge External Surplus Imply an Undervalued Renminbi?

    Institute of Scientific and Technical Information of China (English)

    Anthony J. Makin

    2007-01-01

    A pegged exchange rate regime has been pivotal to China's export-led development strategy. However, its huge trade surpluses and massive build up of international reserves have been matched by large deficits for major trading partners, creating acute policy concerns abroad, especially in the USA. This paper provides a straightforward conceptual framework for interpreting the effect of China's exchange rate policy on its own trade balance and that of trading partners in the context of discrepant economic growth rates. It shows how pegging the exchange rate when output is outstripping expenditure induces China's trade surpluses and counterpart deficits for its trading partners. An important corollary is that given its strictly regulated capital account, China's persistently large surpluses imply a significantly undervalued renminbi, which should gradually become more flexible.

  4. Implementation of a Hash Algorithm in a Network Processor

    Institute of Scientific and Technical Information of China (English)

    付仲满; 张辉; 李苗; 刘涛

    2014-01-01

    A novel hash algorithm for network processors is proposed. It effectively resolves hash collisions by constructing a new lookup table structure and a two-level hash function. The software procedure for building the hash table and the hardware lookup flow are described, and on top of the hash lookup, a hardware learning process and an ageing mechanism for table entries are designed to simplify entry updates. For different applications the algorithm builds different types of hash tables, making reasonable use of internal and external memory resources and balancing memory usage against processing speed. Simulation results show that the algorithm works well for lookup tables with different numbers of entries and keyword lengths; the average length of a successful lookup is 2, reducing the number of memory accesses, and a single micro-engine achieves a lookup speed of up to 25 Mb/s, satisfying the 20 Gb/s interface processing bandwidth requirement of the network processor.

  5. Stringent Mitigation Policy Implied By Temperature Impacts on Economic Growth

    Science.gov (United States)

    Moore, F.; Turner, D.

    2014-12-01

    Integrated assessment models (IAMs) compare the costs of greenhouse gas mitigation with damages from climate change in order to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained GDP growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth-rates in the Dynamic Integrated Climate and Economy (DICE) model via two pathways, total factor productivity (TFP) growth and capital depreciation. Even under optimistic adaptation assumptions, this damage specification implies that optimal climate policy involves the elimination of emissions in the near future, the stabilization of global temperature change below 2°C, and a social cost of carbon (SCC) an order of magnitude larger than previous estimates. A sensitivity analysis shows that the magnitude of growth effects, the rate of adaptation, and the dynamic interaction between damages from warming and GDP are three critical uncertainties and an important focus for future research.

  6. IMPLIED-IN-PRICES EXPECTATIONS: THEIR ROLE IN ARBITRAGE

    Directory of Open Access Journals (Sweden)

    Sergei A. Ivanov

    2014-02-01

    Full Text Available Real prices are created on markets by supply and demand, and they do not have to follow the distributions or have the properties that we often assume. However, prices have to follow some rules in order to make arbitrage impossible; the existence of arbitrage opportunities means the existence of inefficiency. Prices always contain expectations about the future. Constraints on such expectations and arbitrage mechanisms were investigated with minimal assumptions about price processes (e.g. real prices do not have to be martingales). It was shown that the constraints found can easily fail under some widespread conditions. Fluctuating risk-free interest rates create an excess amount of the asset in comparison with the case when rates are constant. This property allows arbitrage and making a risk-free profit. The possibility is hard to exploit; in theory, however, it exists on almost every market, since an interest rate is implied in almost every price. The possibility arises wherever there is uncertainty about the future. This leads to the conclusion that there is a very fundamental inefficiency, one potentially able to change markets dramatically.

  7. Semiotic processes implied in the High Dilution phenomena

    Directory of Open Access Journals (Sweden)

    Gheorghe Jurj

    2011-07-01

    Full Text Available Semiotic processes (or semiosis) are processes that carry meaning and are performed through signs. A sign is something that stands for something else; it is the main mediating factor between an object and an interpreter, able to connect them and give rise to significations. The semiotic perspective, according to Charles Sanders Peirce, is basically triadic: all aspects of reality are triadic, comprising the categories of Firstness, Secondness, and Thirdness in a continuous and virtually infinite process of semiosis, i.e. of various forms of giving rise to meanings that accordingly give rise to reactions, actions and transformations. The aim of the present paper is to examine the possible levels of semiosis implied in the high dilution phenomenon, beginning with the so-called "potency" process applied to substrata (where every potency may be considered a sign for the next one) and arriving at the complex responses of living bodies to infinitesimal signs.

  8. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    Science.gov (United States)

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.

  9. Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy

    Science.gov (United States)

    Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi

    To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are proposals to manage item-level product traceability information with DHTs. In such an application, each entry of a huge number of item-level IDs needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows our approach fits well on a DHT with a low rate of failure and a huge number of data entries.

  10. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    Science.gov (United States)

    Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme supports efficiently the password change phase always locally without contacting the remote server and correctly. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme. PMID:24892078
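
    As a generic illustration of the hash-and-XOR style such schemes use (not the authors' exact protocol), a one-direction challenge-response might look like this:

```python
import hashlib, secrets

# Server and card share a secret established at registration; each side
# can prove knowledge of it without ever sending the secret itself.
def H(*parts: bytes) -> bytes:
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret = secrets.token_bytes(32)          # shared via registration

# Card -> server: a nonce masked with the hashed secret, plus a proof value
nonce = secrets.token_bytes(32)
masked = xor(nonce, H(secret))
proof = H(secret, nonce)

# Server side: unmask the nonce with its copy of the secret, verify the proof
recovered = xor(masked, H(secret))
assert H(secret, recovered) == proof       # card is authenticated
```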

  11. Discrete cosine transform and hash functions toward implementing a (robust-fragile) watermarking scheme

    Science.gov (United States)

    Al-Mansoori, Saeed; Kunhu, Alavi

    2013-10-01

    This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the "SHA-1" hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e. tamper-proofing). The proposed technique was thus developed as a result of new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, a "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, a couple of attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.

  12. A Novel Steganographic Scheme Based on Hash Function Coupled With AES Encryption

    Directory of Open Access Journals (Sweden)

    Rinu Tresa M J

    2014-04-01

    Full Text Available In the present scenario the use of images has increased extremely in the cyber world, so that data can be transferred with the help of these images in a secured way. Image steganography becomes important in this manner. Steganography and cryptography are two techniques that are often confused with each other. The input and output of steganography look alike, but for cryptography the output is in an encrypted form, which always draws attention from an attacker. This paper combines both steganography and cryptography so that an attacker does not learn of the existence of the message, and the message itself is encrypted to ensure more security. The textual data entered by the user is encrypted using the AES algorithm. After encryption, the encrypted data is stored in a colour image using a hash based algorithm. Most of the steganographic algorithms available today are suitable only for a specific image format and suffer from poor quality of the embedded image. The proposed work does not corrupt the image quality in any form. The striking feature is that the algorithm is suitable for almost all image formats, e.g. JPEG/JPG, Bitmap, TIFF and GIF.

  13. Fully De-Amortized Cuckoo Hashing for Cache-Oblivious Dictionaries and Multimaps

    CERN Document Server

    Goodrich, Michael T; Mitzenmacher, Michael; Thaler, Justin

    2011-01-01

    A dictionary (or map) is a key-value store that requires all keys be unique, and a multimap is a key-value store that allows for multiple values to be associated with the same key. We design hashing-based indexing schemes for dictionaries and multimaps that achieve worst-case optimal performance for lookups and updates, with a small or negligible probability the data structure will require a rehash operation, depending on whether we are working in the external-memory (I/O) model or one of the well-known versions of the Random Access Machine (RAM) model. One of the main features of our constructions is that they are fully de-amortized, meaning that their performance bounds hold without one having to tune their constructions with certain performance parameters, such as the constant factors in the exponents of failure probabilities or, in the case of the external-memory model, the size of blocks or cache lines and the size of internal memory (i.e., our external-memory algorithms are cache oblivious). ...

  14. The Study of Detecting Replicate Documents Using MD5 Hash Function

    Directory of Open Access Journals (Sweden)

    Pushpendra Singh Tomar

    2011-12-01

    Full Text Available A great deal of the Web is replicate or near-replicate content. Documents may be served in different formats: HTML, PDF, and Text for different audiences. Documents may get mirrored to avoid delays or to provide fault tolerance. Algorithms for detecting replicate documents are critical in applications where data is obtained from multiple sources. The removal of replicate documents is necessary, not only to reduce runtime, but also to improve search accuracy. Today, search engine crawlers are retrieving billions of unique URLs, of which hundreds of millions are replicates of some form. Thus, quickly identifying replicates expedites indexing and searching. One vendor's analysis of 1.2 billion URLs resulted in 400 million exact replicates found with an MD5 hash. Reducing collection sizes by tens of percentage points results in great savings in indexing time and a reduction in the amount of hardware required to support the system. Last, and probably most significant, users benefit by eliminating replicate results. By efficiently presenting only unique documents, user satisfaction is likely to increase.
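
    The core of exact-replicate detection with MD5 is a one-pass digest table; the sketch below uses illustrative URLs and assumes whole-document hashing.

```python
import hashlib

def deduplicate(documents: dict) -> dict:
    """Keep one representative per MD5 digest: documents with identical
    bytes share a digest, so exact replicates are dropped in one pass."""
    seen, unique = set(), {}
    for url, content in documents.items():
        digest = hashlib.md5(content).hexdigest()
        if digest not in seen:        # first time this content is met
            seen.add(digest)
            unique[url] = content
    return unique

docs = {
    "http://a.example/page": b"same bytes",
    "http://b.example/mirror": b"same bytes",     # exact replicate
    "http://c.example/other": b"different bytes",
}
print(len(deduplicate(docs)))   # 2
```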

  15. The Study of Detecting Replicate Documents Using MD5 Hash Function

    Directory of Open Access Journals (Sweden)

    Mr. Pushpendra Singh Tomar

    2011-09-01

    Full Text Available A great deal of the Web is replicate or near-replicate content. Documents may be served in different formats: HTML, PDF, and Text for different audiences. Documents may get mirrored to avoid delays or to provide fault tolerance. Algorithms for detecting replicate documents are critical in applications where data is obtained from multiple sources. The removal of replicate documents is necessary, not only to reduce runtime, but also to improve search accuracy. Today, search engine crawlers are retrieving billions of unique URLs, of which hundreds of millions are replicates of some form. Thus, quickly identifying replicates expedites indexing and searching. One vendor's analysis of 1.2 billion URLs resulted in 400 million exact replicates found with an MD5 hash. Reducing collection sizes by tens of percentage points results in great savings in indexing time and a reduction in the amount of hardware required to support the system. Last, and probably most significant, users benefit by eliminating replicate results. By efficiently presenting only unique documents, user satisfaction is likely to increase.

  16. An Efficient Trajectory Data Index Integrating R-tree, Hash and B*-tree

    Directory of Open Access Journals (Sweden)

    GONG Jun

    2015-05-01

    Full Text Available To take both efficiency and query capability into account, this paper presents a new trajectory data index named HBSTR-tree. In an HBSTR-tree, trajectory sample points are stored sequentially in trajectory nodes. A hash table is adopted to index the most recent trajectory nodes of mobile targets, and trajectory nodes are not inserted into the spatio-temporal R-tree until they are full, which enhances generation performance. Meanwhile, a one-dimensional index of trajectory nodes in the form of a B*-tree is built. The HBSTR-tree can therefore satisfy both spatio-temporal queries and target trajectory queries. In order to improve search efficiency, a new criterion for the spatio-temporal R-tree and a new node-selection sub-algorithm are put forward, which further optimize the insertion algorithm of the spatio-temporal R-tree. Furthermore, a database storage scheme for the spatio-temporal R-tree is also proposed. Experimental results prove that the HBSTR-tree outperforms current methods in several aspects, such as generation efficiency, query performance and supported query types, and supports real-time updates and efficient access to huge trajectory databases.
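
    A toy sketch of the buffering idea behind the HBSTR-tree: a hash table keyed by target id holds each target's most recent, still-filling trajectory node, and a node is handed to the spatio-temporal index only once it is full. The node capacity and the plain list standing in for the R-tree/B*-tree layers are assumptions for illustration:

        NODE_CAPACITY = 4            # sample points per trajectory node (assumed)

        recent = {}                  # hash table: target id -> open trajectory node
        spatiotemporal_index = []    # stand-in for the spatio-temporal R-tree

        def add_sample(target_id, point):
            node = recent.setdefault(target_id, [])
            node.append(point)
            if len(node) == NODE_CAPACITY:      # node full: flush to the R-tree
                spatiotemporal_index.append((target_id, list(node)))
                recent[target_id] = []          # start a fresh node

        for t in range(10):
            add_sample("bus-42", (t, t * 0.5))
        print(len(spatiotemporal_index), "full nodes flushed;",
              len(recent["bus-42"]), "points still buffered")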

  17. An update on the side channel cryptanalysis of MACs based on cryptographic hash functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2007-01-01

    Okeya has established that HMAC/NMAC implementations based only on the Matyas-Meyer-Oseas (MMO) PGV scheme and his two refined PGV schemes are secure against side channel DPA attacks when the block cipher in these constructions is secure against these attacks. The significant result of Okeya's analysis is that the implementations of HMAC/NMAC with the Davies-Meyer (DM) compression function based hash functions such as MD5 and SHA-1 are vulnerable to side channel attacks. In this paper, first we show a partial key recovery attack on NMAC/HMAC based on Okeya's two refined PGV schemes by taking practical constraints into consideration. Next, we propose new hybrid NMAC/HMAC schemes for security against side channel attacks assuming that their underlying block cipher is ideal. We then show that M-NMAC, MDx-MAC and a variant of the envelope MAC scheme based on DM with an ideal block cipher are secure against DPA attacks.

  18. User characteristics and effect profile of Butane Hash Oil: An extremely high-potency cannabis concentrate.

    Science.gov (United States)

    Chan, Gary C K; Hall, Wayne; Freeman, Tom P; Ferris, Jason; Kelly, Adrian B; Winstock, Adam

    2017-09-01

    Recent reports suggest an increase in the use of extremely potent cannabis concentrates such as Butane Hash Oil (BHO) in some developed countries. The aims of this study were to examine the characteristics of BHO users and the effect profiles of BHO. An anonymous online survey was conducted in over 20 countries in 2014 and 2015. Participants aged 18 years or older were recruited through onward promotion and online social networks. The overall sample size was 181,870. In this sample, 46% (N=83,867) reported using some form of cannabis in the past year, and 3% reported BHO use (n=5922). Participants reported their use of 7 types of cannabis in the past 12 months, the source of their cannabis, reasons for use, use of other illegal substances, and lifetime diagnosis of depression, anxiety and psychosis. Participants were asked to rate the subjective effects of BHO and high potency herbal cannabis. Participants who reported a lifetime diagnosis of depression (OR=1.15, p=0.003) or anxiety (OR=1.72, p<…) were more likely to use BHO than other forms of cannabis. BHO users also reported stronger negative effects and fewer positive effects when using BHO than high potency herbal cannabis (p<…). Copyright © 2017. Published by Elsevier B.V.

  19. Analysis of Federal Subsidies: Implied Price of Carbon

    Energy Technology Data Exchange (ETDEWEB)

    D. Craig Cooper; Thomas Foulke

    2010-10-01

    For informed climate change policy, it is important for decision makers to be able to assess how the costs and benefits of federal energy subsidies are distributed and to be able to have some measure to compare them. One way to do this is to evaluate the implied price of carbon (IPC) for a federal subsidy, or set of subsidies; where the IPC is the cost of the subsidy to the U.S. Treasury divided by the emissions reductions it generated. Subsidies with lower IPC are more cost effective at reducing greenhouse gas emissions, while subsidies with a negative IPC act to increase emissions. While simple in concept, the IPC is difficult to calculate in practice. Calculation of the IPC requires knowledge of (i) the amount of energy associated with the subsidy, (ii) the amount and type of energy that would have been produced in the absence of the subsidy, and (iii) the greenhouse gas emissions associated with both the subsidized energy and the potential replacement energy. These pieces of information are not consistently available for federal subsidies, and there is considerable uncertainty in cases where the information is available. Thus, exact values for the IPC based upon fully consistent standards cannot be calculated with available data. However, it is possible to estimate a range of potential values sufficient for initial comparisons. This study has employed a range of methods to generate “first order” estimates for the IPC of a range of federal subsidies using static methods that do not account for the dynamics of supply and demand. The study demonstrates that, while the IPC value depends upon how the inquiry is framed and the IPC cannot be calculated in a “one size fits all” manner, IPC calculations can provide a valuable perspective for climate policy analysis. IPC values are most useful when calculated within the perspective of a case study, with the method and parameters of the calculation determined by the case. The IPC of different policy measures can
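
    A hedged worked example of the IPC definition used above, with purely illustrative figures rather than values from the study:

        subsidy_cost_usd = 50_000_000        # hypothetical cost to the Treasury
        emissions_reduced_tco2 = 2_000_000   # hypothetical reduction, tonnes CO2

        ipc = subsidy_cost_usd / emissions_reduced_tco2
        print(f"IPC = ${ipc:.2f} per tonne CO2")   # IPC = $25.00 per tonne CO2

        # If the subsidy instead acted to increase emissions, the denominator
        # would be negative and so would the IPC.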

  20. A new multivariate Hash algorithm based on an improved Merkle-Damgård construction

    Institute of Scientific and Technical Information of China (English)

    王尚平; 任姣霞; 张亚玲; 韩照国

    2011-01-01

    As there are some security defects in traditional Hash algorithms, a new Hash algorithm was proposed. The algorithm's security is based on the difficulty of solving large systems of quadratic multivariate polynomial equations over a finite field. An improved Merkle-Damgård construction is proposed, and the idea of NMAC (nested MAC) is used in the new Hash algorithm; a counter is also added to the construction to resist some attacks on the Merkle-Damgård construction. The output size of the new Hash algorithm is adjustable, aiming to provide different levels of security. The new Hash algorithm is secure against common attacks and exhibits a satisfactory avalanche effect. It also has some advantages in memory requirements and running speed compared with previous multivariate Hash algorithms.
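
    A minimal sketch of a Merkle-Damgård iteration hardened with a block counter, the generic construction idea described above. SHA-256 stands in for the paper's multivariate-quadratic compression function, so only the chaining structure is illustrated:

        import hashlib

        def compress(state: bytes, block: bytes, counter: int) -> bytes:
            # The counter is absorbed into every call, so no two rounds look
            # identical, which blocks several generic Merkle-Damgard attacks.
            return hashlib.sha256(
                state + block + counter.to_bytes(8, "big")).digest()

        def md_with_counter(message: bytes, block_size: int = 64) -> bytes:
            padded = (message + b"\x80"
                      + b"\x00" * (-(len(message) + 9) % block_size)
                      + len(message).to_bytes(8, "big"))   # length padding
            state = b"\x00" * 32                           # fixed IV
            for i in range(0, len(padded), block_size):
                state = compress(state, padded[i:i + block_size],
                                 i // block_size)
            return state

        print(md_with_counter(b"abc").hex())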

  1. Secure Hashing of Dynamic Hand Signatures Using Wavelet-Fourier Compression with BioPhasor Mixing and Discretization

    Directory of Open Access Journals (Sweden)

    Wai Kuan Yip

    2007-01-01

    Full Text Available We introduce a novel method for secure computation of biometric hash on dynamic hand signatures using BioPhasor mixing and discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific discretization acts both as an error correction step as well as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method could produce stable and distinguishable bit strings, with equal error rates (EERs) of … and … for random and skilled forgeries in the stolen-token (worst-case) scenario, and … for both forgeries in the genuine-token (optimal) scenario.

  2. Refined repetitive sequence searches utilizing a fast hash function and cross species information retrievals

    Directory of Open Access Journals (Sweden)

    Reneker Jeff

    2005-05-01

    Full Text Available Abstract Background Searching for small tandem/disperse repetitive DNA sequences streamlines many biomedical research processes. For instance, whole genomic array analysis in yeast has revealed 22 PHO-regulated genes. The promoter regions of all but one of them contain at least one of the two core Pho4p binding sites, CACGTG and CACGTT. In humans, microsatellites play a role in a number of rare neurodegenerative diseases such as spinocerebellar ataxia type 1 (SCA1. SCA1 is a hereditary neurodegenerative disease caused by an expanded CAG repeat in the coding sequence of the gene. In bacterial pathogens, microsatellites are proposed to regulate expression of some virulence factors. For example, bacteria commonly generate intra-strain diversity through phase variation which is strongly associated with virulence determinants. A recent analysis of the complete sequences of the Helicobacter pylori strains 26695 and J99 has identified 46 putative phase-variable genes among the two genomes through their association with homopolymeric tracts and dinucleotide repeats. Life scientists are increasingly interested in studying the function of small sequences of DNA. However, current search algorithms often generate thousands of matches – most of which are irrelevant to the researcher. Results We present our hash function as well as our search algorithm to locate small sequences of DNA within multiple genomes. Our system applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. We discuss our incorporation of the Gene Ontology (GO database into these algorithms. We conduct an exhaustive time analysis of our system for various repetitive sequence lengths. For instance, a search for eight bases of sequence within 3.224 GBases on 49 different chromosomes takes 1.147 seconds on average. To illustrate the relevance of the search results, we conduct a search with and without added annotation terms for the
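
    In the same spirit as the hash function described above (whose details differ), short DNA words can be packed two bits per base into integer keys so that every occurrence of a small motif is retrieved in constant time per bucket; the following is an illustrative sketch:

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

        def pack(kmer: str) -> int:
            key = 0
            for base in kmer:
                key = (key << 2) | CODE[base]     # 2 bits per base
            return key

        def index_genome(seq: str, k: int = 8):
            table = {}
            for i in range(len(seq) - k + 1):
                table.setdefault(pack(seq[i:i + k]), []).append(i)
            return table

        genome = "CACGTGACGTCACGTTCACGTG"
        idx = index_genome(genome)
        print(idx.get(pack("CACGTGAC"), []))   # start positions of this 8-mer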

  3. Research on and improvement of the one-way Hash function SHA-256

    Institute of Scientific and Technical Information of China (English)

    何润民

    2013-01-01

    This paper studies the SHA-256 hash algorithm, analyzing the algorithm's logic and the construction of the compression function it employs. On this basis, an improved Hash function SHA-256 is designed, and its software implementation is completed using VC++ development tools. Theoretical analysis, together with software-based hash computations on strings and text files and a comparison of the results, verifies that the improved Hash function has better nonlinearity, one-wayness, collision resistance, pseudo-randomness and avalanche effect.

  4. An Incremental Hash Algorithm for Hard Disk Integrity Check

    Institute of Scientific and Technical Information of China (English)

    宋宁楠; 谷大武; 侯方勇

    2009-01-01

    Incremental hash functions possess incrementality and parallelism that traditional iterated hash functions lack, allowing the update time of a data checksum to be proportional to the scale of the modification made to the data. Adopting this idea of incremental verification, this paper designs a hash function for hard disk integrity checking, called iHash. The paper introduces the design of the algorithm, describes its concrete implementation, and proves its security with respect to collision resistance. It then analyzes in detail how the algorithm not only retains the performance advantages of general incremental hash algorithms but also offers new features not previously proposed in the field of incremental hashing. Finally, experimental results comparing the performance of iHash with existing hash functions are presented.
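
    A hedged sketch of the incremental idea attributed to iHash above (the actual construction differs): keep one digest per disk block so that modifying a block rehashes only that block, making update cost proportional to the size of the change. The hash-of-concatenated-digests used to combine them here is purely illustrative:

        import hashlib

        def block_hash(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        class DiskDigest:
            def __init__(self, blocks):
                self.leaf = [block_hash(b) for b in blocks]  # digest per block

            def update_block(self, i: int, data: bytes):
                self.leaf[i] = block_hash(data)   # only one block is rehashed

            def root(self) -> str:
                return hashlib.sha256(b"".join(self.leaf)).hexdigest()

        disk = DiskDigest([b"block0", b"block1", b"block2"])
        before = disk.root()
        disk.update_block(1, b"block1 modified")
        print(before != disk.root())   # True: the checksum reflects the edit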

  5. Comparison and Analysis of Hash Algorithms for Multi-process Load Balancing

    Institute of Scientific and Technical Information of China (English)

    张莹; 吴和生

    2014-01-01

    Hash algorithms play a key role in high-performance multi-process load balancing, but current research on them focuses mainly on algorithm design and domain applications, and few studies analyze and compare the performance of existing hash algorithms. This paper therefore summarizes the features that a hash algorithm for multi-process load balancing should have and, on that basis, screens five mainstream hash algorithms suitable for multi-process load balancing. Theoretical analysis and experimental evaluation of their allocation balance and time consumption provide a basis for selecting and using hash algorithms in multi-process load balancing, and show that the Toeplitz hash algorithm is the most suitable for this purpose.

  6. A Homomorphic Hashing Based Provable Data Possession

    Institute of Scientific and Technical Information of China (English)

    陈兰香

    2011-01-01

    In cloud storage services, users need a way to verify that storage service providers correctly hold (preserve) their data. This paper proposes a Provable Data Possession (PDP) method based on homomorphic hashing. Because of the homomorphism of the hash algorithm, the hash value of the sum of two data blocks equals the product of their hash values. In the setup stage, all data blocks and their hash values are stored. At verification time, the storage server returns the sum of the challenged data blocks together with the product of their hash values; the user computes the hash value of the returned sum and checks that it equals the product, thereby obtaining a proof of possession. Over the data lifecycle, the user can verify an unlimited number of times that the data is correctly held. The method provides integrity protection along with provable data possession. The user only needs to keep a key K of about 520 bytes, the information transferred during verification is only about 18 bits, and each possession check requires just one homomorphic hash computation. A security analysis of the method is provided, and performance tests show that it is feasible.
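
    The multiplicative property the scheme relies on can be demonstrated with a toy exponential hash H(m) = g^m mod p, since g^(m1+m2) = g^m1 * g^m2 mod p; the parameters below are tiny and purely illustrative, while real schemes work in large groups:

        p = 1000003          # small prime, for illustration only
        g = 5                # base

        def H(m: int) -> int:
            return pow(g, m, p)

        m1, m2 = 123456, 654321            # two data blocks as integers
        lhs = H(m1 + m2)                   # hash of the sum of the blocks
        rhs = (H(m1) * H(m2)) % p          # product of the individual hashes
        print(lhs == rhs)                  # True: the homomorphism that the
                                           # PDP verification step depends on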

  7. Performance Analysis of Image Content Identification on Perceptual Hashing

    Institute of Scientific and Technical Information of China (English)

    潘辉; 郑刚; 胡晓惠; 马恒太

    2012-01-01

    Perceptual hashing can effectively distinguish images with different content, and it is commonly used in content identification applications for detecting pirated or duplicate images. Published research on image content identification with perceptual hashing has mainly focused on algorithm design, and very few theoretical methods have been developed for evaluating its performance. Based on the anti-collision property of a class of image perceptual hashing algorithms, this paper introduces a decision model of content identification, defines performance formulas for content identification, and derives statistical distribution functions to serve as performance evaluation metrics. Experimental results show that the proposed approach estimates the performance of content identification on perceptual hashing very well.

  8. HAMA-Based Semi-Supervised Hashing Algorithm

    Institute of Scientific and Technical Information of China (English)

    刘扬; 朱明

    2014-01-01

    In massive data retrieval applications, hashing-based approximate nearest neighbor (ANN) search has become popular due to its computational and memory efficiency for online search. The semi-supervised hashing (SSH) framework minimizes empirical error over the labeled set together with an information-theoretic regularizer over both labeled and unlabeled sets; however, training the hash functions of this framework is slow due to the large-scale, complex training process. HAMA is a top-level Hadoop parallel framework based on the Bulk Synchronous Parallel (BSP) model. In this paper, we analyze the calculation of the adjusted covariance matrix in the SSH training process, split it into two parts, an unsupervised data-variance part and a supervised pairwise-label part, and explore its parallelization. Experiments show good performance and scalability on commodity hardware and a general network environment.

  9. Implementation analysis of RC5 algorithm on Preneel-Govaerts-Vandewalle (PGV) hashing schemes using length extension attack

    Science.gov (United States)

    Siswantyo, Sepha; Susanti, Bety Hayat

    2016-02-01

    Preneel-Govaerts-Vandewalle (PGV) schemes consist of 64 possible single-block-length schemes that can be used to build a hash function from a block cipher. Of those 64 schemes, Preneel claimed that 4 are secure. In this paper, we apply a length extension attack on those 4 secure PGV schemes, instantiated with the RC5 algorithm in their basic construction, to test their collision resistance property. The attack results show that collisions occur in all 4 secure PGV schemes. Based on the analysis, we indicate that the Feistel structure and data-dependent rotation operation in the RC5 algorithm, the XOR operations in the schemes, and the selection of the additional message block value all contribute to the occurrence of collisions.

  10. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    Science.gov (United States)

    Yang, Heng; Wang, Qing; He, Zhoucan

    The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we treat the quantization step as a nearest neighbor search in a large visual vocabulary, and propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to the vocabulary-tree-based methods, which represent the state of the art. Consequently, the object retrieval performance on large-scale databases can be significantly improved by our method.

  11. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming

    2012-06-28

    With rapid advances in the development of DNA sequencing technologies, a plethora of high-throughput genome and proteome data from a diverse spectrum of organisms have been generated. The functional annotation and evolutionary history of proteins are usually inferred from domains predicted from the genome sequences. Traditional database-based domain prediction methods cannot identify novel domains, however, and alignment-based methods, which look for recurring segments in the proteome, are computationally demanding. Here, we propose a novel genome-wide domain prediction method, SECOM. Instead of conducting all-against-all sequence alignment, SECOM first indexes all the proteins in the genome by using a hash seed function. Local similarity can thus be detected and encoded into a graph structure, in which each node represents a protein sequence and each edge weight represents the shared hash seeds between the two nodes. SECOM then formulates the domain prediction problem as an overlapping community-finding problem in this graph. A backward graph percolation algorithm that efficiently identifies the domains is proposed. We tested SECOM on five recently sequenced genomes of aquatic animals. Our tests demonstrated that SECOM was able to identify most of the known domains identified by InterProScan. When compared with the alignment-based method, SECOM showed higher sensitivity in detecting putative novel domains, while it was also three orders of magnitude faster. For example, SECOM was able to predict a novel sponge-specific domain in nucleoside-triphosphatase (NTPases). Furthermore, SECOM discovered two novel domains, likely of bacterial origin, that are taxonomically restricted to sea anemone and hydra. SECOM is an open-source program and available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2012 Fan et al.

  12. Research on Application of Linear Hash in Full-text Retrieval

    Institute of Scientific and Technical Information of China (English)

    束文杰; 时亚南; 于国欣

    2015-01-01

    A hash table is a common data structure that can, in theory, execute queries in constant time complexity O(1), so it is widely used in computing. When large numbers of concurrent users request data from a full-text retrieval system, the system responds slowly and retrieves inefficiently. To solve these problems, this paper introduces a dynamic hashing technique, linear hashing. Combined with the actual needs of a full-text retrieval system, it proposes a method for building a block-structured inverted index on linear hashing, and elaborates the index structure, storage layout, design ideas and implementation details of the linear hash index. Extensive experimental tests show that the inverted index based on linear hashing has an extremely fast response speed and significantly improves full-text query performance.
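
    A minimal linear hashing sketch (independent of the paper's index layout): the table grows one bucket at a time by splitting the bucket at the split pointer, so new postings can be absorbed under load without a global rehash:

        class LinearHashTable:
            def __init__(self, initial_buckets=4, max_load=2.0):
                self.n0 = initial_buckets    # buckets at the round's start
                self.split = 0               # next bucket to split
                self.buckets = [[] for _ in range(initial_buckets)]
                self.max_load = max_load
                self.count = 0

            def _addr(self, key):
                h = hash(key) % self.n0
                if h < self.split:           # already-split buckets address
                    h = hash(key) % (2 * self.n0)  # the doubled space
                return h

            def insert(self, key, value):
                self.buckets[self._addr(key)].append((key, value))
                self.count += 1
                if self.count / len(self.buckets) > self.max_load:
                    self._split_one()

            def _split_one(self):
                old = self.buckets[self.split]
                self.buckets[self.split] = []
                self.buckets.append([])
                self.split += 1
                if self.split == self.n0:    # round done: space has doubled
                    self.n0 *= 2
                    self.split = 0
                for k, v in old:             # re-place only the split bucket
                    self.buckets[self._addr(k)].append((k, v))

            def lookup(self, key):
                return [v for k, v in self.buckets[self._addr(key)]
                        if k == key]

        t = LinearHashTable()
        for doc_id, term in enumerate(["hash", "index", "hash", "query"] * 5):
            t.insert(term, doc_id)
        print(t.lookup("hash"))   # posting list for the term "hash"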

  13. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pedersen, David Sloth; Pisani, Camilla

    2016-01-01

    … of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes of the implied volatility of variance smile, some clearly at odds with the upward-sloping volatility skew observed in variance markets.

  14. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    … of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes of the implied volatility of variance smile, some clearly at odds with the upward-sloping volatility skew observed in variance markets.

  15. Implied and Realized Volatility in the Cross-Section of Equity Options

    DEFF Research Database (Denmark)

    Ammann, Manuel; Skovmand, David; Verhofen, Michael

    2009-01-01

    Using a complete sample of US equity options, we analyze patterns of implied volatility in the cross-section of equity options with respect to stock characteristics. We find that high-beta stocks, small stocks, stocks with a low market-to-book ratio, and non-momentum stocks trade at higher implied volatilities after controlling for historical volatility. We find evidence that implied volatility overestimates realized volatility for low-beta stocks, small caps, low market-to-book stocks, and stocks with no momentum and vice versa. However, we cannot reject the null hypothesis that implied volatility …

  16. Implied and Realized Volatility in the Cross-Section of Equity Options

    DEFF Research Database (Denmark)

    Ammann, Manuel; Skovmand, David; Verhofen, Michael

    2009-01-01

    Using a complete sample of US equity options, we analyze patterns of implied volatility in the cross-section of equity options with respect to stock characteristics. We find that high-beta stocks, small stocks, stocks with a low market-to-book ratio, and non-momentum stocks trade at higher implied volatilities after controlling for historical volatility. We find evidence that implied volatility overestimates realized volatility for low-beta stocks, small caps, low market-to-book stocks, and stocks with no momentum and vice versa. However, we cannot reject the null hypothesis that implied volatility …

  17. Adaptation to real motion reveals direction-selective interactions between real and implied motion processing.

    Science.gov (United States)

    Lorteije, Jeannette A M; Kenemans, J Leon; Jellema, Tjeerd; van der Lubbe, Rob H J; Lommers, Marjolein W; van Wezel, Richard J A

    2007-08-01

    Viewing static pictures of running humans evokes neural activity in the dorsal motion-sensitive cortex. To establish whether this response arises from direction-selective neurons that are also involved in real motion processing, we measured the visually evoked potential to implied motion following adaptation to static or moving random dot patterns. The implied motion response was defined as the difference between evoked potentials to pictures with and without implied motion. Interaction between real and implied motion was found as a modulation of this difference response by the preceding motion adaptation. The amplitude of the implied motion response was significantly reduced after adaptation to motion in the same direction as the implied motion, compared to motion in the opposite direction. At 280 msec after stimulus onset, the average difference in amplitude reduction between opposite and same adapted direction was 0.5 μV on an average implied motion amplitude of 2.0 μV. These results indicate that the response to implied motion arises from direction-selective motion-sensitive neurons. This is consistent with interactions between real and implied motion processing at a neuronal level.

  18. Research on Database Query Technology Based on Hash Function

    Institute of Scientific and Technical Information of China (English)

    贾丹; 佟玉军; 陈文实

    2012-01-01

    The commonly used hash functions and the shortcomings of traditional database search methods are analyzed, and a new idea is proposed that combines hash functions with database design to locate the sought record directly, thereby improving search efficiency.

  19. Redundancy of Distributed Cache Data Based on Consistent Hash Algorithm

    Institute of Scientific and Technical Information of China (English)

    李宁

    2016-01-01

    To optimize the data caching mechanism of large distributed websites, this paper proposes a cache data redundancy mechanism based on a consistent hash algorithm. The performance of different hash functions is analyzed so that data can be distributed evenly over the nodes of the hash ring, and binary search is used to set and get cached data on the master and slave hash rings respectively. Local testing and analysis show that this redundancy mechanism clearly outperforms direct database reads and single-machine caching, and that in a distributed system it effectively reduces the performance loss caused by redundancy operations. It thus improves the robustness and stability of the site and provides a new approach to the design of highly concurrent, distributed cache systems.
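
    A short sketch of the consistent-hashing lookup involved: node positions (with virtual nodes for evenness) live sorted on a ring, and binary search finds the first node clockwise of a key's hash; a slave ring for redundancy would simply be a second instance of the same structure:

        import bisect, hashlib

        class HashRing:
            def __init__(self, nodes, vnodes=100):
                self.ring = sorted((self._h(f"{n}#{i}"), n)
                                   for n in nodes for i in range(vnodes))
                self.keys = [h for h, _ in self.ring]

            @staticmethod
            def _h(s: str) -> int:
                return int(hashlib.md5(s.encode()).hexdigest(), 16)

            def node_for(self, key: str) -> str:
                i = bisect.bisect(self.keys, self._h(key)) % len(self.keys)
                return self.ring[i][1]

        master = HashRing(["cache-a", "cache-b", "cache-c"])
        print(master.node_for("user:1001"))   # a key always maps to one node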

  20. Image Hash based on discrete curvelet transform

    Institute of Scientific and Technical Information of China (English)

    徐文娟; 易波

    2011-01-01

    To improve the robustness of image hashing, a new image hash algorithm based on the discrete curvelet transform is proposed. The image is first preprocessed and then decomposed with a fast discrete curvelet transform via wrapping. The low-frequency curvelet coefficients, which contain the main features of the image, and the second-level detail coefficients, which contain rich edge information, are extracted as the feature vector. The logistic equation is then used to encrypt the feature vector, and the image hash sequence is finally obtained by quantization and compression. Experimental results show that the algorithm is more robust than existing traditional algorithms, effectively distinguishes different images, and is fragile to content changes; the introduction of a chaotic system makes the algorithm secure.

  1. A Contribution to Secure the Routing Protocol "Greedy Perimeter Stateless Routing" Using a Symmetric Signature-Based AES and MD5 Hash

    CERN Document Server

    Erritali, Mohammed; Ouahidi, Bouabid El; 10.5121/ijdps.2011.2509

    2011-01-01

    This work presents a contribution to securing the routing protocol GPSR (Greedy Perimeter Stateless Routing) for vehicular ad hoc networks. We examine the possible attacks against GPSR and the security solutions proposed by different research teams working on ad hoc network security. We then propose a solution to secure GPSR packets by adding a digital signature based on symmetric cryptography, generated using the AES algorithm and the MD5 hash function, which is better suited to a mobile environment.

  2. "It's All Coming Together": An Encounter between Implied Reader and Actual Reader in the Australian Rainforest

    Science.gov (United States)

    Williams, Sandra J.

    2008-01-01

    In this paper I discuss how taking a particular literary theory--the implied reader--serves to offer a focus for the teacher's initial reading of a text and provides a formative assessment tool. Iser's Implied Reader theory is discussed, after which a picture book, "Where the Forest Meets the Sea" by Jeannie Baker, is analysed from this…

  3. Fractional Black–Scholes option pricing, volatility calibration and implied Hurst exponents in South African context

    Directory of Open Access Journals (Sweden)

    Emlyn Flint

    2017-03-01

    Full Text Available Background: Contingent claims on underlying assets are typically priced under a framework that assumes, inter alia, that the log returns of the underlying asset are normally distributed. However, many researchers have shown that this assumption is violated in practice. Such violations include the statistical properties of heavy tails, volatility clustering, leptokurtosis and long memory. This paper considers the pricing of contingent claims when the underlying is assumed to display long memory, an issue that has heretofore not received much attention. Aim: We address several theoretical and practical issues in option pricing and implied volatility calibration in a fractional Black–Scholes market. We introduce a novel eight-parameter fractional Black–Scholes-inspired (FBSI model for the implied volatility surface, and consider in depth the issue of calibration. One of the main benefits of such a model is that it allows one to decompose implied volatility into an independent long-memory component – captured by an implied Hurst exponent – and a conditional implied volatility component. Such a decomposition has useful applications in the areas of derivatives trading, risk management, delta hedging and dynamic asset allocation. Setting: The proposed FBSI volatility model is calibrated to South African equity index options data as well as South African Rand/American Dollar currency options data. However, given the focus on the theoretical development of the model, the results in this paper are applicable across all financial markets. Methods: The FBSI model essentially combines a deterministic function form of the 1-year implied volatility skew with a separate deterministic function for the implied Hurst exponent, thus allowing one to model both observed implied volatility surfaces as well as decompose them into independent volatility and long-memory components respectively. Calibration of the model makes use of a quasi-explicit weighted

  4. Improved one-way hash chain and revocation polynomial-based self-healing group key distribution schemes in resource-constrained wireless networks.

    Science.gov (United States)

    Chen, Huifang; Xie, Lei

    2014-12-18

    Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked.

  5. Improved One-Way Hash Chain and Revocation Polynomial-Based Self-Healing Group Key Distribution Schemes in Resource-Constrained Wireless Networks

    Directory of Open Access Journals (Sweden)

    Huifang Chen

    2014-12-01

    Full Text Available Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked.
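
    A brief sketch of the one-way hash chain (OHC) primitive these schemes build on: keys are released in reverse order of generation, so a user holding session j's key can hash forward to recover earlier session keys but can never derive later ones:

        import hashlib

        def h(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def build_chain(seed: bytes, n: int):
            chain = [seed]                    # chain[i] = h^i(seed)
            for _ in range(n - 1):
                chain.append(h(chain[-1]))
            return chain                      # chain[-1] is released first

        n = 5
        chain = build_chain(b"group-manager-secret", n)
        key_session_3 = chain[n - 3]          # session j receives chain[n - j]
        # Hashing forward recovers the keys of sessions 2 and 1 (self-healing);
        # sessions 4 and 5 remain out of reach without new broadcasts.
        print(h(key_session_3) == chain[n - 2])   # True: session 2's key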

  6. Perceptual Hashing Algorithm for Multi-Format Audio

    Institute of Scientific and Technical Information of China (English)

    张秋余; 省鹏飞; 黄羿博; 董瑞洪; 杨仲平

    2016-01-01

    A novel multi-format audio perceptual hashing algorithm based on the dual-tree complex wavelet transform (DT-CWT) is proposed. It solves the problems of existing audio authentication algorithms, namely that they handle only a single audio format, are not generic, and have low efficiency. The proposed algorithm first applies the global DT-CWT to the preprocessed audio signal to obtain its real and complex wavelet coefficients, which are then partitioned into the same number of frames. For the real wavelet coefficients, the modulus of the Teager energy operator of each frame is computed as the inter-frame feature, and each frame is further subdivided, with the short-time energy of the sub-frames extracted as the intra-frame feature. For the complex wavelet coefficients, the entropy of each frame is computed as the inter-frame feature. Finally, a hash construction process is applied to the above features to generate the perceptual hash sequence. Experiments show that the algorithm is strongly robust for five different audio formats, discriminates well, is efficient, and supports small-range tamper detection.

  7. Cryptanalysis of Hash Functions Based on Chaotic Systems

    Institute of Scientific and Technical Information of China (English)

    谭雪; 周琥; 王世红

    2016-01-01

    With the development of modern cryptology, hash functions play an increasingly important role. In this paper, we analyse the security of two hash algorithms: a parallel hash function construction based on a coupled map lattice, and a keyed serial hash function based on a dynamic lookup table. For the former, we find that the coupled map lattice introduces a structural defect in the algorithm: when the block index and block message satisfy a specific constraint, the intermediate hash value for that block index and block message can be given directly, without complicated computation. For the latter, we analyse the constraint condition on the buffer state under which a collision is produced. Under this condition, the cost of finding output collisions of the algorithm is O(2^100), much higher than that of a birthday attack.

  8. Dynamic co-movements of stock market returns, implied volatility and policy uncertainty

    OpenAIRE

    Antonakakis, N.; Chatziantoniou, I.; Filis, George

    2013-01-01

    We examine time-varying correlations among stock market returns, implied volatility and policy uncertainty. Our findings suggest that correlations are indeed time-varying and sensitive to oil demand shocks and US recessions. Highlights: We examine dynamic correlations of stock market returns, implied volatility and policy uncertainty. Dynamic correlations reveal heterogeneous patterns during US recessions. Aggregate demand oil price shocks and US recessions affect dynamic correlations. A rise...

  9. Level Shifts in Volatility and the Implied-Realized Volatility Relation

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; de Magistris, Paolo Santucci

    … to the multivariate case of the univariate level shift technique by Lu and Perron (2008). An application to the S&P500 index and a simulation experiment show that the recently documented empirical properties of strong persistence in volatility and forecastability of future realized volatility from current implied volatility, which have been interpreted as long memory (or fractional integration) in volatility and fractional cointegration between implied and realized volatility, are accounted for by occasional common level shifts.

  10. Market-implied risk-neutral probabilities, actual probabilities, credit risk and news

    Directory of Open Access Journals (Sweden)

    Shashidhar Murthy

    2011-09-01

    Full Text Available Motivated by the credit crisis, this paper investigates links between risk-neutral probabilities of default implied by markets (e.g. from yield spreads) and their actual counterparts (e.g. from ratings). It discusses differences between the two and clarifies the underlying economic intuition using simple representations of credit risk pricing. Observed large differences across bonds in the ratio of the two probabilities are shown to imply that apparently safer securities can be more sensitive to news.
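
    As a hedged illustration of the market-implied side (a textbook reduced-form approximation, not a formula from the paper): with recovery rate R and credit spread s, the risk-neutral default intensity is roughly s / (1 - R), from which a horizon-T default probability follows:

        import math

        def risk_neutral_pd(spread: float, recovery: float, T: float) -> float:
            lam = spread / (1.0 - recovery)    # implied default intensity
            return 1.0 - math.exp(-lam * T)    # P(default before T)

        # 300 bp spread, 40% recovery, 5-year horizon (illustrative numbers)
        print(f"{risk_neutral_pd(0.03, 0.40, 5.0):.1%}")   # about 22.1%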

  11. An improved three party authenticated key exchange protocol using hash function and elliptic curve cryptography for mobile-commerce environments

    Directory of Open Access Journals (Sweden)

    S.K. Hafizul Islam

    2017-07-01

    Full Text Available In the literature, many three-party authenticated key exchange (3PAKE) protocols have been put forward to establish a secure session key between two users with the help of a trusted server. The computed session key ensures secure message exchange between the users over any insecure communication network. In this paper, we identify some deficiencies in Tan's 3PAKE protocol and then devise an improved 3PAKE protocol, without symmetric key en/decryption techniques, for mobile-commerce environments. The proposed protocol is based on elliptic curve cryptography and a one-way cryptographic hash function. To validate the security of the proposed 3PAKE protocol, we used the widely accepted AVISPA software, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. The proposed protocol is not only secure according to the AVISPA analysis, but also secure against numerous other relevant attacks such as the impersonation attack, parallel attack, and key-compromise impersonation attack. In addition, our protocol is designed with lower computation cost than other relevant protocols. Therefore, the proposed protocol is more efficient and suitable for practical use in mobile-commerce environments than other protocols.

  12. Employing Ontology-Alignment and Locality-Sensitive Hashing to Improve Attribute Interoperability in Federated eID Systems

    Directory of Open Access Journals (Sweden)

    Walter Priesnitz Filho

    2016-10-01

    Full Text Available Achieving interoperability, i.e. creating identity federations between different electronic identity (eID) systems, has gained relevance throughout the past years. A serious problem of identity federations is the missing harmonization between various attribute providers (APs). In closed eID systems, ontologies allow a higher degree of automation in the process of aligning and aggregating attributes from different APs. This approach does not work for identity federations, as each eID system uses its own ontology to represent its attributes. Furthermore, providing attributes to the intermediate entities required to align and aggregate attributes potentially violates privacy rules. To tackle these problems, we propose the use of combined ontology-alignment (OA) approaches and locality-sensitive hashing (LSH) functions. We assess existing implementations of these concepts, defining and using criteria specific to identity federations. The obtained results confirm that proper implementations of these concepts exist and that they can be used to achieve interoperability between eID systems at the attribute level. A prototype is implemented showing that combining the two assessment winners (AlignAPI for ontology alignment and Nilsimsa for LSH functions) achieves interoperability between eID systems. In addition, the improvement obtained in the alignment process by combining the two assessment winners does not negatively impact the privacy of the user's data, since no clear-text data is exchanged in the alignment process.

  13. Motor mapping of implied actions during perception of emotional body language.

    Science.gov (United States)

    Borgomaneri, Sara; Gazzola, Valeria; Avenanti, Alessio

    2012-04-01

    Perceiving and understanding emotional cues is critical for survival. Using the International Affective Picture System (IAPS) previous TMS studies have found that watching humans in emotional pictures increases motor excitability relative to seeing landscapes or household objects, suggesting that emotional cues may prime the body for action. Here we tested whether motor facilitation to emotional pictures may reflect the simulation of the human motor behavior implied in the pictures occurring independently of its emotional valence. Motor-evoked potentials (MEPs) to single-pulse TMS of the left motor cortex were recorded from hand muscles during observation and categorization of emotional and neutral pictures. In experiment 1 participants watched neutral, positive and negative IAPS stimuli, while in experiment 2, they watched pictures depicting human emotional (joyful, fearful), neutral body movements and neutral static postures. Experiment 1 confirms the increase in excitability for emotional IAPS stimuli found in previous research and shows, however, that more implied motion is perceived in emotional relative to neutral scenes. Experiment 2 shows that motor excitability and implied motion scores for emotional and neutral body actions were comparable and greater than for static body postures. In keeping with embodied simulation theories, motor response to emotional pictures may reflect the simulation of the action implied in the emotional scenes. Action simulation may occur independently of whether the observed implied action carries emotional or neutral meanings. Our study suggests the need of controlling implied motion when exploring motor response to emotional pictures of humans. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Modeling the Implied Volatility Surface: A Study for S&P 500 Index Options

    Directory of Open Access Journals (Sweden)

    Jin Zheng

    2013-02-01

    Full Text Available The aim of this study is to demonstrate a framework for modeling the implied volatilities of S&P 500 index options and estimating the implied volatilities of stock prices using stochastic processes. In this paper, three models are established to examine whether implied volatilities are constant during the whole life of an option. We mainly concentrate on the Black-Scholes and Dumas option models and make empirical comparisons. By observing the daily-recorded data of the S&P 500 index, we study the volatility model and the volatility surface. Results from numerical experiments show that implied volatilities are determined by moneyness rather than being constant. As one of its applications, our research is of particular value for forecasting stock market shocks and crises.
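
    The basic operation behind such a study is backing implied volatility out of observed option prices; a compact sketch by bisection on the Black-Scholes price of a European call (no dividends) is given below:

        import math

        def norm_cdf(x):
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        def bs_call(S, K, T, r, sigma):
            d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) \
                 / (sigma * math.sqrt(T))
            d2 = d1 - sigma * math.sqrt(T)
            return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

        def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
            while hi - lo > tol:               # bs_call increases in sigma
                mid = 0.5 * (lo + hi)
                if bs_call(S, K, T, r, mid) < price:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        price = bs_call(100, 105, 0.5, 0.01, 0.25)   # price with known vol
        print(round(implied_vol(price, 100, 105, 0.5, 0.01), 4))   # 0.25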

  15. Image Hashing algorithm based on stacked autoencoder

    Institute of Scientific and Technical Information of China (English)

    张春雨; 韩立新; 徐守晶

    2016-01-01

    With the rapid growth of images on the Web, hashing algorithms have attracted interest as approximate nearest neighbor methods for large-scale image retrieval systems. In this paper, we propose a hashing algorithm based on deep models, called deep hash. High-dimensional global features are extracted by a deep convolutional neural network, and a stacked autoencoder learns binary hash codes from these features by unsupervised learning; the parameters of the stacked autoencoder are then fine-tuned using the semantic similarity of image labels, and the Hamming distance is used to compute the similarity between images. The proposed deep hash achieves good results in image retrieval.

  16. Design and performance analysis of parallel chaos-based hash function with changeable parameter

    Institute of Scientific and Technical Information of China (English)

    冯艳茹; 李艳涛; 肖迪

    2011-01-01

    This paper proposes a parallel chaos-based hash function construction with a changeable parameter, built on a chaotic piecewise linear map. Parallelism is achieved through a message expansion that links the elements of the plaintext message matrix to be processed in parallel. Intermediate hash values are generated by iterating the chaotic piecewise linear map, whose changeable parameter is decided by the position index of each matrix element, with the element's corresponding ASCII code value used as the iteration count of the map. The final 128-bit hash value is obtained by XORing the intermediate hash values. Simulation results indicate that the algorithm has the characteristics of one-wayness, confusion and diffusion, and collision resistance, and can satisfy the various performance requirements of a one-way hash function.
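
    A toy sketch of the per-block structure described above, with an assumed parameter schedule; the paper's message expansion and 128-bit extraction are more careful, so this only shows how position-dependent parameters, ASCII-driven iteration counts and XOR combination fit together:

        def pwl_map(x: float, p: float) -> float:
            # Piecewise linear (tent-like) chaotic map with parameter p
            return x / p if x < p else (1.0 - x) / (1.0 - p)

        def block_value(position: int, byte: int) -> int:
            p = 0.1 + 0.8 * ((position % 97) / 97.0)  # parameter from index
            x = 0.5
            for _ in range(byte + 64):       # ASCII value sets iteration count
                x = pwl_map(x, p)
            return int(x * (2 ** 32 - 1))    # 32-bit slice of the orbit

        def chaos_hash(message: bytes) -> int:
            digest = 0
            for i, b in enumerate(message):  # blocks are independent, so this
                digest ^= block_value(i, b)  # loop could run in parallel
            return digest

        print(hex(chaos_hash(b"hello hash")))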

  17. Does “quorum sensing” imply a new type of biological information?

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio

    2002-01-01

    of biological information implied by genetic information with that implied in the concept of “quorum sensing” (which refers to a prokaryotic cell-to-cell communication system) in order to explore if such integration is being achieved. I use the Lux operon paradigm and the Vibrio fischeri – Euprymna scolopes...... the different emergent levels. I also emphasise that the realisation of biology as being a “science of sensing” and the new importance that is being ascribed to the “context” in experimental biology corroborate past claims of biosemioticians about a shift from a focus on information (as a material agent...

  18. BCL::EM-Fit: rigid body fitting of atomic structures into density maps using geometric hashing and real space refinement.

    Science.gov (United States)

    Woetzel, Nils; Lindert, Steffen; Stewart, Phoebe L; Meiler, Jens

    2011-09-01

    Cryo-electron microscopy (cryoEM) can visualize large macromolecular assemblies at resolutions often below 10Å and recently as good as 3.8-4.5 Å. These density maps provide important insights into the biological functioning of molecular machineries such as viruses or the ribosome, in particular if atomic-resolution crystal structures or models of individual components of the assembly can be placed into the density map. The present work introduces a novel algorithm termed BCL::EM-Fit that accurately fits atomic-detail structural models into medium resolution density maps. In an initial step, a "geometric hashing" algorithm provides a short list of likely placements. In a follow up Monte Carlo/Metropolis refinement step, the initial placements are optimized by their cross correlation coefficient. The resolution of density maps for a reliable fit was determined to be 10 Å or better using tests with simulated density maps. The algorithm was applied to fitting of capsid proteins into an experimental cryoEM density map of human adenovirus at a resolution of 6.8 and 9.0 Å, and fitting of the GroEL protein at 5.4 Å. In the process, the handedness of the cryoEM density map was unambiguously identified. The BCL::EM-Fit algorithm offers an alternative to the established Fourier/Real space fitting programs. BCL::EM-Fit is free for academic use and available from a web server or as downloadable binary file at http://www.meilerlab.org.

  19. Hash Dijkstra Algorithm for Approximate Minimal Spanning Tree

    Institute of Scientific and Technical Information of China (English)

    李玉鑑; 李厚君

    2011-01-01

    To overcome the low efficiency of the Dijkstra (DK) algorithm in constructing minimal spanning trees (MSTs) for large-scale datasets, this paper uses locality-sensitive hashing (LSH) to design a fast approximate algorithm, the LSHDK algorithm, to build MSTs in Euclidean space. The LSHDK algorithm achieves a faster speed with small error by reducing the computation spent searching for nearest points. Computational experiments show that it runs faster than the DK algorithm on datasets of more than 50,000 points, while the error of the resulting approximate MST is very small (0.00-0.05%) in low dimensions and generally between 0.1% and 3.0% in high dimensions.

  20. Research on incremental extraction based on MD5 and hash algorithms

    Institute of Scientific and Technical Information of China (English)

    郭亮; 杨金民

    2014-01-01

    To achieve rapid incremental extraction from a database, and based on an analysis of traditional incremental extraction methods, this paper proposes an algorithm that blends MD5 into a linear hash scan to obtain the increment. Each database record can be viewed as a character string: a hash table is generated from the backup records, and the original records are probed against this hash table, so a single linear scan yields the increment while the number of comparisons is reduced. Meanwhile, the MD5 algorithm is used to generate a fingerprint of each record, which shortens the strings involved in each hash computation and comparison and improves efficiency. The algorithm was tested on an ORACLE database, and the results show that its computational efficiency is greatly improved over the traditional approach.
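
    A sketch of the fingerprint-and-probe idea (records modeled as plain strings, which is an assumption): MD5 digests of the backup records populate a hash set, and one linear scan of the current records yields the increment without pairwise string comparison:

        import hashlib

        def fingerprint(record: str) -> bytes:
            return hashlib.md5(record.encode("utf-8")).digest()

        def increment(current, backup):
            backed_up = {fingerprint(r) for r in backup}  # digest hash table
            return [r for r in current if fingerprint(r) not in backed_up]

        backup = ["1|alice|42", "2|bob|37"]
        current = ["1|alice|42", "2|bob|38", "3|carol|29"]  # update + insert
        print(increment(current, backup))   # ['2|bob|38', '3|carol|29']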

  1. Implied Volatility of Interest Rate Options: An Empirical Investigation of the Market Model

    DEFF Research Database (Denmark)

    Christiansen, Charlotte; Hansen, Charlotte Strunk

    2002-01-01

    We analyze the empirical properties of the volatility implied in options on the 13-week US Treasury bill rate. These options have not been studied previously. It is shown that a European style put option on the interest rate is equivalent to a call option on a zero-coupon bond. We apply the LIBOR...

  2. 77 FR 2056 - Merrimac Paper Company, Inc.; Notice of Termination of License by Implied Surrender and...

    Science.gov (United States)

    2012-01-13

    ... Energy Regulatory Commission Merrimac Paper Company, Inc.; Notice of Termination of License by Implied... Surrender. b. Project No.: 2928-007. c. Date Initiated: January 06, 2012. d. Licensee: Merrimac Paper... documents may be filed electronically via the Internet in lieu of paper. See 18 CFR 385.2001(a)(1)(iii) and...

  3. An Assessment of Behavioural Variables Implied in Teamwork: An Experience with Engineering Students of Zaragoza University

    Science.gov (United States)

    Fernandez, Juan Luis Cano; Lopez, Ivan Lidon; Rubio, Ruben Rebollar; Marco, Fernando Gimeno

    2009-01-01

    This paper presents a study of behavioural variables implied in the working dynamics of student groups undertaking their first project. The study was carried out in two phases. During the first phase, the participants answered a survey of open questions regarding their own behaviour and that of their teammates, questions related to: the quality of…

  5. Asymptotic Nilpotency Implies Nilpotency in Cellular Automata on the d-Dimensional Full Shift

    CERN Document Server

    Salo, Ville

    2012-01-01

    We prove a conjecture in [3] by showing that cellular automata that eventually fix all cells to a fixed symbol 0 are nilpotent on $S^{\mathbb{Z}^d}$ for all d. We also briefly discuss nilpotency on other subshifts, and show that weak nilpotency implies nilpotency in all subshifts and all dimensions, since we do not know a published reference for this.

  6. Implied Motion Activation in Cortical Area MT Can Be Explained by Visual Low-level Features

    NARCIS (Netherlands)

    Lorteije, Jeannette A.M.; Barraclough, Nick E.; Jellema, Tjeerd; Raemaekers, Mathijs; Duijnhouwer, Jacob; Xiao, Dengke; Oram, Mike W.; Lankheet, Martin J.M.; Perrett, David I.; van Wezel, Richard Jack Anton

    To investigate form-related activity in motion-sensitive cortical areas, we recorded cell responses to animate implied motion in macaque middle temporal (MT) and medial superior temporal (MST) cortex and investigated these areas using fMRI in humans. In the single-cell studies, we compared responses

  7. Adaptation to real motion reveals direction-selective interactions between real and implied motion processing

    NARCIS (Netherlands)

    Lorteije, J.A.M.; Kenemans, J.L.; Jellema, T.; Lubbe, R.H.J. van der; Lommers, M.W.; Wezel, R.J.A. van

    2007-01-01

    Viewing static pictures of running humans evokes neural activity in the dorsal motion-sensitive cortex. To establish whether this response arises from direction-selective neurons that are also involved in real motion processing, we measured the visually evoked potential to implied motion following a

  8. Delayed response to animate implied motion in human motion processing areas

    NARCIS (Netherlands)

    Lorteije, J.A.M.; Kenemans, J.L.; Jellema, T.; Lubbe, R.H.J. van der; Heer, F. de; Wezel, R.J.A. van

    2006-01-01

    Viewing static photographs of objects in motion evokes higher fMRI activation in the human medial temporal complex (MT+) than looking at similar photographs without this implied motion. As MT+ is traditionally thought to be involved in motion perception (and not in form perception), this finding sug

  11. Application of the Bisection and Newton-Raphson Algorithms in Estimating Implied Volatility

    Directory of Open Access Journals (Sweden)

    Komang Dharmawan

    2012-11-01

    Volatility is a measure of how far a stock price moves over a given period; it can also be interpreted as the percentage standard deviation of the daily price changes of a stock. According to the theory developed by Black and Scholes in 1973, all option prices on the same underlying asset and with the same time to maturity, but with different exercise prices, have the same implied volatility. The Black-Scholes model can therefore be used to estimate the implied volatility of a stock by solving the inverse of the Black-Scholes equation numerically. This paper demonstrates how to compute the implied volatility of a stock by assuming that the Black-Scholes model holds and that option contracts with the same maturity have the same price. Using option price data for Sony Corporation (SNE), Cisco Systems, Inc (CSCO), and Canon, Inc (CNJ), it is found that implied volatility yields cheaper prices than option prices computed from historical volatility. Moreover, the iteration results show that the Newton-Raphson method converges faster than the bisection method.
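
    Since this record turns on the two root-finding methods, here is a minimal self-contained Python sketch (standard Black-Scholes call pricing; the strike, rate, and maturity values are arbitrary illustrations) recovering implied volatility by bisection and by Newton-Raphson via the vega:

        import math

        def bs_call(S, K, T, r, sigma):
            # Black-Scholes price of a European call.
            d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
            d2 = d1 - sigma * math.sqrt(T)
            N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
            return S * N(d1) - K * math.exp(-r * T) * N(d2)

        def vega(S, K, T, r, sigma):
            d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
            return S * math.sqrt(T) * math.exp(-0.5 * d1**2) / math.sqrt(2 * math.pi)

        def implied_vol_bisection(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
            while hi - lo > tol:              # call price is increasing in sigma
                mid = 0.5 * (lo + hi)
                if bs_call(S, K, T, r, mid) < price:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        def implied_vol_newton(price, S, K, T, r, sigma=0.2, tol=1e-8):
            for _ in range(100):
                diff = bs_call(S, K, T, r, sigma) - price
                if abs(diff) < tol:
                    break
                sigma -= diff / vega(S, K, T, r, sigma)   # Newton step
            return sigma

        p = bs_call(100, 100, 0.5, 0.01, 0.3)                 # synthetic market price
        print(implied_vol_bisection(p, 100, 100, 0.5, 0.01))  # ~0.3
        print(implied_vol_newton(p, 100, 100, 0.5, 0.01))     # ~0.3, fewer iterations

    Newton-Raphson typically converges in a handful of iterations where bisection needs dozens, consistent with the record's conclusion.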

  12. An improved hash-based RFID security authentication algorithm

    Institute of Scientific and Technical Information of China (English)

    王旭宇; 景凤宣; 王雨晴

    2014-01-01

    针对使用无线射频识别技术(RFID)进行认证时存在的安全问题,提出了一种结合Hash函数与时间戳技术的认证协议。将标签的标识和时间戳数据通过Hash函数进行加密传输并进行认证。通过BAN逻辑证明和建立协议的Petri网模型仿真实验证明了该协议具有良好的前向安全性,能有效防止重放、位置跟踪、非法访问等攻击。%To settle the potential security problems during the authentication of radio frequency identification,an authen-tication protocol combined with Hash function and time stamp was proposed.The tag’s identification and time stamp data were encrypted and transmitted through the Hash function,when they were used to authenticate.The ban logic proof and the simulative experiment of established Petri model showe the protocol has good forward security and can prevent replay,location tracking,illegal reading and other illegal attacks.

  13. IPv6 Routing Lookup Algorithm Based on Hash and CAM

    Institute of Scientific and Technical Information of China (English)

    王瑞青; 杜慧敏; 王亚刚

    2012-01-01

    By analyzing the prefix-length distribution and the growth trend of IPv6 routing tables in real networks, this paper presents an IPv6 routing lookup algorithm based on hashing and Content Addressable Memory (CAM). Prefixes whose length is divisible by 8 are stored in 8 hash tables; prefixes that cause hash collisions are stored in the CAM, and the remaining prefixes are stored in random access memory (RAM) in a specific organization. Analysis shows that the algorithm achieves high storage utilization, lookup rate, and update rate, and that it is easy to scale and to implement in hardware.
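
    A minimal Python sketch of the hash-table tier (the CAM and RAM tiers are approximated here by ordinary dicts, an assumption for illustration): one table per prefix length divisible by 8, probed from longest to shortest to find the longest matching prefix.

        import ipaddress

        class Ipv6HashLookup:
            def __init__(self):
                # One hash table per prefix length divisible by 8 (8, 16, ..., 64).
                self.tables = {plen: {} for plen in range(8, 65, 8)}

            def insert(self, prefix, next_hop):
                net = ipaddress.ip_network(prefix)
                key = int(net.network_address) >> (128 - net.prefixlen)
                self.tables[net.prefixlen][key] = next_hop  # collisions go to CAM in hardware

            def lookup(self, addr):
                a = int(ipaddress.ip_address(addr))
                for plen in sorted(self.tables, reverse=True):  # longest prefix first
                    hit = self.tables[plen].get(a >> (128 - plen))
                    if hit is not None:
                        return hit
                return None

        rt = Ipv6HashLookup()
        rt.insert("2001:db8::/32", "if0")
        rt.insert("2001:db8:aa00::/40", "if1")
        print(rt.lookup("2001:db8:aa00::1"))  # if1 (longer match wins)
        print(rt.lookup("2001:db8:1::1"))     # if0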

  14. Key Pre-Distribution Management Scheme for Wireless Sensor Based on Hash

    Institute of Scientific and Technical Information of China (English)

    余嘉; 许可; 彭文兵

    2011-01-01

    A key pre-distribution and management scheme for wireless sensor networks is proposed, based on a hash constructed from a lookup table. The scheme dynamically generates the common keys for node-to-node communication using a lookup-table encryption algorithm that can encrypt and hash the text at the same time. To improve the connectivity and survivability of the network and to save storage, a clustered sensor network structure is introduced. Theoretical analysis and computer simulation indicate that the scheme improves security, connectivity, and resistance to node capture, and uses memory effectively.

  15. A Quick Algorithm for Value Reduction Based on Hash Algorithm

    Institute of Scientific and Technical Information of China (English)

    张清华; 幸禹可

    2011-01-01

    A new quick value reduction method is proposed based on rough set theory, decision tree theory, and granular computing, combined with the speed and efficiency of hashing. When processing an information system, the method can quickly partition the objects into equivalence classes using hashing and compute the positive region; during attribute and value reduction based on rough set theory, the data-compressing property of hashing enables fast and efficient rule extraction. Analysis and simulation show that, compared with conventional value reduction methods, the proposed algorithm has lower time complexity.
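
    The following short Python sketch (standard rough-set definitions, not the paper's code) shows the step the abstract highlights: hashing condition-attribute tuples partitions the objects into equivalence classes in one pass, after which the positive region is the union of classes that agree on the decision attribute.

        def equivalence_classes(rows, attrs):
            # One-pass hash partition: rows with equal attribute tuples share a class.
            classes = {}
            for i, row in enumerate(rows):
                classes.setdefault(tuple(row[a] for a in attrs), []).append(i)
            return classes

        def positive_region(rows, cond_attrs, decision):
            pos = []
            for members in equivalence_classes(rows, cond_attrs).values():
                decisions = {rows[i][decision] for i in members}
                if len(decisions) == 1:          # class is consistent w.r.t. decision
                    pos.extend(members)
            return sorted(pos)

        table = [  # columns: a, b, d (decision)
            {"a": 1, "b": 0, "d": "yes"},
            {"a": 1, "b": 0, "d": "yes"},
            {"a": 1, "b": 1, "d": "no"},
            {"a": 0, "b": 1, "d": "yes"},
            {"a": 0, "b": 1, "d": "no"},   # conflicts with the row above
        ]
        print(positive_region(table, ["a", "b"], "d"))  # [0, 1, 2]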

  16. RFID cryptographic protocol based on two-dimensional region Hash chain

    Institute of Scientific and Technical Information of China (English)

    熊宛星; 薛开平; 洪佩琳; 麻常莎

    2011-01-01

    Because of the limitations of the devices involved, many security problems exist in radio frequency identification (RFID) systems, one of the core technologies of the future Internet of Things (IOT). After analyzing the core ideas of several typical RFID cryptographic protocols, a new protocol based on two-dimensional region (TDR) hash chains is proposed. TDR identifies each hash chain by region division, which significantly improves the efficiency of database retrieval; moreover, a random number is introduced to further enhance the security of the RFID system.

  17. A Hash-based P2P Overlay in Mobile Environment

    Institute of Scientific and Technical Information of China (English)

    杨晓辉; 黄长俊; 许熠

    2009-01-01

    Most hash-based P2P networks proposed so far assume stationary peers; when a node moves to a new position in the network, the efficiency of such structures in message delivery and other respects degrades. This paper proposes a hash-based P2P overlay for mobile environments (H-MP2P) that allows nodes to move freely in the network. A node broadcasts its location information through the P2P network, so other nodes can learn of the movement and locate the node. Theoretical analysis and experiments show that H-MP2P achieves good scalability, reliability, and efficiency, and is well suited to mobile environments.

  18. Latent Integrated Stochastic Volatility, Realized Volatility, and Implied Volatility: A State Space Approach

    DEFF Research Database (Denmark)

    Bach, Christian; Christensen, Bent Jesper

    We include simultaneously both realized volatility measures based on high-frequency asset returns and implied volatilities backed out of individual traded at-the-money option prices in a state space approach to the analysis of true underlying volatility. We model integrated volatility as a latent first-order Markov process and show that our model is closely related to the CEV and Barndorff-Nielsen & Shephard (2001) models for local volatility. We show that if measurement noise in the observable volatility proxies is not accounted for, then the estimated autoregressive parameter in the latent process is downward biased. Implied volatility performs better than any of the alternative realized measures when forecasting future integrated volatility. The results are largely similar across the stock market (S&P 500), bond market (30-year U.S. T-bond), and foreign currency exchange market ($/£).

  19. Selecting the Best Forecasting-Implied Volatility Model Using Genetic Programming

    Directory of Open Access Journals (Sweden)

    Wafa Abdelmalek

    2009-01-01

    Volatility is a crucial variable in option pricing and hedging strategies. The aim of this paper is to provide some initial evidence of the empirical relevance of genetic programming to volatility forecasting. Using real data on S&P500 index options, the ability of genetic programming to forecast Black-Scholes implied volatility is compared between time series samples and moneyness-time to maturity classes. Total and out-of-sample mean squared errors are used as measures of forecasting performance. The comparisons reveal that the time series model appears more accurate in forecasting implied volatility than the moneyness-time to maturity models. Overall, the results are strongly encouraging and suggest that the genetic programming approach works well in solving financial problems.

  20. On a Pair of Operator Series Expansions Implying a Variety of Summation Formulas

    Institute of Scientific and Technical Information of China (English)

    Leetsch C. Hsu

    2015-01-01

    With the aid of Mullin-Rota's substitution rule, we show that the Sheffer-type differential operators together with the delta operators Δ and D can be used to construct a pair of expansion formulas that imply a wide variety of summation formulas in discrete analysis and combinatorics. A convergence theorem is established for a fruitful source formula that implies more than 20 noted classical formulas and identities as consequences. Numerous new formulas are also presented as illustrative examples. Finally, it is shown that a kind of lifting process can be used to produce certain chains of (∞m) degree formulas for m ≥ 3 with m ≡ 1 (mod 2) and m ≡ 1 (mod 3), respectively.

  1. HISTORICAL AND IMPLIED VOLATILITY: AN INVESTIGATION INTO NSE NIFTY FUTURES AND OPTIONS

    Directory of Open Access Journals (Sweden)

    N R Parasuraman

    2011-10-01

    The broad objective of the paper is to develop an understanding of the movement of volatility of the market portfolio over a fair period, and of how divergent implied volatility has been from this estimate. It uses the volatility cone, the volatility smile, and the volatility surface as parameters, taking percentiles of volatility over different rolling periods, with the Hoadley Options Calculator used for calculation and analysis. The study empirically shows a clear reversion to the mean, as indicated by the volatility cone, while the volatility smiles of NIFTY options throw up different patterns. Historical volatility was estimated with the GARCH(1,1) model for the period from 2004 to 2004 and for the year 2009. Interestingly, but not totally surprisingly, the average implied volatility of calls and puts on the Nifty during the period January to March 2010 showed differences.

  2. Portfolio Optimization under Local-Stochastic Volatility: Coefficient Taylor Series Approximations & Implied Sharpe Ratio

    OpenAIRE

    Matthew Lorig; Ronnie Sircar

    2015-01-01

    We study the finite horizon Merton portfolio optimization problem in a general local-stochastic volatility setting. Using model coefficient expansion techniques, we derive approximations for both the value function and the optimal investment strategy. We also analyze the 'implied Sharpe ratio' and derive a series approximation for this quantity. The zeroth-order approximations of the value function and optimal investment strategy correspond to those obtained by Merton (1969) when the risky...

  3. Implied Volatility Futures Trading Activity and Impacts on Asian Stock Market: An Empirical study

    OpenAIRE

    Pham, Duc Nam Trung

    2014-01-01

    This study analyzes the impact of adopting a new type of derivative instrument in the Asian stock market: implied volatility futures. The analysis then turns to the preferences for hedging tools in the two markets that pioneered this adoption, Hong Kong and Japan. Unlike other conventional derivatives, the relationship between volatility derivatives and their underlying assets is almost impossible to model, which creates several difficulties in pricing as well as researchi...

  4. Implied and Local Volatility Surfaces for South African Index and Foreign Exchange Options

    Directory of Open Access Journals (Sweden)

    Antonie Kotzé

    2015-01-01

    Certain exotic options cannot be valued using closed-form solutions or even by numerical methods that assume constant volatility, and many exotics are priced in a local volatility framework. Pricing under local volatility has become a field of extensive research in finance, and various models have been proposed to overcome the shortcomings of the Black-Scholes model and its assumption of constant volatility. The Johannesburg Stock Exchange (JSE) lists exotic options on its Can-Do platform, and most exotic options listed on the JSE's derivative exchanges are valued by local volatility models. These models need a local volatility surface. Dupire derived a mapping from implied volatilities to local volatilities; the JSE uses this mapping to generate the relevant local volatility surfaces and further uses Monte Carlo and finite difference methods when pricing exotic options. In this document we discuss various practical issues that influence the successful construction of implied and local volatility surfaces such that pricing engines can be implemented successfully. We focus on arbitrage-free conditions and the choice of calibrating functionals, and we illustrate our methodologies by studying the implied and local volatility surfaces of South African equity index and foreign exchange options.
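
    For reference, the Dupire mapping referred to above is usually written (in its standard zero-dividend form, which may differ in detail from the paper's conventions) as

        \sigma_{\mathrm{loc}}^{2}(K,T) = \frac{\partial C/\partial T + r K \, \partial C/\partial K}{\tfrac{1}{2} K^{2} \, \partial^{2} C/\partial K^{2}},

    where C(K,T) is the call price surface, K the strike, T the maturity, and r the risk-free rate.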

  5. An inverse problem of determining the implied volatility in option pricing

    Science.gov (United States)

    Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu

    2008-04-01

    In the Black-Scholes world there is the important quantity of volatility, which cannot be observed directly but has a major impact on the option value. In practice, traders usually work with what is known as implied volatility, which is implied by option prices observed in the market. In this paper, we use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, namely the average value of the option premiums corresponding to a fixed strike price and all possible maturities from the current time to a chosen future time, is known. The issue is converted into a terminal control problem by the Green function method. The existence and uniqueness of the minimum of the control functional are addressed by the optimal control method, and the necessary condition which must be satisfied by the minimum is also given. The results obtained in the paper may be useful for those who engage in risk management or volatility trading.

  6. Robust image Hash algorithm based on Harris corners and invariant centroid

    Institute of Scientific and Technical Information of China (English)

    崔得龙; 左敬龙; 彭志平

    2011-01-01

    A novel image hash algorithm using Harris corners and an invariant centroid is proposed. Starting from the mathematical model of affine transformation and exploiting the invariance of the image centroid under affine transforms, the Euclidean distances between the Harris corners and the invariant centroid are computed as the feature vector, which is then quantized and encoded to produce the image hash. Experimental results show that the scheme is robust against perceptually acceptable modifications to the image such as JPEG compression and filtering, while remaining sensitive to excessive changes and malicious tampering. The security of the hash is guaranteed by the use of secret keys.
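
    As a rough Python sketch of the pipeline (OpenCV and numpy; the corner count, median quantization, and keyed permutation are illustrative choices rather than the paper's exact design): detect Harris corners, measure their distances to the intensity centroid, and threshold against the median to obtain hash bits.

        import cv2
        import numpy as np

        def image_hash(gray, n_corners=32, key=42):
            # gray: single-channel uint8 image.
            img = gray.astype(np.float32)
            # Intensity centroid, approximately invariant under affine transforms.
            ys, xs = np.indices(img.shape)
            cy, cx = (ys * img).sum() / img.sum(), (xs * img).sum() / img.sum()
            # Harris corners serve as geometric feature points.
            pts = cv2.goodFeaturesToTrack(gray, n_corners, 0.01, 10,
                                          useHarrisDetector=True).reshape(-1, 2)
            dists = np.sort(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy))
            # A key-dependent permutation before quantization makes the hash keyed.
            dists = dists[np.random.default_rng(key).permutation(len(dists))]
            return (dists > np.median(dists)).astype(np.uint8)  # one bit per corner

        gray = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
        print(image_hash(gray))   # e.g. array([0, 1, ...]); compare by Hamming distance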

  7. A chimeric fusion of the hASH1 and EZH2 promoters mediates high and specific reporter and suicide gene expression and cytotoxicity in small cell lung cancer cells

    DEFF Research Database (Denmark)

    Poulsen, T.T.; Pedersen, N.; Juel, H.

    2008-01-01

    Transcriptionally targeted gene therapy is a promising experimental modality for treatment of systemic malignancies such as small cell lung cancer (SCLC). We have identified the human achaete-scute homolog 1 (hASH1) and enhancer of zeste homolog 2 (EZH2) genes as highly upregulated in SCLC compar...

  8. Confirmatory study of the robust Hash function based on the decomposition of non-negative matrices

    Institute of Scientific and Technical Information of China (English)

    吴荣玉; 樊丰; 舒建

    2012-01-01

    Matrix factorization is an effective tool for large-scale data processing and analysis. Non-negative matrix factorization (NMF) realizes a decomposition of a matrix under the condition that all of its elements are non-negative. Robust hashing uses a secret key to extract robust features from multimedia content, which are then compressed to produce a hash value; the authenticity of the media content is verified by comparing the hash transmitted along with the content against the hash produced at the receiver.
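
    As a toy Python sketch of such a scheme (scikit-learn's NMF; the block size, rank, and keyed block selection are illustrative assumptions, not the verified construction): hash bits are derived by comparing a coarse NMF feature of key-selected blocks against the median.

        import numpy as np
        from sklearn.decomposition import NMF

        def nmf_hash(image, block=16, rank=2, key=7, n_blocks=32):
            rng = np.random.default_rng(key)        # secret key drives block choice
            h, w = image.shape
            feats = []
            for _ in range(n_blocks):
                y = rng.integers(0, h - block)
                x = rng.integers(0, w - block)
                patch = image[y:y + block, x:x + block] + 1e-9   # keep strictly >= 0
                W = NMF(n_components=rank, init="random",
                        random_state=0, max_iter=500).fit_transform(patch)
                feats.append(np.linalg.norm(W))      # coarse, robust block feature
            feats = np.array(feats)
            return (feats > np.median(feats)).astype(int)

        img = np.abs(np.random.default_rng(0).normal(size=(128, 128)))
        print(nmf_hash(img))   # 32 hash bits; authenticate via Hamming distance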

  9. Design and Analysis of a Cryptographic Hash Function Based on Time-Delay Chaotic System

    Institute of Scientific and Technical Information of China (English)

    徐杰; 杨娣洁; 隆克平

    2011-01-01

    An algorithm for a keyed cryptographic hash function based on a time-delay chaotic system is presented. In this algorithm, the initial message is modulated into the trajectory of a time-delay chaotic iteration, and the hash value is computed with an HMAC-MD5 algorithm, so that every bit of the hash value depends on the initial message. The hash value is highly sensitive to small changes in the initial message or in the initial conditions of the chaotic system. Theoretical analysis and simulation show that the hash value has good confusion and diffusion properties, that the chaotic dynamics enlarge the parameter space, and that the nonlinear relation between the hash value and the initial message can effectively resist linear analysis. The proposed hash function therefore offers good security, collision resistance, and resistance to attack, with promising applications in digital signatures and other authentication technologies.
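
    The sketch below is a toy Python illustration of the structure described, not the published construction: a key-seeded delayed logistic map absorbs the message byte by byte, and the tail of the trajectory is passed through HMAC-MD5.

        import hmac, hashlib, struct

        def chaotic_digest(message, key, tau=5):
            # Toy time-delay map: x_{n+1} = r * x_n * (1 - x_{n-tau}), perturbed by input.
            r = 3.99
            x = [(b + 1) / 257 for b in hashlib.md5(key).digest()[:tau + 1]]
            for b in message:
                x_new = r * x[-1] * (1.0 - x[-tau - 1])
                x_new = (x_new + b / 256.0) % 1.0      # modulate message into the orbit
                x.append(x_new)
            trajectory = b"".join(struct.pack(">d", v) for v in x[-16:])
            return hmac.new(key, trajectory, hashlib.md5).digest()

        print(chaotic_digest(b"hello", b"secret-key").hex())
        print(chaotic_digest(b"hellp", b"secret-key").hex())  # flips roughly half the bits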

  10. Image Retrieval Based on Deep Convolutional Neural Networks and Binary Hashing Learning

    Institute of Scientific and Technical Information of China (English)

    彭天强; 栗芳

    2016-01-01

    With the increasing amount of image data, existing image retrieval methods suffer from several drawbacks, such as the weak expressive power of their visual features, high feature dimensionality, and low retrieval precision. To solve these problems, a method for learning binary hash codes based on deep convolutional neural networks is proposed for large-scale image retrieval. The basic idea is to add a hash layer to the deep learning framework and to learn image features and hash functions simultaneously, where the learned hash functions must satisfy independence and minimal quantization error. First, a convolutional neural network learns the intrinsic structure of the training images, improving the discriminative and expressive power of the visual features. Second, the visual features are fed into the hash layer, where hash functions are learned subject to minimal classification and quantization error and the independence constraint. Finally, given an input image, hash codes are generated from the output of the hash layer, and large-scale image retrieval can then be carried out in a low-dimensional Hamming space. Experimental results on three benchmark datasets show that the binary hash codes generated by the proposed method outperform those of other state-of-the-art hashing methods.
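
    A minimal PyTorch-style sketch of the idea (the tiny backbone, layer sizes, and loss weight are placeholders rather than the paper's architecture): a sigmoid hash layer is trained jointly with a classifier, and an extra penalty pushes activations toward 0/1 so that thresholding yields binary codes.

        import torch
        import torch.nn as nn

        class HashNet(nn.Module):
            def __init__(self, n_bits=48, n_classes=10):
                super().__init__()
                self.backbone = nn.Sequential(        # stand-in for a deep CNN
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                    nn.Flatten(), nn.Linear(16 * 4 * 4, 128), nn.ReLU())
                self.hash_layer = nn.Linear(128, n_bits)   # the inserted hash layer
                self.classifier = nn.Linear(n_bits, n_classes)

            def forward(self, x):
                h = torch.sigmoid(self.hash_layer(self.backbone(x)))
                return h, self.classifier(h)

        def loss_fn(h, logits, labels, lam=0.1):
            ce = nn.functional.cross_entropy(logits, labels)   # classification error
            quant = ((h - 0.5).abs() - 0.5).pow(2).mean()      # push h toward {0, 1}
            return ce + lam * quant

        net = HashNet()
        x = torch.randn(8, 3, 32, 32)
        labels = torch.randint(0, 10, (8,))
        h, logits = net(x)
        loss_fn(h, logits, labels).backward()
        codes = (h > 0.5).int()          # binary codes for Hamming-space retrieval
        print(codes.shape)               # torch.Size([8, 48])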

  11. THE SPACE MOTION OF LEO I: HUBBLE SPACE TELESCOPE PROPER MOTION AND IMPLIED ORBIT

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Sangmo Tony; Van der Marel, Roeland P. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Besla, Gurtina [Department of Astronomy, Columbia University, New York, NY 10027 (United States); Boylan-Kolchin, Michael; Bullock, James S. [Department of Physics and Astronomy, Center for Cosmology, University of California, 4129 Reines Hall, Irvine, CA 92697 (United States); Majewski, Steven R., E-mail: tsohn@stsci.edu [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States)

    2013-05-10

    We present the first absolute proper motion measurement of Leo I, based on two epochs of Hubble Space Telescope ACS/WFC images separated by ~5 years in time. The average shift of Leo I stars with respect to ~100 background galaxies implies a proper motion of (μ_W, μ_N) = (0.1140 ± 0.0295, -0.1256 ± 0.0293) mas yr⁻¹. The implied Galactocentric velocity vector, corrected for the reflex motion of the Sun, has radial and tangential components V_rad = 167.9 ± 2.8 km s⁻¹ and V_tan = 101.0 ± 34.4 km s⁻¹, respectively. We study the detailed orbital history of Leo I by solving its equations of motion backward in time for a range of plausible mass models for the Milky Way (MW) and its surrounding galaxies. Leo I entered the MW virial radius 2.33 ± 0.21 Gyr ago, most likely on its first infall. It had a pericentric approach 1.05 ± 0.09 Gyr ago at a Galactocentric distance of 91 ± 36 kpc. We associate these timescales with characteristic timescales in Leo I's star formation history, which shows an enhanced star formation activity ~2 Gyr ago and quenching ~1 Gyr ago. There is no indication from our calculations that other galaxies have significantly influenced Leo I's orbit, although there is a small probability that it may have interacted with either Ursa Minor or Leo II within the last ~1 Gyr. For most plausible MW masses, the observed velocity implies that Leo I is bound to the MW. However, it may not be appropriate to include it in models of the MW satellite population that assume dynamical equilibrium, given its recent infall. Solution of the complete (non-radial) timing equations for the Leo I orbit implies an MW mass M_MW,vir = 3.15 (+1.58/-1.36) × 10¹² M_⊙, with the large uncertainty dominated by cosmic scatter. In a companion paper, we compare the new observations to the properties of Leo I subhalo analogs extracted from cosmological...

  12. Large fault fabric of the Ninetyeast Ridge implies near-spreading ridge formation

    Digital Repository Service at National Institute of Oceanography (India)

    Sager, W.W.; Paul, C.F; Krishna, K.S.; Pringle, M.S.; Eisin, A.E.; Frey, F; Rao, D; Levchenko, O.V.

    Large fault fabric of the Ninetyeast Ridge implies near-spreading ridge formation. W. W. Sager, C. F. Paul, K. S. Krishna, M. Pringle, A. E. Eisin, F. A. Frey, D. Gopala Rao, and O. Levchenko (Department of Oceanography).

  13. Data segmentation method of grouping consistent hash

    Institute of Scientific and Technical Information of China (English)

    武小年; 方堃; 杨宇洋

    2016-01-01

    To address the data integrity and load balancing problems faced by distributed intrusion detection systems when segmenting data, a data segmentation method based on grouping consistent hashing is proposed. TCP stream reassembly is used to ensure data integrity. When the data are partitioned, an improved grouping consistent hash algorithm divides nodes with similar computing capacity into groups and, according to each group's capacity, maps the groups alternately and proportionally onto the hash space; when data are assigned to a node, the node's load is monitored and dynamically adjusted. Simulation results show that the method achieves a higher detection rate, reduces the number of virtual nodes required and the memory consumption, and improves the load balance of the system.
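
    A compact Python sketch of the weighted consistent hashing at the heart of this record (the 40-virtual-nodes-per-capacity-unit factor is an illustrative assumption): nodes receive virtual points on the ring in proportion to their group's computing capacity, and each flow is routed to the first point clockwise of its hash.

        import bisect, hashlib
        from collections import Counter

        def h(s):
            return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

        class GroupedConsistentHash:
            def __init__(self, groups):
                # groups: {group_capacity_weight: [node, ...]}
                self.ring = sorted(
                    (h(f"{node}#{i}"), node)
                    for weight, nodes in groups.items()
                    for node in nodes
                    for i in range(40 * weight))   # virtual nodes scale with capacity
                self.keys = [k for k, _ in self.ring]

            def route(self, flow_id):
                i = bisect.bisect(self.keys, h(flow_id)) % len(self.ring)
                return self.ring[i][1]

        ring = GroupedConsistentHash({3: ["fast-a", "fast-b"], 1: ["slow-c"]})
        print(Counter(ring.route(f"10.0.0.{i}:{i}") for i in range(10000)))
        # fast nodes receive roughly 3x the flows of the slow node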

  14. The Space Motion of Leo I: Hubble Space Telescope Proper Motion and Implied Orbit

    CERN Document Server

    Sohn, Sangmo Tony; van der Marel, Roeland P; Boylan-Kolchin, Michael; Majewski, Steven R; Bullock, James S

    2012-01-01

    We present the first absolute proper motion measurement of Leo I, based on two epochs of HST ACS/WFC images separated by ~5 years. The average shift of Leo I stars with respect to ~100 background galaxies implies a proper motion of (mu_W, mu_N) = (0.1140 +/- 0.0295, -0.1256 +/- 0.0293) mas/yr. The implied Galactocentric velocity vector, corrected for the reflex motion of the Sun, has radial and tangential components V_rad = 167.9 +/- 2.8 km/s and V_tan = 101.0 +/- 34.4 km/s, respectively. We study the detailed orbital history of Leo I by solving its equations of motion backward in time for a range of plausible mass models for the Milky Way and its surrounding galaxies. Leo I entered the Milky Way virial radius 2.33 +/- 0.21 Gyr ago, most likely on its first infall. It had a pericentric approach 1.05 +/- 0.09 Gyr ago at a Galactocentric distance of 91 +/- 36 kpc. We associate these time scales with characteristic time scales in Leo I's star formation history, which shows an enhanced star formation activity ~2 ...

  15. Scalar utility theory and proportional processing: What does it actually imply?

    Science.gov (United States)

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2016-09-01

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed.

  16. Derandomization, Hashing and Expanders

    DEFF Research Database (Denmark)

    Ruzic, Milan

    ...that independent and unbiased random bits are accessible. However, truly random bits are scarce in reality. In practice, pseudorandom generators are used in place of random numbers; usually, even the seed of the generator does not come from a source of true randomness. While things mostly work well in practice... of work in this direction. There has been a lot of work in designing general tools for simulating randomness and making deterministic versions of randomized algorithms, with some loss in time and space performance. These methods are not tied to particular algorithms, but work on large classes of problems... The central question in this area of computational complexity is "P = BPP?". Instead of derandomizing whole complexity classes, one may work on derandomizing concrete problems. This approach trades generality for the possibility of having much better performance bounds. There are a few common techniques...

  17. On the Construction of 20×20 and 24×24 Binary Matrices with Good Implementation Properties for Lightweight Block Ciphers and Hash Functions

    Directory of Open Access Journals (Sweden)

    Muharrem Tolga Sakallı

    2014-01-01

    We present an algebraic construction based on the state transform matrix (companion matrix) for n×n (where n ≠ 2^k, k being a positive integer) binary matrices with high branch number and a low number of fixed points. We also provide examples of 20×20 and 24×24 binary matrices that have advantages for implementation in lightweight block ciphers and hash functions. The powers of the companion matrix of an irreducible polynomial over GF(2) of degree 5 and degree 4 are used in a finite field Hadamard or circulant manner to construct the 20×20 and 24×24 binary matrices, respectively. Moreover, the binary matrices are constructed to have good software and hardware implementation properties. To the best of our knowledge, this is the first study of n×n (where n ≠ 2^k, k being a positive integer) binary matrices with high branch number and a low number of fixed points.
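
    To make the construction concrete, the following Python snippet (numpy; the polynomial x^5 + x^2 + 1, irreducible over GF(2), is an illustrative choice) builds a 5×5 companion matrix, raises it to powers mod 2 as the construction requires, and counts the fixed points of a power by brute force:

        import numpy as np

        def companion_gf2(poly_coeffs):
            # poly_coeffs: low-degree-first coefficients of a monic polynomial over GF(2),
            # e.g. x^5 + x^2 + 1 -> [1, 0, 1, 0, 0] (constant .. x^4 terms).
            n = len(poly_coeffs)
            C = np.zeros((n, n), dtype=np.uint8)
            C[1:, :-1] = np.eye(n - 1, dtype=np.uint8)   # subdiagonal shift
            C[:, -1] = poly_coeffs                        # last column from the polynomial
            return C

        def matpow_gf2(M, e):
            R = np.eye(len(M), dtype=np.uint8)
            for _ in range(e):
                R = (R @ M) % 2
            return R

        def fixed_points(M):
            # Number of vectors v in GF(2)^n with Mv = v, counted by brute force.
            n = len(M)
            return sum(int(np.array_equal((M @ v) % 2, v))
                       for v in (np.array(list(np.binary_repr(i, n)), dtype=np.uint8)
                                 for i in range(2 ** n)))

        C = companion_gf2([1, 0, 1, 0, 0])     # x^5 + x^2 + 1, irreducible over GF(2)
        print(matpow_gf2(C, 3))                # a power used as a building block
        print(fixed_points(matpow_gf2(C, 3)))  # a low fixed-point count is desirable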

  18. Video perceptual hashing fused with a computational model of the human visual system

    Institute of Scientific and Technical Information of China (English)

    欧阳杰; 高金花; 文振焜; 张盟; 刘朋飞; 杜以华

    2011-01-01

    Perceptual hashing is a mapping from multimedia digital presentations to a perceptual hash value, and it provides secure and reliable technical support for applications such as identification, retrieval, and certification of multimedia content. Current algorithms concentrate on improving robustness and security but fail to take sufficient account of human visual perception, which leads to over-robustness. In this paper, a computational model of the human visual system is incorporated into a video perceptual hashing framework. To simulate the multi-channel structure of the human visual system, a cortex transform is used for channel decomposition, and four visual perceptual factors, the spatio-temporal contrast sensitivity function, eye movement, lightness adaptation, and intra-band and inter-band contrast masking, are jointly used to compute the just-noticeable difference during feature extraction; in addition, a diffusion-based blocking mechanism is introduced in the preprocessing stage. The results suggest that the proposed method achieves a better trade-off between robustness and security under various content-preserving manipulations and reflects the agreement between subjective perception and objective evaluation.

  19. GBLHT: a GPU-Accelerated Linear Hash Table with Batch Insertion

    Institute of Scientific and Technical Information of China (English)

    黄玉龙; 奚建清; 张平健; 方晓霖; 刘勇

    2012-01-01

    To improve the insertion performance of the linear hash table, an effective index structure, the existing insertion methods are analyzed and a linear hash table with batch record insertion, GBLHT, is designed and implemented on the CUDA parallel programming model. With the help of the atomic function atomicAdd, GBLHT takes full advantage of the high parallel throughput of the graphics processing unit (GPU) to implement lock-free batch insertion of massive numbers of records. Experiments comparing the insertion performance of the traditional serial method, a CPU-based batch insertion method, and GBLHT show that, under various parameter settings, GBLHT outperforms the traditional serial method by a factor of 7 to 14 and the four-thread CPU-based batch method by a factor of 3 to 6.
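
    GBLHT itself is a CUDA structure, so the sketch below is only a CPU-side Python illustration of the index being accelerated: classic linear hashing, in which buckets split one at a time (the load-factor threshold of 4 is an arbitrary choice) so that inserts never trigger a full rehash.

        class LinearHashTable:
            # Classic linear hashing: buckets split one at a time as the table grows.
            def __init__(self):
                self.level, self.split = 0, 0
                self.buckets = [[]]

            def _index(self, key):
                i = hash(key) % (2 ** (self.level + 1))
                if i >= len(self.buckets):       # bucket not split yet: use low image
                    i -= 2 ** self.level
                return i

            def insert(self, key, value):
                self.buckets[self._index(key)].append((key, value))
                if sum(map(len, self.buckets)) / len(self.buckets) > 4:  # load factor
                    old = self.buckets[self.split]
                    self.buckets.append([])
                    self.split += 1
                    if self.split == 2 ** self.level:
                        self.level, self.split = self.level + 1, 0
                    items, old[:] = old[:], []
                    for k, v in items:           # redistribute only the split bucket
                        self.buckets[self._index(k)].append((k, v))

            def get(self, key):
                return next(v for k, v in self.buckets[self._index(key)] if k == key)

        t = LinearHashTable()
        for i in range(1000):
            t.insert(f"k{i}", i)
        print(t.get("k123"), len(t.buckets))   # 123, table grew bucket by bucket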

  20. Fast Continuous Weak Hashes in Strings and Its Applications

    Institute of Scientific and Technical Information of China (English)

    徐泽明; 侯紫峰

    2011-01-01

    This paper proposes the fast continuous weak hash (FCWH) in strings and investigates its theoretical and practical applications. First, FCWH is conceptualized and a uniform construction framework for it is formulated from an algebraic viewpoint. Second, the theoretical and experimental collision probabilities of FCWH are analyzed, generalizing and strengthening the related work of Michael O. Rabin. Finally, by generalizing the Karp-Rabin algorithm for the string-matching problem, FCWH is applied to the problem of sequential extraction of common substrings (SECS); based on SECS, the express synchronization (X-Sync) protocol is designed to address the real-time backup and retrieval of multiple versions of a document in the current environment of broadband networks and cloud computing.
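
    For context, the classic instance of a fast continuous weak hash is the polynomial rolling hash behind Karp-Rabin, which slides along a string in O(1) per step; a minimal Python sketch (the base and modulus are arbitrary illustrative choices):

        B, M = 257, (1 << 61) - 1      # base and a large prime modulus

        def karp_rabin_find(pattern, text):
            n, m = len(text), len(pattern)
            if m > n:
                return -1
            pow_top = pow(B, m - 1, M)                 # weight of the outgoing char
            hp = hw = 0
            for i in range(m):                         # initial window and pattern hash
                hp = (hp * B + ord(pattern[i])) % M
                hw = (hw * B + ord(text[i])) % M
            for i in range(n - m + 1):
                # Weak hash match: verify to rule out the (rare) false positive.
                if hw == hp and text[i:i + m] == pattern:
                    return i
                if i + m < n:                          # slide the window by one char
                    hw = ((hw - ord(text[i]) * pow_top) * B + ord(text[i + m])) % M
            return -1

        print(karp_rabin_find("hash", "collision-free hash functions"))  # 15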

  1. The implied volatility index: Is ‘investor fear gauge’ or ‘forward-looking’?

    Directory of Open Access Journals (Sweden)

    Imlak Shaikh

    2015-03-01

    The paper examines implied volatility as an investor fear gauge and/or a forward-looking expectation of future stock market volatility in an emerging-market setting, using India VIX. The results show that VIX is a gauge of investor fear, wherein expected stock market volatility rises when the market declines. It is also shown that expected volatility is an unbiased estimate of the actual return volatility over the following 30 calendar days, although during market turmoil VIX is likely to be biased. Lastly, it is suggested that investor nervousness yields potential profit to option sellers during market crises. The research thus has practical implications for portfolio risk management, stock market volatility forecasting, and option pricing.

  2. A homological selection theorem implying a division theorem for Q-manifolds

    OpenAIRE

    2015-01-01

    We prove that a space $M$ with the Disjoint Disk Property is a $Q$-manifold if and only if $M \times X$ is a $Q$-manifold for some $C$-space $X$. This implies that the product $M \times I^2$ of a space $M$ with the disk is a $Q$-manifold if and only if $M \times X$ is a $Q$-manifold for some $C$-space $X$. The proof of these theorems exploits the homological characterization of $Q$-manifolds due to R. J. Daverman and J. J. Walsh, combined with the existence of $G$-sta...

  3. Cathepsin B gene disruption induced Leishmania donovani proteome remodeling implies cathepsin B role in secretome regulation.

    Directory of Open Access Journals (Sweden)

    Teklu Kuru Gerbaba

    Leishmania cysteine proteases are potential vaccine candidates and drug targets. To study the role of the cathepsin B cysteine protease, we generated and characterized cathepsin B null mutant L. donovani parasites. The null mutants grow normally in culture but show significantly attenuated virulence inside macrophages. Quantitative proteome profiling of wild type and null mutant parasites indicates that cathepsin B disruption induces remodeling of the L. donovani proteome. We identified 83 modulated proteins, of which 65 are decreased and 18 increased in the null mutant parasites; 66% (55/83) of the modulated proteins are L. donovani secreted proteins. Proteins involved in oxidation-reduction (trypanothione reductase, peroxidoxins, tryparedoxin, cytochromes) and translation (ribosomal proteins) are among those decreased in the null mutant parasites, and most of these proteins belong to the same complex network of proteins. Our results imply a virulence role for cathepsin B via regulation of Leishmania secreted proteins.

  4. Interannual variability of carbon cycle implied by a 2-D atmospheric transport model

    Institute of Scientific and Technical Information of China (English)

    LI Can; XU Li; SHAO Min; ZHANG Ren-jian

    2004-01-01

    A 2-dimensional atmospheric transport model is deployed in a simplified CO2 inverse study. The calculated carbon flux distribution for the interval from 1981 to 1997 confirms the existence of a terrestrial carbon sink in the mid-to-high-latitude area of the Northern Hemisphere. Strong interannual variability exists in the carbon flux patterns, implying a possible link with ENSO and other natural episodes such as the Pinatubo volcanic eruption in 1991. The mechanism of this possible link was investigated statistically: correlation analysis indicated that in the Northern Hemisphere, climatic factors such as temperature and precipitation can, to some extent, influence the carbon cycle processes of land and ocean and thus cause considerable change in the carbon flux distribution. In addition, the correlation study demonstrated the possibly important role of Asian terrestrial ecosystems in the carbon cycle.

  5. Minimal Hölder regularity implying finiteness of integral Menger curvature

    CERN Document Server

    Kolasiński, Sławomir

    2011-01-01

    We study two families of integral functionals indexed by a real number $p > 0$. One family is defined for 1-dimensional curves in $\mathbb{R}^3$ and the other one is defined for $m$-dimensional manifolds in $\mathbb{R}^n$. These functionals are described as integrals of appropriate integrands (strongly related to the Menger curvature) raised to the power $p$. Given $p > m(m+1)$ we prove that $C^{1,\alpha}$ regularity of the set (a curve or a manifold), with $\alpha > \alpha_0 = 1 - \frac{m(m+1)}{p}$, implies finiteness of both curvature functionals ($m=1$ in the case of curves). We also show that $\alpha_0$ is optimal by constructing examples of $C^{1,\alpha_0}$ functions with graphs of infinite integral curvature.
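
    For orientation, in the curve case ($m = 1$) the standard integral Menger curvature underlying such functionals is

        \mathcal{M}_p(\gamma) = \iiint_{\gamma^3} c(x,y,z)^{p} \, d\mathcal{H}^1(x)\, d\mathcal{H}^1(y)\, d\mathcal{H}^1(z), \qquad c(x,y,z) = \frac{1}{R(x,y,z)},

    where $R(x,y,z)$ is the circumradius of the triangle with vertices $x, y, z$ and $\mathcal{H}^1$ is the one-dimensional Hausdorff measure; the paper's $m$-dimensional integrands generalize the inverse circumradius $c$.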

  6. Relative sensory sparing in the diabetic foot implied through vibration testing

    Directory of Open Access Journals (Sweden)

    Todd O'Brien

    2013-09-01

    Background: The dorsal aspect of the hallux is often cited as the anatomic location of choice for vibration testing in the feet of diabetic patients. To validate this preference, vibration tests were performed and compared at the hallux and 5th metatarsal head in diabetic patients with established neuropathy. Methods: Twenty-eight neuropathic, diabetic patients and 17 non-neuropathic, non-diabetic patients underwent timed vibration testing (TVT) with a novel 128 Hz electronic tuning fork (ETF) at the hallux and 5th metatarsal head. Results: TVT values in the feet of diabetic patients were found to be reduced at both locations compared to controls. Unexpectedly, these values were significantly lower at the hallux (P < 0.001) compared to the 5th metatarsal head. Conclusion: This study confirms the hallux as the most appropriate location for vibration testing and implies relative sensory sparing at the 5th metatarsal head, a finding not previously reported in diabetic patients.

  7. The dynamic conditional relationship between stock market returns and implied volatility

    Science.gov (United States)

    Park, Sung Y.; Ryu, Doojin; Song, Jeongseok

    2017-09-01

    Using the dynamic conditional correlation multivariate generalized autoregressive conditional heteroskedasticity (DCC-MGARCH) model, we empirically examine the dynamic relationship between stock market returns (KOSPI200 returns) and implied volatility (VKOSPI), as well as their statistical mechanics, in the Korean market, a representative and leading emerging market. We consider four macroeconomic variables (exchange rates, risk-free rates, term spreads, and credit spreads) as potential determinants of the dynamic conditional correlation between returns and volatility. Of these macroeconomic variables, the change in exchange rates has a significant impact on the dynamic correlation between KOSPI200 returns and the VKOSPI, especially during the recent financial crisis. We also find that the risk-free rate has a marginal effect on this dynamic conditional relationship.

  8. A violation of the uncertainty principle implies a violation of the second law of thermodynamics.

    Science.gov (United States)

    Hänggi, Esther; Wehner, Stephanie

    2013-01-01

    Uncertainty relations state that there exist certain incompatible measurements, to which the outcomes cannot be simultaneously predicted. While the exact incompatibility of quantum measurements dictated by such uncertainty relations can be inferred from the mathematical formalism of quantum theory, the question remains whether there is any more fundamental reason for the uncertainty relations to have this exact form. What, if any, would be the operational consequences if we were able to go beyond any of these uncertainty relations? Here we give a strong argument that justifies uncertainty relations in quantum theory by showing that violating them implies that it is also possible to violate the second law of thermodynamics. More precisely, we show that violating the uncertainty relations in quantum mechanics leads to a thermodynamic cycle with positive net work gain, which is very unlikely to exist in nature.

  9. Comparison of different gravity field implied density models of the topography

    Science.gov (United States)

    Sedighi, Morteza; Tabatabaee, Seied; Najafi-Alamdari, Mehdi

    2009-06-01

    Density within the Earth's crust varies between 1.0 and 3.0 g/cm³. The Bouguer gravity field measured in southern Iran is analyzed using four different regional-residual separation techniques to obtain a residual map of the gravity field suitable for density modeling of the topography. A density model of the topography with radial and lateral density distributions is required for an accurate determination of the geoid, e.g., in the Stokes-Helmert approach. The apparent density mapping technique is used to convert the four residual Bouguer anomaly fields into four corresponding gravity-implied subsurface density (GRADEN) models. Although all four density models show good correlation with the geological density (GEODEN) model of the region, the GRADEN models obtained by high-pass filtering and GGM high-pass filtering show better numerical correlation with the GEODEN model than the other models.

  10. Universal Property of Quantum Gravity implied by Bekenstein-Hawking Entropy and Boltzmann formula

    CERN Document Server

    Saida, Hiromi

    2013-01-01

    We search for a universal property of quantum gravity. By "universal", we mean the independence from any existing model of quantum gravity (such as the super string theory, loop quantum gravity, causal dynamical triangulation, and so on). To do so, we try to put the basis of our discussion on theories established by some experiments. Thus, we focus our attention on thermodynamical and statistical-mechanical basis of the black hole thermodynamics: Let us assume that the Bekenstein-Hawking entropy is given by the Boltzmann formula applied to the underlying theory of quantum gravity. Under this assumption, the conditions justifying Boltzmann formula together with uniqueness of Bekenstein-Hawking entropy imply a reasonable universal property of quantum gravity. The universal property indicates a repulsive gravity at Planck length scale, otherwise stationary black holes can not be regarded as thermal equilibrium states of gravity. Further, in semi-classical level, we discuss a possible correction of Einstein equat...

  11. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  12. On the relation between implied and realized volatility indices: Evidence from the BRIC countries

    Science.gov (United States)

    Bentes, Sónia R.

    2017-09-01

    This paper investigates the relation between implied (IV) and realized volatility (RV). Using monthly data from the BRIC countries, we assess the informational content of IV in explaining future RV as well as its unbiasedness and efficiency. We employ an ADL (Autoregressive Distributed Lag) and the corresponding EC (Error Correction) model and compare the results with the ones obtained from the OLS regression. Our goal is to assess the fully dynamical relations between these variables and to separate the short from the long-run effects. We found different results for the informational content of IV according to the methodologies used. However, both methods show that IV is an unbiased estimate of RV for India and that IV was not found to be efficient in any of the BRIC countries. Further, EC results reveal the presence of short and long-run effects for India, whereas Russia exhibits only short-run adjustments.

  13. Hollows on Mercury: Bright-haloed depressions imply recent endogenic activity

    Science.gov (United States)

    Blewett, D. T.; Fontanella, N. R.; Peel, S. E.; Zhong, E. D.; Pashai, P.; Chabot, N. L.; Denevi, B. W.; Ernst, C. M.; Izenberg, N. R.; Murchie, S. L.; Xiao, Z.; Braden, S.; Baker, D. M.; Hurwitz, D. M.; Head, J. W.; McCoy, T. J.; Nittler, L. R.; Solomon, S. C.

    2011-12-01

    The MESSENGER spacecraft began orbital observations of Mercury in March 2011. The Mercury Dual Imaging System is acquiring global monochrome and multispectral image maps. Complementing the global maps are special targeted observations with resolutions as good as 10 m/pixel for monochrome and 80 m/pixel for multispectral images. These high-resolution morphology and color images reveal an unusual landform on Mercury, characterized by small (tens of meters to a few kilometers), fresh-appearing, irregularly shaped, shallow, rimless depressions, often occurring in clusters and in association with high-reflectance materials. The features ("hollows") are commonly found on the central peaks, floors, walls, and rims of impact craters or basins, implying a link to material brought near the surface from depth during crater formation. Hollows occur in both rayed (Kuiperian) craters as well as older degraded craters. They have been identified over a range of latitudes (approximately 54 deg. S to 66 deg. N) and at longitudes for which images with adequate spatial resolution and appropriate illumination and viewing conditions have been collected. The hollows are found in locations known from prior flyby observations to have characteristic high reflectance and a shallow slope of spectral reflectance versus wavelength relative to the global average. The most likely formation mechanisms for the hollows involve recent loss of volatiles through some combination of sublimation, sputtering, outgassing, or pyroclastic volcanism. A hollow found on the south-facing inner wall of a crater at a high northern latitude suggests a correlation with peak diurnal temperatures. The involvement of volatiles in formation mechanisms for the hollows fits with growing evidence that Mercury's interior contains higher abundances of volatile materials than predicted by most scenarios for the formation of the Solar System's innermost planet. Mercury is a small rocky-metal world whose internal geological

  14. Optimal Plant Carbon Allocation Implies a Biological Control on Nitrogen Availability

    Science.gov (United States)

    Prentice, I. C.; Stocker, B. D.

    2015-12-01

    The degree to which nitrogen availability limits the terrestrial C sink under rising CO2 is a key uncertainty in carbon cycle and climate change projections. Results from ecosystem manipulation studies and meta-analyses suggest that plant C allocation to roots adjusts dynamically under varying degrees of nitrogen availability and other soil fertility parameters. In addition, the ratio of biomass production to GPP appears to decline under nutrient scarcity. This reflects increasing plant C exudation into the soil (Cex) with decreasing nutrient availability. Cex is consumed by an array of soil organisms and may imply an improvement of nutrient availability to the plant. Thus, N availability is under biological control, but incurs a C cost. In spite of clear observational support, this concept is left unaccounted for in Earth system models. We develop a model for the coupled cycles of C and N in terrestrial ecosystems to explore optimal plant C allocation under rising CO2 and its implications for the ecosystem C balance. The model follows a balanced growth approach, accounting for the trade-offs between leaf versus root growth and Cex in balancing C fixation and N uptake. We assume that Cex is proportional to root mass, and that the ratio of N uptake (Nup) to Cex is proportional to inorganic N concentration in the soil solution. We further assume that Cex is consumed by N2-fixing processes if the ratio of Nup:Cex falls below the inverse of the C cost of N2-fixation. Our analysis thereby accounts for the feedbacks between ecosystem C and N cycling and stoichiometry. We address the question of how the plant C economy will adjust under rising atmospheric CO2 and what this implies for the ecosystem C balance and the degree of N limitation.

  15. Explaining the level of credit spreads: Option-implied jump risk premia in a firm value model

    NARCIS (Netherlands)

    Cremers, K.J.M.; Driessen, J.; Maenhout, P.

    2008-01-01

    We study whether option-implied jump risk premia can explain the high observed level of credit spreads. We use a structural jump-diffusion firm value model to assess the level of credit spreads generated by option-implied jump risk premia. Prices and returns of equity index and individual options

  16. 32 CFR 701.120 - Processing requests that cite or imply PA, Freedom of Information (FOIA), or PA/FOIA.

    Science.gov (United States)

    2010-07-01

    ... Privacy Program § 701.120 Processing requests that cite or imply PA, Freedom of Information (FOIA), or PA... maximum release of information allowed under the Acts. (d) Processing time limits. DON activities shall... 32 National Defense 5 2010-07-01 2010-07-01 false Processing requests that cite or imply...

  17. Distributed Collision-free Protocol for AGVs in Industrial Environments

    CERN Document Server

    Marino, Dario; Pallottino, Lucia

    2011-01-01

    In this paper, we propose a decentralized coordination algorithm for the safe and efficient management of a group of mobile robots following predefined paths in a dynamic industrial environment. The proposed algorithm is based on a shared-resources protocol and a replanning strategy. It is proved to guarantee ordered traffic flows, avoiding collisions, deadlocks (stall situations), and livelocks (agents moving without reaching their final destinations). Mutual exclusion on resources is proved for the proposed approach, and a condition on the maximum number of AGVs is given that ensures the absence of deadlocks during system evolution. Finally, conditions to verify local livelocks are also proposed. In keeping with the model of distributed robotic systems (DRS), no centralized mechanism, synchronized clock, shared memory, or ground support is needed; only local inter-robot communication, based on sign-boards, is assumed among a small number of spatially adjacent robotic units.

  18. Robot collision-free path planning utilizing gauge function

    Institute of Scientific and Technical Information of China (English)

    朱向阳; 朱利民; 钟秉林

    1997-01-01

    Based on the generalized gauge function, a numerical criterion that specifies the topological relationship between convex polyhedra is presented. It can be applied to detecting overlap, mere contact or separation between two sets of convex polyhedra. As the solution of a linear programming problem, the value of this criterion can be calculated easily. The criterion can provide heuristic information for generating intermediate configuration points, as well as for checking hypothesized paths for admissibility, in the flexible-trajectory path-planning approach; an LP-based sketch of such an overlap test follows.
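
    The gauge-function criterion itself is not given in this record, but its reduction to a linear program can be illustrated with the following sketch (function names are ours; scipy is assumed): the convex hulls of two vertex sets intersect exactly when some convex combination of the first equals some convex combination of the second, a pure LP feasibility question.

        import numpy as np
        from scipy.optimize import linprog

        def hulls_intersect(V1, V2):
            """Feasibility LP: does conv(V1) meet conv(V2)? V1, V2: (n, d) arrays."""
            m, n, d = len(V1), len(V2), V1.shape[1]
            # variables: lambda (m) and mu (n), all >= 0
            A_eq = np.zeros((d + 2, m + n))
            A_eq[:d, :m] = V1.T            # sum_i lambda_i * v1_i ...
            A_eq[:d, m:] = -V2.T           # ... equals sum_j mu_j * v2_j
            A_eq[d, :m] = 1.0              # sum lambda = 1
            A_eq[d + 1, m:] = 1.0          # sum mu = 1
            b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
            res = linprog(np.zeros(m + n), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (m + n))
            return res.status == 0         # feasible -> the hulls overlap

        tri1 = np.array([[0, 0], [2, 0], [0, 2]], float)
        tri2 = np.array([[1, 1], [3, 1], [1, 3]], float)
        print(hulls_intersect(tri1, tri2))  # True: the triangles touch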

  19. Effect of HIV-1 Subtype C integrase mutations implied using molecular modeling and docking data

    Science.gov (United States)

    Sachithanandham, Jaiprasath; Konda Reddy, Karnati; Solomon, King; David, Shoba; Kumar Singh, Sanjeev; Vadhini Ramalingam, Veena; Alexander Pulimood, Susanne; Cherian Abraham, Ooriyapadickal; Rupali, Pricilla; Sridharan, Gopalan; Kannangai, Rajesh

    2016-01-01

    The degree of sequence variation in HIV-1 integrase genes among infected patients, and its impact on the clinical response to antiretroviral therapy (ART), is of interest. We therefore collected plasma samples from 161 HIV-1 infected individuals for integrase gene amplification (1087 bp), and 102 complete integrase gene sequences identified as HIV-1 subtype C were assembled. These sequence data were used for sequence analysis and multiple sequence alignment (MSA) to assess the position-specific frequency of mutations within the pol gene among infected individuals. We also used molecular modeling and docking methods based on biophysical geometric optimization (Schrödinger suite) to infer differences in function caused by position-specific mutations, towards improved inhibitor selection. We identified accessory mutations (which usually reduce susceptibility) leading to resistance against some known integrase inhibitors in 14% of the sequences in this data set. The Stanford HIV-1 drug resistance database provided complementary information on integrase resistance mutations to deduce the molecular basis for this observation. Modeling and docking analysis shows reduced binding of known compounds by the mutants, and the predicted binding values are further reduced for models carrying combinations of mutations among subtype C clinical strains. We thus report the molecular basis implied for the consequences of mutations in different variants of integrase genes of HIV-1 subtype C clinical strains from South India. These data are useful for the design, modification and development of improved inhibitors of HIV-1 integrase.

  20. Radiative transfer in CO2-rich atmospheres: 1. Collisional line mixing implies a colder early Mars

    Science.gov (United States)

    Ozak, N.; Aharonson, O.; Halevy, I.

    2016-06-01

    Fast and accurate radiative transfer methods are essential for modeling CO2-rich atmospheres, relevant to the climate of early Earth and Mars, present-day Venus, and some exoplanets. Although such models already exist, their accuracy may be improved as better theoretical and experimental constraints become available. Here we develop a one-dimensional radiative transfer code for CO2-rich atmospheres, using the correlated-k approach and with a focus on modeling early Mars. Our model differs from existing models in that it includes the effects of CO2 collisional line mixing in the calculation of the line-by-line absorption coefficients. Inclusion of these effects results in model atmospheres that are more transparent to infrared radiation and, therefore, in colder surface temperatures at radiative-convective equilibrium, compared with results of previous studies. Inclusion of water vapor in the model atmosphere results in negligible warming because of the low atmospheric temperatures under a weaker early Sun, which translate into climatically unimportant concentrations of water vapor. Overall, the results imply that sustained warmth on early Mars would not have been possible with an atmosphere containing only CO2 and water vapor, suggesting that other components of the early Martian climate system are missing from current models or that warm conditions were not long lived.

  1. Does a Strong El Niño Imply a Higher Predictability of Extreme Drought?

    Science.gov (United States)

    Wang, Shanshan; Yuan, Xing; Li, Yaohui

    2017-01-01

    The devastating North China drought in the summer of 2015 was roughly captured by a dynamical seasonal climate forecast model with a good prediction of the 2015/16 big El Niño. This raises the question of whether strong El Niños imply higher predictability of extreme droughts. Here we show that a strong El Niño does not necessarily result in an extreme drought; the outcome depends on whether the El Niño evolves synergistically with Eurasian spring snow cover reduction to trigger a positive summer Eurasian teleconnection (EU) pattern that favors anomalous northerly flow and subsiding air over North China. A dynamical forecast model that represents only the El Niño well underpredicts the drought severity, while a dynamical-statistical forecasting approach that combines both the low- and high-latitude precursors is more skillful at long lead times. In a warming future, the vanishing cryosphere should be better understood to improve the predictability of extreme droughts.

  2. Active shortening within the Himalayan orogenic wedge implied by the 2015 Gorkha earthquake

    Science.gov (United States)

    Whipple, Kelin X.; Shirzaei, Manoochehr; Hodges, Kip V.; Ramon Arrowsmith, J.

    2016-09-01

    Models of Himalayan neotectonics generally attribute active mountain building to slip on the Himalayan Sole Thrust, also termed the Main Himalayan Thrust, which accommodates underthrusting of the Indian Plate beneath Tibet. However, the geometry of the Himalayan Sole Thrust and thus how slip along it causes uplift of the High Himalaya are unclear. We show that the geodetic record of the 2015 Gorkha earthquake sequence significantly clarifies the architecture of the Himalayan Sole Thrust and suggests the need for revision of the canonical view of how the Himalaya grow. Inversion of Gorkha surface deformation reveals that the Himalayan Sole Thrust extends as a planar gently dipping fault surface at least 20-30 km north of the topographic front of the High Himalaya. This geometry implies that building of the high range cannot be attributed solely to slip along the Himalayan Sole Thrust over a steep ramp; instead, shortening within the Himalayan wedge is required to support the topography and maintain rapid rock uplift. Indeed, the earthquake sequence may have included a moderate rupture (Mw 6.9) on an out-of-sequence thrust fault at the foot of the High Himalaya. Such internal deformation is an expected response to sustained, focused rapid erosion, and may be common to most compressional orogens.

  3. Bound on the variation in the fine structure constant implied by Oklo data

    CERN Document Server

    Hamdan, Leila

    2015-01-01

    Dynamical models of dark energy can imply that the fine structure constant $\alpha$ varies over cosmological time scales. Data on shifts in resonance energies $E_r$ from the Oklo natural fission reactor have been used to place restrictive bounds on the change in $\alpha$ over the last 1.8 billion years. We review the uncertainties in these analyses, focussing on corrections to the standard estimate of $k_\alpha = \alpha\, dE_r/d\alpha$ due to Damour and Dyson. Guided, in part, by the best practice for assessing systematic errors in theoretical estimates spelt out by Dobaczewski et al. [in J. Phys. G: Nucl. Part. Phys. 41, 074001 (2014)], we compute these corrections in a variety of models tuned to reproduce existing nuclear data. Although the net correction is uncertain to within a factor of 2 or 3, it constitutes at most no more than 25% of the Damour-Dyson estimate of $k_\alpha$. Making similar allowances for the uncertainties in the modeling of the operation of the Oklo reactors, we conclude that the rela...

  4. Severe environmental effects of Chicxulub impact imply key role in end-Cretaceous mass extinction

    Science.gov (United States)

    Brugger, Julia; Feulner, Georg; Petri, Stefan

    2017-04-01

    66 million years ago, during the most recent of the five severe mass extinctions in Earth's history, non-avian dinosaurs and many other organisms became extinct. The cause of this end-Cretaceous mass extinction is seen in either flood-basalt eruptions or an asteroid impact. Modeling the climatic changes after the Chicxulub asteroid impact allows us to assess its contribution to the extinction event and to analyze the short-term and long-term response of the climate and the biosphere to the impact. Existing studies either investigated the effect of dust, which is now believed to play a minor role, or used one-dimensional, non-coupled models. In contrast, we use a coupled climate model to explore the longer-lasting cooling due to sulfate aerosols. Based on data from geophysical impact modeling, we set up simulations with different stratospheric residence times for sulfate aerosols. Depending on this residence time, global surface air temperature decreased by at least 26°C, with 3 to 16 years of subfreezing temperatures and a recovery time of more than 30 years. Vigorous ocean mixing, caused by the fast cooling of the surface ocean, might have perturbed marine ecosystems through the upwelling of nutrients. The dramatic climatic changes seen in our simulations imply severe environmental effects and therefore a significant contribution of the impact to the end-Cretaceous mass extinction.

  5. The LIM class homeobox gene lim5: implied role in CNS patterning in Xenopus and zebrafish.

    Science.gov (United States)

    Toyama, R; Curtiss, P E; Otani, H; Kimura, M; Dawid, I B; Taira, M

    1995-08-01

    LIM homeobox genes are characterized by encoding proteins in which two cysteine-rich LIM domains are associated with a homeodomain. We report the isolation of a gene, named Xlim-5 in Xenopus and lim5 in the zebrafish, that is highly similar in sequence but quite distinct in expression pattern from the previously described Xlim-1/lim1 gene. In both species studied the lim5 gene is expressed in the entire ectoderm in the early gastrula embryo. The Xlim-5 gene is activated in a cell autonomous manner in ectodermal cells, and this activation is suppressed by the mesoderm inducer activin. During neurulation, expression of the lim5 gene in both the frog and fish embryo is rapidly restricted to an anterior region in the developing neural plate/keel. In the 2-day Xenopus and 24-hr zebrafish embryo, this region becomes more sharply defined, forming a strongly lim5-expressing domain in the diencephalon anterior to the midbrain-forebrain boundary. In addition, regions of less intense lim5 expression are seen in the zebrafish embryo in parts of the telencephalon, in the anterior diencephalon coincident with the postoptic commissure, and in restricted regions of the midbrain, hindbrain, and spinal cord. Expression in ventral forebrain is abolished from the 5-somite stage onward in cyclops mutant fish. These results imply a role for lim5 in the patterning of the nervous system, in particular in the early specification of the diencephalon.

  6. Microlens OGLE-2005-BLG-169 Implies Cool Neptune-Like Planets are Common

    CERN Document Server

    Gould, A; Anderson, J; Bennett, D P; Bode, M F; Bond, I A; Botzler, C S; Bramich, D M; Burgdorf, M J; Christie, G W; De Poy, D L; Dong, S; Gaudi, B S; Han, C; Horne, K; Kubiak, M; Mao, S; McCormick, J; Paczynski, B; Park, B G; Pietrzynski, G; Pogge, R W; Poindexter, S; Rattenbury, N J; Snodgrass, C; Soszynski, I; Stanek, K Z; Steele, I A; Swaving, S C; Szewczyk, O; Szymanski, M K; Udalski, A; Ulaczyk, K; Wyrzykowski, L; Yock, P C M; Zhou, A Y

    2006-01-01

    We detect a Neptune mass-ratio (q~8e-5) planetary companion to the lens star in the extremely high-magnification (A~800) microlensing event OGLE-2005-BLG-169. If the parent is a main-sequence star, it has mass M~0.5 M_sun implying a planet mass of ~13 M_earth and projected separation of ~2.7 AU. When intensely monitored over their peak, high-magnification events similar to OGLE-2005-BLG-169 have nearly complete sensitivity to Neptune mass-ratio planets with projected separations of 0.6 to 1.6 Einstein radii, corresponding to 1.6--4.3 AU in the present case. Only two other such events were monitored well enough to detect Neptunes, and so this detection by itself suggests that Neptune mass-ratio planets are common. Moreover, another Neptune was recently discovered at a similar distance from its parent star in a low-magnification event, which are more common but are individually much less sensitive to planets. Combining the two detections yields 90% upper and lower frequency limits f=0.37^{+0.30}_{-0.21} over ju...

  7. Large-scale subduction of continental crust implied by India-Asia mass-balance calculation

    Science.gov (United States)

    Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.

    2016-11-01

    Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today--represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.

  8. The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex.

    Directory of Open Access Journals (Sweden)

    Joel Z Leibo

    2015-10-01

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system's optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute a signature for recognition that is invariant to transformations and discriminative. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly transforming objects, and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions, in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions.

  9. The chaotic hash function based on spatial expansion construction with controllable parameters

    Institute of Scientific and Technical Information of China (English)

    廖东; 王小敏; 张家树; 张文芳

    2012-01-01

    A novel chaotic one-way hash function based on a spatial expansion construction with controllable parameters is presented, combining the advantages of chaotic systems and parallel hash functions. In the proposed approach, the hash mode of each message block is determined by a chaotic dynamic parameter. The new method improves the security of the hash function while avoiding degradation of system performance. Theoretical and experimental results show that the proposed method offers high parallel performance, a nearly uniform distribution of hash values, and the desired diffusion and confusion properties.
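
    A minimal toy sketch of the general idea behind chaos-based hashing follows (our own construction, not the authors' algorithm; production designs avoid raw floating point because results can vary across platforms). Message bytes modulate the control parameter of a logistic map, and the final trajectory is quantized into digest bits.

        def chaotic_hash(message: bytes, rounds: int = 16) -> bytes:
            x, r = 0.5, 3.99                        # state and chaotic-regime parameter
            for b in message:
                r_eff = 3.99 - (b / 255.0) * 0.02   # each byte perturbs the parameter
                for _ in range(rounds):             # iterate to diffuse the input
                    x = r_eff * x * (1.0 - x)
            bits = []
            for _ in range(256):                    # squeeze 256 digest bits
                x = r * x * (1.0 - x)
                bits.append(1 if x > 0.5 else 0)
            out = bytearray()
            for i in range(0, 256, 8):              # pack bits into bytes
                byte = 0
                for bit in bits[i:i + 8]:
                    byte = (byte << 1) | bit
                out.append(byte)
            return bytes(out)

        print(chaotic_hash(b"hello").hex())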

  10. HASH Index on Non-primary Key Columns of EMS Real-time Database Based on Shared Memory

    Institute of Scientific and Technical Information of China (English)

    王瑾; 彭晖; 侯勇

    2011-01-01

    The real-time database is one of the cores of an energy management system (EMS); most real-time data processing is based on it. Introducing an index greatly optimizes real-time database search operations and improves performance. The search algorithm and implementation of the HASH index are described, a HASH index with double overflow areas is designed for "parent finds child" relational searches, and its data structure and search algorithm are presented. Analysis shows that the HASH index with double overflow areas is well suited to "parent finds child" searches, with high search efficiency.
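
    A simplified sketch of a "parent finds child" hash index follows (hypothetical layout, not the paper's shared-memory design; the double overflow areas are collapsed into per-slot lists). The point is that one parent key can map to many child rows and a lookup touches only one hash slot.

        PRIMARY_SLOTS = 8

        class HashIndex:
            def __init__(self):
                # slot -> list of (parent_key, child_row); a real implementation
                # would place fixed-size slots in shared memory and chain
                # overflow blocks from full slots
                self.primary = [[] for _ in range(PRIMARY_SLOTS)]

            def insert(self, parent_key, child_row):
                slot = hash(parent_key) % PRIMARY_SLOTS
                self.primary[slot].append((parent_key, child_row))

            def children_of(self, parent_key):
                slot = hash(parent_key) % PRIMARY_SLOTS
                return [row for key, row in self.primary[slot] if key == parent_key]

        idx = HashIndex()
        idx.insert("feeder-7", "breaker-1")
        idx.insert("feeder-7", "breaker-2")
        print(idx.children_of("feeder-7"))   # ['breaker-1', 'breaker-2']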

  11. Implementation of SCPI Parsing Mechanism Based on Hash in LXI Instruments

    Institute of Scientific and Technical Information of China (English)

    李智; 秦昌明; 张活

    2012-01-01

    On an ARM & Linux embedded platform, the sending of remote-control commands for an LXI instrument was implemented. The MPQ and Fibonacci one-way hash functions were applied to construct a hash table in which elements are distributed fairly uniformly. An open-hashing structure based on linked lists, together with a multi-value calibration collision-resolution policy, effectively reduces the possibility of collisions. Methods for parsing keyword nodes and parameters in SCPI commands are presented, including an example of LXI digitizer command parsing. This command-parsing mechanism has small time complexity, high parsing efficiency and good versatility.
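
    Fibonacci hashing, one of the two functions named above, spreads keys by multiplying with the golden-ratio constant; a sketch follows (the string-folding step and table size are our assumptions, and the MPQ function is not reproduced here).

        TABLE_BITS = 8                       # 2**8 slots
        GOLDEN = 2654435769                  # floor(2**32 / golden ratio)

        def fib_hash(word: str) -> int:
            k = 0
            for ch in word.upper():          # SCPI keywords are case-insensitive
                k = (k * 31 + ord(ch)) & 0xFFFFFFFF
            return ((k * GOLDEN) & 0xFFFFFFFF) >> (32 - TABLE_BITS)

        # open hashing: each slot holds a chain (a Python list stands in
        # for the linked list of keyword nodes)
        table = [[] for _ in range(1 << TABLE_BITS)]
        for kw in (":MEASure", ":VOLTage", ":FETCh?"):
            table[fib_hash(kw)].append(kw)
        print(fib_hash(":MEASure"))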

  12. An Improved Perceptual Hashing Algorithm for Images Based on Discrete Cosine Transform and Speeded-Up Robust Features

    Institute of Scientific and Technical Information of China (English)

    丁旭; 何建忠

    2014-01-01

    To address the poor robustness of global features and the high complexity of local features in image perceptual hashing algorithms, the author proposes an improved perceptual hashing algorithm based on DCT and SURF. Using DCT coefficients as global features and SURF descriptors as local features, the author gives the hashing functions for both, describes how the two features are fused, and presents the algorithm's application to image authentication. Experimental results show that the algorithm has good robustness and efficiency.
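
    The global (DCT) half of such a scheme is commonly realized as a pHash-style digest; a sketch follows (our rendering of the standard technique, not necessarily the author's exact encoding; the SURF local-feature half is omitted; numpy and scipy are assumed).

        import numpy as np
        from scipy.fft import dct

        def dct_phash(gray32):
            """64-bit perceptual hash of a 32x32 grayscale float image."""
            coeffs = dct(dct(gray32, axis=0, norm='ortho'), axis=1, norm='ortho')
            low = coeffs[:8, :8].flatten()    # keep the low-frequency block
            bits = low > np.median(low)       # threshold at the median
            h = 0
            for b in bits:
                h = (h << 1) | int(b)
            return h

        img = np.random.rand(32, 32)          # stand-in for a resized image
        print(hex(dct_phash(img)))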

  13. Upper-ocean-to-atmosphere radiocarbon offsets imply fast deglacial carbon dioxide release.

    Science.gov (United States)

    Rose, Kathryn A; Sikes, Elisabeth L; Guilderson, Thomas P; Shane, Phil; Hill, Tessa M; Zahn, Rainer; Spero, Howard J

    2010-08-26

    Radiocarbon in the atmosphere is regulated largely by ocean circulation, which controls the sequestration of carbon dioxide (CO2) in the deep sea through atmosphere-ocean carbon exchange. During the last glaciation, lower atmospheric CO2 levels were accompanied by increased atmospheric radiocarbon concentrations that have been attributed to greater storage of CO2 in a poorly ventilated abyssal ocean. The end of the ice age was marked by a rapid increase in atmospheric CO2 concentrations that coincided with reduced 14C/12C ratios (Δ14C) in the atmosphere, suggesting the release of very 'old' (14C-depleted) CO2 from the deep ocean to the atmosphere. Here we present radiocarbon records of surface and intermediate-depth waters from two sediment cores in the southwest Pacific and Southern oceans. We find a steady 170 per mil decrease in Δ14C that precedes and roughly equals in magnitude the decrease in the atmospheric radiocarbon signal during the early stages of the glacial-interglacial climatic transition. The atmospheric decrease in the radiocarbon signal coincides with regionally intensified upwelling and marine biological productivity, suggesting that CO2 released by means of deep water upwelling in the Southern Ocean lost most of its original depleted-14C imprint as a result of exchange and isotopic equilibration with the atmosphere. Our data imply that the deglacial 14C depletion previously identified in the eastern tropical North Pacific must have involved contributions from sources other than the previously suggested carbon release by way of a deep Southern Ocean pathway, and may reflect the expanded influence of the 14C-depleted North Pacific carbon reservoir across this interval. Accordingly, shallow water masses advecting north across the South Pacific in the early deglaciation had little or no residual 14C-depleted signal owing to degassing of CO2 and biological uptake in the Southern Ocean.

  14. Suppressed MMP-9 Activity in Myocardial Infarction-Related Cardiogenic Shock Implies Diminished RAGE Degradation.

    Science.gov (United States)

    Selejan, Simina-Ramona; Hewera, Lisa; Hohl, Matthias; Kazakov, Andrey; Ewen, Sebastian; Kindermann, Ingrid; Böhm, Michael; Link, Andreas

    2017-07-01

    Receptor for advanced glycation end products (RAGE) and its cleavage fragment, soluble RAGE (sRAGE), are opposing players in inflammation. Enhanced monocytic RAGE expression and decreased plasma sRAGE levels are associated with higher mortality in infarction-related cardiogenic shock. Active matrix metalloproteinase-9 (MMP-9) has been implicated in RAGE ectodomain cleavage and subsequent sRAGE shedding in vitro. We investigated MMP-9 activity in myocardial infarction-induced cardiogenic shock with regard to RAGE/sRAGE regulation. We determined MMP-9 serum activity by zymography and tissue inhibitor of matrix metalloproteinases (TIMP-1) expression by Western blot, and correlated them with RAGE/sRAGE data in patients with cardiogenic shock after acute myocardial infarction (CS, n = 30), in patients with acute myocardial infarction without shock (AMI, n = 20) and in healthy volunteers (n = 20). MMP-9 activity is increased in AMI (P = 0.02 versus controls) but significantly decreased in CS, with the lowest levels in non-survivors (n = 13, P = 0.02 versus AMI). In all patients, MMP-9 activity correlated inversely with RAGE expression on circulating monocytes (r = -0.57; P = 0.0001; n = 50). TIMP-1 levels showed the inverse regulation compared with active MMP-9, with significantly decreased levels in AMI (P = 0.02 versus controls) and the highest levels in non-survivors of CS. These findings are consistent with diminished RAGE degradation and sustained RAGE-induced deleterious inflammation in cardiogenic shock.

  15. Early members of 'living fossil' lineage imply later origin of modern ray-finned fishes.

    Science.gov (United States)

    Giles, Sam; Xu, Guang-Hui; Near, Thomas J; Friedman, Matt

    2017-08-30

    Modern ray-finned fishes (Actinopterygii) comprise half of extant vertebrate species and are widely thought to have originated before or near the end of the Middle Devonian epoch (around 385 million years ago). Polypterids (bichirs and ropefish) represent the earliest-diverging lineage of living actinopterygians, with almost all Palaeozoic taxa interpreted as more closely related to other extant actinopterygians than to polypterids. By contrast, the earliest material assigned to the polypterid lineage is mid-Cretaceous in age (around 100 million years old), implying a quarter-of-a-billion-year palaeontological gap. Here we show that scanilepiforms, a widely distributed radiation from the Triassic period (around 252-201 million years ago), are stem polypterids. Importantly, these fossils break the long polypterid branch and expose many supposedly primitive features of extant polypterids as reversals. This shifts numerous Palaeozoic ray-fins to the actinopterygian stem, reducing the minimum age for the crown lineage by roughly 45 million years. Recalibration of molecular clocks to exclude phylogenetically reassigned Palaeozoic taxa results in estimates that the actinopterygian crown lineage is about 20-40 million years younger than was indicated by previous molecular analyses. These new dates are broadly consistent with our revised palaeontological timescale and coincident with an interval of conspicuous morphological and taxonomic diversification among ray-fins centred on the Devonian-Carboniferous boundary. A shifting timescale, combined with ambiguity in the relationships of late Palaeozoic actinopterygians, highlights this part of the fossil record as a major frontier in understanding the evolutionary assembly of modern vertebrate diversity.

  16. A Message-Digest Model with Salt and Hidden Hash Characteristics

    Institute of Scientific and Technical Information of China (English)

    祝彦斌; 王春玲

    2013-01-01

    The situation of typical hash functions such as MD5 and SHA-1 is introduced, and the reasons for their continued popularity, as well as the problems with continuing to use them alone, are analysed. To address the security and usability problems that message-digest services face in practice, a message-digest model is designed that combines salting, encryption and several basic message-digest algorithms. The working principle, design and implementation details of the model are discussed systematically; a prototype is implemented in an OpenSSL environment, and its practical use in network communication is simulated. The results show that the model can hide the characteristics of the underlying hash functions and produce digests with stronger randomness and collision resistance. Finally, the form of the salt in the model is discussed in connection with how message digests are applied in various scenarios.
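
    The salting idea itself is standard; a minimal sketch follows (illustrative, not the paper's model, which also layers encryption on top): a random salt hashed together with the message makes equal messages produce different digests, masking the bare hash function's input-output characteristics.

        import hashlib, hmac, os

        def salted_digest(message: bytes):
            salt = os.urandom(16)                          # fresh random salt
            return salt, hashlib.sha256(salt + message).digest()

        def verify(message: bytes, salt: bytes, digest: bytes) -> bool:
            expected = hashlib.sha256(salt + message).digest()
            return hmac.compare_digest(expected, digest)   # constant-time compare

        salt, d = salted_digest(b"attack at dawn")
        print(verify(b"attack at dawn", salt, d))   # True
        print(verify(b"attack at noon", salt, d))   # False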

  17. Towards Eliminating Random I/O in Hybrid Hash Joins

    Institute of Scientific and Technical Information of China (English)

    刘明超; 杨良怀; 周为钢

    2013-01-01

    The hybrid hash join (HHJ) is one of the most widely used core join algorithms for query processing in a database management system. This paper proposes a buffer-optimized hybrid hash join algorithm (OHHJ) that reduces the random I/O in hash joins by optimizing the bucket buffer, i.e., by choosing the bucket buffer size in the partition phase so as to minimize random I/O. By quantitatively analysing the relationship between bucket size, bucket buffer size, available memory, relation size and the random-access characteristics of hard disks, we derive heuristics for allocating optimal bucket and bucket-buffer sizes. Experimental results demonstrate that OHHJ effectively reduces the random I/O generated during the partition phase of traditional HHJ, improving performance.
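
    The random-I/O lever the paper tunes can be sketched as follows (sizes are illustrative, not the derived optima): during partitioning, tuples destined for a bucket are collected in a per-bucket buffer and flushed in large sequential writes instead of one seek per tuple.

        N_BUCKETS = 4
        BUFFER_TUPLES = 1024                  # flush threshold per bucket buffer
        buffers = [[] for _ in range(N_BUCKETS)]

        def flush(b):
            # stands in for one large sequential write of the buffered run
            with open(f"bucket_{b}.part", "a") as f:
                f.writelines(f"{k}\t{v}\n" for k, v in buffers[b])
            buffers[b].clear()

        def partition(tuples):
            for key, payload in tuples:
                b = hash(key) % N_BUCKETS
                buffers[b].append((key, payload))
                if len(buffers[b]) >= BUFFER_TUPLES:
                    flush(b)                  # few big writes, few disk seeks
            for b in range(N_BUCKETS):        # drain what remains
                if buffers[b]:
                    flush(b)

        partition((i % 97, f"row{i}") for i in range(10000))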

  18. The Role of Implied Volatility in Forecasting Future Realized Volatility and Jumps in Foreign Exchange, Stock and Bond Markets

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper

    We study the forecasting of future realized volatility in the stock, bond, and foreign exchange markets, as well as the continuous sample path and jump components of this, from variables in the information set, including implied volatility backed out from option prices. Recent nonparametric statistical techniques are used to separate realized volatility into its continuous and jump components, thus enhancing forecasting performance as shown by Andersen et al. (2005). We generalize the heterogeneous autoregressive (HAR) model to include implied volatility as an additional regressor, and to the separate forecasting of the realized components. We also introduce a new vector HAR (VecHAR) model for the resulting simultaneous system, controlling for possible endogeneity issues in the forecasting equations. We show that implied volatility contains incremental information about future volatility.

  19. On the Sensitivity of Atmospheric Model Implied Ocean Heat Transport to the Dominant Terms of the Surface Energy Balance

    Energy Technology Data Exchange (ETDEWEB)

    Gleckler, P J

    2004-11-03

    The oceanic meridional heat transport (T_o) implied by an atmospheric general circulation model (GCM) can help evaluate a model's readiness for coupling with an ocean GCM. In this study we examine the T_o from benchmark experiments of the Atmospheric Model Intercomparison Project, and evaluate the sensitivity of T_o to the dominant terms of the surface energy balance. The implied global ocean T_o in the Southern Hemisphere of many models is equatorward, contrary to most observationally based estimates. By constructing a hybrid (model corrected by observations) T_o, an earlier study demonstrated that the implied heat transport is critically sensitive to the simulated shortwave cloud radiative effects, which have been argued to be principally responsible for the Southern Hemisphere problem. Systematic evaluation of one model in a later study suggested that the implied T_o could be equally sensitive to a model's ocean surface latent heat flux. In this study we revisit the problem with more recent simulations, making use of estimates of ocean surface fluxes to construct two additional hybrid calculations. The results of the present study demonstrate that the implied T_o of an atmospheric model is indeed very sensitive to problems in not only the surface net shortwave but also the latent heat flux. Many models underestimate the shortwave radiation reaching the surface in the low latitudes, and overestimate the latent heat flux in the same region. The additional hybrid transport calculations introduced here could become useful model diagnostic tests as estimates of implied ocean surface fluxes are improved.

  1. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    Science.gov (United States)

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared with actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.

  2. The PMC (Preparata, Metze and Chien) System Level Fault Model: Maximality Properties of the Implied Faulty Sets.

    Science.gov (United States)

    1983-05-15

    which no two modules test each other, and the number of faulty modules is smalL In this peper , we show that the implied faulty sets of one-step v...test outcoies Le., an outcome aij for emh (i J) in TID is called a on*~. The diagnosis problem cosists in partitioning S nto th set Gs of non faulty

  3. The Role of Implied Motion in Engaging Audiences for Health Promotion: Encouraging Naps on a College Campus

    Science.gov (United States)

    Mackert, Michael; Lazard, Allison; Guadagno, Marie; Hughes Wagner, Jessica

    2014-01-01

    Objective: Lack of sleep among college students negatively impacts health and academic outcomes. Building on research that implied motion imagery increases brain activity, this project tested visual design strategies to increase viewers' engagement with a health communication campaign promoting napping to improve sleep habits. Participants:…

  4. The Language of Love?--Verbal versus Implied Consent at First Heterosexual Intercourse: Implications for Contraceptive Use

    Science.gov (United States)

    Higgins, Jenny A.; Trussell, James; Moore, Nelwyn B.; Davidson, J. Kenneth, Sr.

    2010-01-01

    Background: Little is known about how young people communicate about initiating intercourse. Purpose: This study was designed to gauge the prevalence of implied versus verbal consent at first intercourse in a U.S. college population, assess effects of consent type on contraceptive use, and explore the influences of gender, race and other factors.…

  5. 16 CFR 303.40 - Use of terms in written advertisements that imply presence of a fiber.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Use of terms in written advertisements that... IDENTIFICATION ACT § 303.40 Use of terms in written advertisements that imply presence of a fiber. The use of terms in written advertisements, including advertisements disseminated through the Internet and...

  6. A bijection which implies Melzer's polynomial identities: the $\chi_{1,1}^{(p,p+1)}$ case

    CERN Document Server

    Foda, O E; Foda, O; Warnaar, S O

    1995-01-01

    We obtain a bijection between certain lattice paths and partitions. This implies a proof of polynomial identities conjectured by Melzer. In a limit, these identities reduce to Rogers-Ramanujan-type identities for the $\chi_{1,1}^{(p,p+1)}(q)$ Virasoro characters, conjectured by the Stony Brook group.

  7. 76 FR 60489 - Lynn E. Stevenson; Notice of Termination of License by Implied Surrender and Soliciting Comments...

    Science.gov (United States)

    2011-09-29

    ... Energy Regulatory Commission Lynn E. Stevenson; Notice of Termination of License by Implied Surrender and... No.: 8866-009. c. Date Initiated: September 23, 2011 (notice date). d. Licensee: Lynn E. Stevenson. e... Lynn E. Stevenson. The project has not operated since a downstream landslide in 1993 and has...

  8. Auto-Labeling of Hash (Scattered) Area Features

    Institute of Scientific and Technical Information of China (English)

    张志军; 李霖; 于忠海; 应申

    2011-01-01

    We focus on how to automatically place annotations for hash-style (scattered) area features and propose a new method with reference to the compilation specifications for "cartographic symbols for national fundamental scale maps" and Gestalt psychology. First, candidate positions are generated with a convex-hull grid method. They are then evaluated using the Gestalt factors that affect annotation quality. Finally, the globally optimal annotation location is determined by a "conflict" rule. Making effective use of the peripheral zone of the features is a notable merit of this method, which has been successfully used in producing topographic maps at 1:50 000 scale.

  9. A Threaded Binary Sorted Hash Tree Solution Scheme for the Certificate Revocation Problem

    Institute of Scientific and Technical Information of China (English)

    王尚平; 张亚玲; 王育民

    2001-01-01

    A new solution to the certificate revocation problem in public key infrastructure (PKI) is proposed: the certificate revocation threaded binary sorted hash tree (CRTBSHT). The main existing solutions to the certificate revocation problem are the certificate revocation list (CRL) of X.509 certificate systems, Micali's certificate revocation system (CRS), Kocher's certificate revocation tree (CRT), and Naor-Nissim's 2-3 certificate revocation tree (2-3 CRT), none of which is fully satisfactory. Building on the idea of the CRT system, and combining a threaded binary sorted tree with a hash tree, the new scheme inherits CRT's advantage that proving the status of a certificate (whether it has been revoked) does not require the whole tree, only the relevant partial paths, while overcoming CRT's drawback that an update requires reconstructing almost the entire tree: the new scheme only recomputes the values along the relevant partial paths on update. The scheme is of practical reference value for engineering implementations.
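
    The path-proof property the scheme inherits can be illustrated with a plain Merkle hash tree (a simplified stand-in: the paper's threaded binary sorted tree adds ordering threads and cheaper updates, which are not modeled here). Proving one leaf, e.g. a revoked certificate serial, needs only the sibling hashes on its root path.

        import hashlib

        def H(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def build(leaves):
            levels = [[H(l) for l in leaves]]
            while len(levels[-1]) > 1:
                prev = levels[-1]
                if len(prev) % 2:
                    prev = prev + [prev[-1]]     # duplicate last node if odd
                levels.append([H(prev[i] + prev[i + 1])
                               for i in range(0, len(prev), 2)])
            return levels

        def proof(levels, idx):
            path = []
            for level in levels[:-1]:
                if len(level) % 2:
                    level = level + [level[-1]]
                # sibling hash, plus whether our node is the right child
                path.append((level[idx ^ 1], idx % 2))
                idx //= 2
            return path

        def verify(leaf, path, root):
            h = H(leaf)
            for sib, node_is_right in path:
                h = H(sib + h) if node_is_right else H(h + sib)
            return h == root

        serials = [b"1001", b"1002", b"1003", b"1004"]  # revoked serials
        levels = build(serials)
        root = levels[-1][0]
        print(verify(b"1003", proof(levels, 2), root))  # True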

  10. Dynamic Estimation of Volatility Risk Premia and Investor Risk Aversion from Option-Implied and Realized Volatilities

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Gibson, Michael; Zhou, Hao

    This paper proposes a method for constructing a volatility risk premium, or investor risk aversion, index. The method is intuitive and simple to implement, relying on the sample moments of the recently popularized model-free realized and option-implied volatility measures. A small-scale Monte Carlo experiment confirms that the procedure works well in practice. Implementing the procedure with actual S&P500 option-implied volatilities and high-frequency five-minute-based realized volatilities indicates significant temporal dependencies in the estimated stochastic volatility risk premium, which we in turn relate to a set of macro-finance state variables. We also find that the extracted volatility risk premium helps predict future stock market returns.

  12. Does experience imply learning?

    NARCIS (Netherlands)

    Anand, Jaideep; Mulotte, Louis; Ren, Charlotte R.

    Research traditionally uses experiential learning arguments to explain the existence of a positive relationship between repetition of an activity and performance. We propose an additional interpretation of this relationship in the context of discrete corporate development activities. We argue that

  13. Efficient 4-round zero-knowledge proof system for NP

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A 4-round zero-knowledge interactive proof system for NP (non-deterministic polynomial time) is presented, assuming the existence of one-way permutations and collision-free hash functions. This construction is more efficient than the original construction of a 5-round zero-knowledge proof system for NP. The critical tools used in this paper are: zaps, a hash-based commitment scheme, and non-interactive zero-knowledge.
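
    Of these tools, the hash-based commitment scheme is the simplest to sketch (illustrative only: binding rests on collision resistance, while hiding is heuristic for a bare hash, so real constructions argue it more carefully).

        import hashlib, hmac, os

        def commit(m: bytes):
            r = os.urandom(32)                        # opening randomness
            return hashlib.sha256(r + m).digest(), r  # (commitment, opening)

        def open_commit(c: bytes, m: bytes, r: bytes) -> bool:
            return hmac.compare_digest(hashlib.sha256(r + m).digest(), c)

        c, r = commit(b"witness bit = 1")
        print(open_commit(c, b"witness bit = 1", r))  # True
        print(open_commit(c, b"witness bit = 0", r))  # False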

  14. Expression, Purification and DNA Binding Activity of Human Transcription Factor hASH4

    Institute of Scientific and Technical Information of China (English)

    苏琢磊; 楼田甜; 王远东; 季朝能

    2014-01-01

    hASH4 is a member of the helix-loop-helix (HLH) proteins, an important group of transcription factors with a determinative influence on cell proliferation, determination and differentiation from yeast to human. hASH4 has been reported to be closely related to skin differentiation and development, but the exact mechanism is unknown. In this study, the expression plasmid pET28b-his-hASH4 was constructed and successfully expressed in BL21(DE3). After optimizing the temperature, time and IPTG concentration, we found that induction with 1 mmol/L IPTG for 4 h at 37°C gives the best expression, and we obtained electrophoretically pure target protein by Ni-NTA affinity chromatography and cation-exchange chromatography. Non-radioactive EMSA experiments between DNA and protein showed that the hASH4 monomer has only non-specific, not sequence-specific, DNA-binding activity. To act as a transcription factor in vivo, hASH4 may need to form a heterodimer or multimer in order to bind DNA specifically and act on downstream genes. This study provides clues to the in vivo function of hASH4 and lays the foundation for further crystallization screening, structural analysis and functional studies.

  15. Small-Maturity Asymptotics for the At-The-Money Implied Volatility Slope in Lévy Models

    Science.gov (United States)

    Gerhold, Stefan; Gülüm, I. Cetin; Pinter, Arpad

    2016-01-01

    We consider the at-the-money (ATM) strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the behaviour of the slope for infinite activity exponential Lévy models including a Brownian component. As auxiliary results, we obtain asymptotic expansions of short maturity ATM digital call options, using Mellin transform asymptotics. Finally, we discuss when the ATM slope is consistent with the steepness of the smile wings, as given by Lee's moment formula. PMID:27660537

  16. Packaging signals in two single-stranded RNA viruses imply a conserved assembly mechanism and geometry of the packaged genome.

    Science.gov (United States)

    Dykeman, Eric C; Stockley, Peter G; Twarock, Reidun

    2013-09-09

    The current paradigm for assembly of single-stranded RNA viruses is based on a mechanism involving non-sequence-specific packaging of genomic RNA driven by electrostatic interactions. Recent experiments, however, provide compelling evidence for sequence specificity in this process both in vitro and in vivo. The existence of multiple RNA packaging signals (PSs) within viral genomes has been proposed, which facilitates assembly by binding coat proteins in such a way that they promote the protein-protein contacts needed to build the capsid. The binding energy from these interactions enables the confinement or compaction of the genomic RNAs. Identifying the nature of such PSs is crucial for a full understanding of assembly, which is an as yet untapped potential drug target for this important class of pathogens. Here, for two related bacterial viruses, we determine the sequences and locations of their PSs using Hamiltonian paths, a concept from graph theory, in combination with bioinformatics and structural studies. Their PSs have a common secondary structure motif but distinct consensus sequences and positions within the respective genomes. Despite these differences, the distributions of PSs in both viruses imply defined conformations for the packaged RNA genomes in contact with the protein shell in the capsid, consistent with a recent asymmetric structure determination of the MS2 virion. The PS distributions identified moreover imply a preferred, evolutionarily conserved assembly pathway with respect to the RNA sequence with potentially profound implications for other single-stranded RNA viruses known to have RNA PSs, including many animal and human pathogens.

  17. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    Energy Technology Data Exchange (ETDEWEB)

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence; Keyser, David

    2017-03-01

    Using Colorado as a case study, we develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity. We focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado.

  18. Video hash learning based on feature fusion and Manhattan quantization

    Institute of Scientific and Technical Information of China (English)

    聂秀山; 王舒婷; 尹义龙

    2016-01-01

    With the development of computer and multimedia technologies, video storage, transmission and retrieval face a huge challenge on the Internet, especially the mobile Internet, owing to the complex structure and high dimensionality of video. Video hash learning is one of the important ways to address this challenge and has become a hot topic in multimedia processing. Existing methods generate video hashes from different types of features, yet there are potential relationships among these feature types. To make full use of these relationships and overcome the limitations of traditional video hashing methods, we propose a video hash learning method based on feature fusion and Manhattan quantization. Global, local and temporal features are first extracted from the video content, and the video clip is treated as a third-order tensor. Tensor decomposition, widely applied in multi-dimensional data processing, is then used to fuse the three kinds of features: the three low-order tensors obtained from the decomposition are concatenated as the fused representation of the video content. The fused feature is subsequently quantized with Manhattan quantization to obtain the hash codes that form the final video hash. Compared with traditional video hashing methods, the proposed method not only exploits the relationships among different video features but also encodes different dimensions separately, which preserves the structural similarity among features. Two kinds of experiments show that the proposed method performs well compared with existing methods.
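
    The quantization step can be sketched in the spirit of Manhattan hashing (our reading; details surely differ in the paper): each fused-feature dimension is cut into 2^B quantile bins, bins receive natural binary codes, and codes are compared by Manhattan distance on the decoded bin indices rather than by Hamming distance.

        import numpy as np

        B = 2                                    # bits per dimension -> 4 bins

        def fit_thresholds(features):
            qs = [100.0 * k / (1 << B) for k in range(1, 1 << B)]
            return np.percentile(features, qs, axis=0)   # (2**B - 1, dim)

        def quantize(x, thresholds):
            # bin index per dimension = number of thresholds the value exceeds
            return (x[None, :] > thresholds).sum(axis=0)

        def manhattan(code_a, code_b):
            return int(np.abs(code_a - code_b).sum())

        feats = np.random.randn(1000, 16)        # stand-in for fused features
        th = fit_thresholds(feats)
        a, b = quantize(feats[0], th), quantize(feats[1], th)
        print(manhattan(a, b))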

  19. Hash-Based Secure Simple Pairing for Preventing Man-in-the-Middle Attacks in Mobile Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    陈凯; 许海铭; 徐震; 林东岱; 刘勇

    2016-01-01

    Bluetooth Low Energy (BLE) is designed for devices with computational and power limitations, but it has been confirmed that Secure Simple Pairing (SSP) is vulnerable to man-in-the-middle (MITM) attacks. We identify the root causes of the problem: the pairing messages can be tampered with, and the Just Works (JW) model is itself vulnerable. In this paper, we propose two hash-based SSP schemes for devices in mobile cloud computing (MCC); the proposed schemes enhance SSP security with the help of MCC. Scheme I applies to devices that support the PE or OOB model and uses a hash function to ensure the authenticity and integrity of the pairing messages. Scheme II is suitable for devices that support only the JW model and improves its security through a hash array. Finally, we examine the performance of the proposed schemes and perform a security analysis to show that they provide MITM protection against adversaries with different levels of power.
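
    The generic fix both schemes build on, authenticating the pairing exchange with a keyed hash so a man in the middle cannot substitute keys or nonces unnoticed, can be sketched as follows (names are ours; the shared key is assumed to be provisioned through the cloud, which is the MCC role in the paper).

        import hashlib, hmac, os

        k = os.urandom(32)                        # cloud-provisioned pairing key

        def tag(pubkey: bytes, nonce: bytes) -> bytes:
            return hmac.new(k, pubkey + nonce, hashlib.sha256).digest()

        def check(pubkey: bytes, nonce: bytes, t: bytes) -> bool:
            return hmac.compare_digest(tag(pubkey, nonce), t)

        pk, n = os.urandom(32), os.urandom(16)    # a device's public key + nonce
        t = tag(pk, n)
        print(check(pk, n, t))                    # True: message accepted
        print(check(os.urandom(32), n, t))        # False: substitution detected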

  20. Design and Implementation of a Distributed File System Based on the Hash Rule

    Institute of Scientific and Technical Information of China (English)

    段翰聪; 梅玫; 李林

    2013-01-01

    With the increase in the amount of data around the world, distributed file systems have gradually become a key focus of storage research. This paper designs and implements a distributed file system based on a hash rule. It is an extensible cluster file system that provides a two-level hash rule: the rule offers a file-location service for clients, and its pseudorandom nature distributes files uniformly across servers, thereby contributing to load balancing. A bucket is the entity the file system uses to manage file data and metadata; the independence and flexibility of buckets make them easy to migrate, which increases the system's scalability. The system also uses a lazy mechanism to balance performance against replica availability. We introduce the system architecture first, then describe the key techniques and representative workflows, and finally present a performance test that demonstrates the system's efficiency.
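
    A two-level placement rule of this general shape can be sketched as follows (our illustration, not the paper's exact rule): level one hashes a file path to one of a fixed number of buckets, level two maps each bucket to a server, so clients locate files without a metadata server and scale-out moves whole buckets rather than rehashing files.

        import hashlib

        N_BUCKETS = 64                            # fixed, independent of servers
        servers = ["node-a", "node-b", "node-c"]

        def bucket_of(path: str) -> int:
            h = hashlib.md5(path.encode()).digest()
            return int.from_bytes(h[:4], "big") % N_BUCKETS

        # level two: bucket -> server; migrating a bucket only updates this map
        bucket_map = {b: servers[b % len(servers)] for b in range(N_BUCKETS)}

        def locate(path: str) -> str:
            return bucket_map[bucket_of(path)]

        print(locate("/video/2013/demo.mp4"))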