WorldWideScience

Sample records for imply collision-free hash

  1. Incremental cryptography and security of public hash functions ...

    African Journals Online (AJOL)

An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient “update” algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the “replacement request” (j, m), and outputs ...
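The “update” idea above can be sketched as follows. This is a toy illustration only: per-block hashes are combined by modular addition, in the spirit of AdHash-style incremental schemes, and all names here are hypothetical rather than the paper's actual construction.

```python
import hashlib

MOD = 2**256  # per-block hashes are combined modulo a large constant

def block_hash(index: int, block: bytes) -> int:
    # Hash the block together with its position so that reordering blocks
    # changes the result.
    h = hashlib.sha256(index.to_bytes(8, "big") + block).digest()
    return int.from_bytes(h, "big")

def full_hash(blocks: list[bytes]) -> int:
    # Sum of per-block hashes: the whole point is that this sum is updatable.
    return sum(block_hash(i, b) for i, b in enumerate(blocks)) % MOD

def update_hash(h: int, j: int, old_block: bytes, new_block: bytes) -> int:
    # Process a replacement request (j, m) without rehashing the message:
    # subtract the old block's contribution and add the new one.
    return (h - block_hash(j, old_block) + block_hash(j, new_block)) % MOD
```

Recomputing from scratch after replacing block j agrees with the constant-time update, which is the efficiency property the record describes.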

  2. Cache-Oblivious Hashing

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Wei, Zhewei; Yi, Ke

    2014-01-01

The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q = 1...

  3. Welding Robot Collision-Free Path Optimization

    Directory of Open Access Journals (Sweden)

    Xuewu Wang

    2017-02-01

A reasonable welding path has a significant impact on welding efficiency, and a collision-free path should be considered first in the process of welding robot path planning. The shortest path length is considered as the optimization objective, and obstacle avoidance is considered as the constraint condition in this paper. First, a grid method is used as the modeling method after the optimization objective is analyzed. For local collision-free path planning, an ant colony algorithm is selected as the search strategy. Then, to overcome the shortcomings of the ant colony algorithm, a secondary optimization is presented to improve the optimization performance. Finally, the particle swarm optimization algorithm is used to realize global path planning. Simulation results show that the desired welding path can be obtained based on the optimization strategy.
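The grid-modeling step can be illustrated with a minimal collision-free planner. As a sketch, a plain breadth-first search stands in for the paper's ant colony and particle swarm stages, and the grid encoding (0 = free cell, 1 = obstacle) is an assumption.

```python
from collections import deque

def grid_path(grid, start, goal):
    # Breadth-first search on a 4-connected grid. BFS returns a shortest
    # collision-free path in steps; cells marked 1 are obstacles.
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:      # walk parents back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no collision-free path exists
```

On a 2x2 grid with one obstacle, the planner routes around it; if obstacles seal off the goal, it reports that no path exists.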

  4. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.

  5. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.

  6. Architecture-Conscious Hashing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); S. Héman (Sándor); P.A. Boncz (Peter)

    2006-01-01

Hashing is one of the fundamental techniques used to implement query processing operators such as grouping, aggregation and join. This paper studies the interaction between modern computer architecture and hash-based query processing techniques. First, we focus on extracting maximum

  7. The Grindahl Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Rechberger, Christian; Thomsen, Søren Steffen

    2007-01-01

    to the state. We propose two concrete hash functions, Grindahl-256 and Grindahl-512 with claimed security levels with respect to collision, preimage and second preimage attacks of 2^128 and 2^256, respectively. Both proposals have lower memory requirements than other hash functions at comparable speeds...

  8. Hashing for Statistics over K-Partitions

    DEFF Research Database (Denmark)

    Dahlgaard, Soren; Knudsen, Mathias Baek Tejs; Rotenberg, Eva

    2015-01-01

    In this paper we analyze a hash function for k-partitioning a set into bins, obtaining strong concentration bounds for standard algorithms combining statistics from each bin. This generic method was originally introduced by Flajolet and Martin [FOCS'83] in order to save a factor Ω(k) of time per...... concentration bounds on the most popular applications of k-partitioning similar to those we would get using a truly random hash function. The analysis is very involved and implies several new results of independent interest for both simple and double tabulation, e.g. A simple and efficient construction...

  9. The hash function BLAKE

    CERN Document Server

    Aumasson, Jean-Philippe; Phan, Raphael; Henzen, Luca

    2014-01-01

    This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation.   In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA3 competition, and BLAKE. They review applications of cryptographic hashing, they describe some basic notions such as security de

  10. Cryptographic quantum hashing

    Science.gov (United States)

    Ablayev, F. M.; Vasiliev, A. V.

    2014-02-01

    We present a version of quantum hash functions based on non-binary discrete functions. The proposed quantum procedure is ‘classical-quantum’, that is, it takes a classical bit string as an input and produces a quantum state. The resulting function has the property of a one-way function (pre-image resistance); in addition it has properties analogous to classical cryptographic hash second pre-image resistance and collision resistance. We also show that the proposed function can be naturally used in a quantum digital signature protocol.

  11. Cryptographic quantum hashing

    International Nuclear Information System (INIS)

    Ablayev, F M; Vasiliev, A V

    2014-01-01

    We present a version of quantum hash functions based on non-binary discrete functions. The proposed quantum procedure is ‘classical-quantum’, that is, it takes a classical bit string as an input and produces a quantum state. The resulting function has the property of a one-way function (pre-image resistance); in addition it has properties analogous to classical cryptographic hash second pre-image resistance and collision resistance. We also show that the proposed function can be naturally used in a quantum digital signature protocol. (letter)

  12. Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2010-01-01

    functions, also called message authentication codes (MACs) serve data integrity and data origin authentication in the secret key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic or from scratch. The design principles of the popular Merkle...

  13. Proposals for Iterated Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against...... some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...

  14. Proposals for iterated hash functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2006-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against...... some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...

  15. FSH: fast spaced seed hashing exploiting adjacent hashes.

    Science.gov (United States)

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be rapidly computed by exploiting the large overlap between consecutive k-mers, spaced seed hashing is usually computed from scratch for each position in the input sequence, thus resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup, with respect to the traditional approach, of between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced seed hashing is a routine task for several bioinformatics applications. FSH allows this task to be performed efficiently and raises the question of whether other hashing computations can be exploited for further speedup. This has the potential of major impact in the field, making spaced seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
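For reference, the quantity being computed can be sketched as a naive baseline, i.e., the from-scratch hashing that FSH accelerates; the adjacent-hash reuse itself is not reproduced here, and the 2-bit DNA encoding is an assumption.

```python
# 2-bit encoding of DNA symbols (an assumption for this sketch)
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

def spaced_seed_hashes(seq: str, seed: str) -> list[int]:
    # seed is a pattern like "1101": '1' marks care positions whose
    # symbols enter the hash, '0' marks wildcard (don't-care) positions.
    care = [i for i, c in enumerate(seed) if c == "1"]
    hashes = []
    for pos in range(len(seq) - len(seed) + 1):
        h = 0
        for i in care:                 # pack 2 bits per care position
            h = (h << 2) | ENC[seq[pos + i]]
        hashes.append(h)               # recomputed from scratch each time
    return hashes
```

With an all-ones seed this degenerates to ordinary k-mer hashing, where consecutive windows overlap heavily; FSH's contribution is to recover an analogous reuse for seeds containing wildcards.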

  16. Spongent: A lightweight hash function

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knežević, Miroslav; Leander, Gregor

    2011-01-01

    This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations...

  17. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2010-01-01

    We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimag...

  18. On hash functions using checksums

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John; Knudsen, Lars Ramkilde

    2008-01-01

    We analyse the security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimag...

  19. Ranking Based Locality Sensitive Hashing Enabled Cancelable Biometrics: Index-of-Max Hashing

    OpenAIRE

    Jin, Zhe; Lai, Yen-Lung; Hwang, Jung-Yeon; Kim, Soohyung; Teoh, Andrew Beng Jin

    2017-01-01

In this paper, we propose a ranking-based locality sensitive hashing inspired two-factor cancelable biometrics, dubbed "Index-of-Max" (IoM) hashing, for biometric template protection. With externally generated random parameters, IoM hashing transforms a real-valued biometric feature vector into a discrete index (max ranked) hashed code. We demonstrate two realizations of the IoM hashing notion, namely Gaussian Random Projection based and Uniformly Random Permutation based hashing schemes. The disc...

  20. DEVELOPMENT AND IMPLEMENTATION OF HASH FUNCTION FOR GENERATING HASHED MESSAGE

    Directory of Open Access Journals (Sweden)

    Amir Ghaeedi

    2016-09-01

Steganography is a method of sending confidential information in a way that the existence of the channel in this communication remains secret. A collaborative approach between steganography and digital signatures provides highly secure hidden data. Unfortunately, there is a wide variety of attacks that affect the quality of image steganography. Two issues that need to be addressed are the large size of the ciphered data in the digital signature and high bandwidth. The aim of the research is to propose a new method for producing a dynamic hashed message algorithm in a digital signature, which is then embedded into an image to enhance the robustness of image steganography with reduced bandwidth. A digital signature with a smaller hash size than other hash algorithms was developed for authentication purposes. A hash function is used in the digital signature generation. The encoder function encodes the hashed message to generate the digital signature, which is then embedded into an image as a stego-image. To enhance the robustness of the digital signature, we compressed it, encoded it, or performed both operations before embedding the data into the image. This encryption algorithm is also computationally efficient: for messages smaller than 1600 bytes, the hashing reduced the original file size by up to 8.51%.

  1. The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases

    Directory of Open Access Journals (Sweden)

    A.T. Akinwale

    2009-05-01

In this work, an attempt is made to select three good hash functions which uniformly distribute hash values, permute their internal states, and allow the input bits to generate different output bits. These functions are used in different levels of hash tables that are coded in the Java programming language, and a sizeable number of data records serves as primary data for testing the performance. The results show that two-level hash tables with three different hash functions give superior performance over a one-level hash table with two hash functions, or a zero-level hash table with one function, in terms of reducing key conflicts and enabling quick lookup of a particular element. The results help reduce the complexity of the join operation in a query language from O(n²) to O(1) by placing larger query results, if any, in multilevel hash tables with multiple hash functions, generating shorter query results.
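The multilevel idea can be sketched as a toy two-level table; the class and its two hash functions are illustrative choices, not the functions selected in the paper.

```python
class TwoLevelHashTable:
    # Toy sketch: each level uses its own hash function. A key that
    # collides at level 0 is retried at level 1, then spills to overflow.
    def __init__(self, size: int = 101):
        self.levels = [dict(), dict()]   # slot -> (key, value)
        self.overflow = {}
        # two independent hash functions (illustrative choices)
        self.fns = [lambda k: hash(k) % size,
                    lambda k: hash((k, 0x9E3779B9)) % size]

    def put(self, key, value):
        for level, fn in zip(self.levels, self.fns):
            slot = fn(key)
            if slot not in level or level[slot][0] == key:
                level[slot] = (key, value)
                return
        self.overflow[key] = value       # both levels collided

    def get(self, key):
        for level, fn in zip(self.levels, self.fns):
            entry = level.get(fn(key))
            if entry and entry[0] == key:
                return entry[1]
        return self.overflow.get(key)
```

A second, differently-seeded hash function makes it unlikely that two keys collide at both levels, which is the conflict-reduction effect the record measures.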

  2. Forensic hash for multimedia information

    Science.gov (United States)

    Lu, Wenjun; Varna, Avinash L.; Wu, Min

    2010-01-01

    Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.

  3. Robust visual hashing via ICA

    International Nuclear Information System (INIS)

    Fournel, Thierry; Coltuc, Daniela

    2010-01-01

    Designed to maximize information transmission in the presence of noise, independent component analysis (ICA) could appear in certain circumstances as a statistics-based tool for robust visual hashing. Several ICA-based scenarios can attempt to reach this goal. A first one is here considered.

  4. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes. PMID:26712766

  5. Collision-free gases in spatially homogeneous space-times

    International Nuclear Information System (INIS)

    Maartens, R.; Maharaj, S.D.

    1985-01-01

The kinematical and dynamical properties of one-component collision-free gases in spatially homogeneous, locally rotationally symmetric (LRS) space-times are analyzed. Following Ray and Zimmerman [Nuovo Cimento B 42, 183 (1977)], it is assumed that the distribution function f of the gas inherits the symmetry of space-time, in order to construct solutions of Liouville's equation. The redundancy of their further assumption that f be based on Killing vector constants of the motion is shown. The Ray and Zimmerman results for Kantowski-Sachs space-time are extended to all spatially homogeneous LRS space-times. It is shown that in all these space-times the kinematic average four-velocity u^i can be tilted relative to the homogeneous hypersurfaces. This differs from the perfect fluid case, in which only one space-time admits tilted u^i, as shown by King and Ellis [Commun. Math. Phys. 31, 209 (1973)]. As a consequence, it is shown that all space-times admit nonzero acceleration and heat flow, while a subclass admits nonzero vorticity. The stress π_ij is proportional to the shear σ_ij by virtue of the invariance of the distribution function. The evolution of tilt and the existence of perfect fluid solutions is also discussed

  6. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes.

  7. Structure Sensitive Hashing With Adaptive Product Quantization.

    Science.gov (United States)

    Liu, Xianglong; Du, Bowen; Deng, Cheng; Liu, Ming; Lang, Bo

    2016-10-01

Hashing has proved an attractive solution to approximate nearest neighbor search, owing to its theoretical guarantees and computational efficiency. Though most prior hashing algorithms can achieve low memory and computation consumption by pursuing compact hash codes, they are still far from capable of learning discriminative hash functions from data with complex inherent structure. To address this issue, in this paper we propose a structure sensitive hashing based on cluster prototypes, which explicitly exploits both global and local structures. An alternating optimization algorithm, minimizing the quantization loss and the spectral embedding loss respectively, is presented to simultaneously discover the cluster prototypes for each hash function and optimally assign unique binary codes to them satisfying the affinity alignment between them. For hash codes of a desired length, an adaptive bit assignment is further appended to the product quantization of the subspaces, approximating the Hamming distances while balancing the variance among hash functions. Experimental results on four large-scale benchmarks, CIFAR-10, NUS-WIDE, SIFT1M, and GIST1M, demonstrate that our approach significantly outperforms state-of-the-art hashing methods in terms of semantic and metric neighbor search.

  8. Cryptanalysis of Tav-128 hash function

    DEFF Research Database (Denmark)

    Kumar, Ashish; Sanadhya, Somitra Kumar; Gauravaram, Praveen

    2010-01-01

Many RFID protocols use cryptographic hash functions for their security. The resource-constrained nature of RFID systems forces the use of lightweight cryptographic algorithms. Tav-128 is one such 128-bit lightweight hash function proposed by Peris-Lopez et al. for a low-cost RFID tag authentic...

  9. Hash3: Proofs, Analysis and Implementation

    DEFF Research Database (Denmark)

    Gauravaram, Praveen

    2009-01-01

This report outlines the talks presented at the winter school on Hash3: Proofs, Analysis, and Implementation, an ECRYPT II event on hash functions. In general, speakers may not write everything they say on the slides. So, this report also outlines such findings following the understanding of t...

  10. Fast and powerful hashing using tabulation

    DEFF Research Database (Denmark)

    Thorup, Mikkel

    2017-01-01

Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here, we survey recent results on how simple hashing schemes based on tabulation provide unexpectedly strong......, linear probing and Cuckoo hashing. Next, we consider twisted tabulation where one input character is "twisted" in a simple way. The resulting hash function has powerful distributional properties: Chernoff-style tail bounds and a very small bias for minwise hashing. This also yields an extremely fast...... pseudorandom number generator that is provably good for many classic randomized algorithms and data-structures. Finally, we consider double tabulation where we compose two simple tabulation functions, applying one to the output of the other, and show that this yields very high independence in the classic...
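Simple tabulation, the building block surveyed above, can be sketched in a few lines: split the key into characters (here bytes) and XOR together one random-table lookup per character. The 32-bit key size and the seeding are illustrative choices.

```python
import random

def make_simple_tabulation(seed: int = 0):
    rng = random.Random(seed)
    # One table of 256 random 32-bit words per byte of the key.
    tables = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]

    def h(x: int) -> int:
        # XOR together one table lookup per byte of the 32-bit key.
        return (tables[0][x & 0xFF]
                ^ tables[1][(x >> 8) & 0xFF]
                ^ tables[2][(x >> 16) & 0xFF]
                ^ tables[3][(x >> 24) & 0xFF])

    return h
```

The scheme is only 3-independent, yet, as the survey explains, it behaves like a far stronger hash function in applications such as linear probing and Cuckoo hashing.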

  11. A model of quantum communication device for quantum hashing

    International Nuclear Information System (INIS)

    Vasiliev, A

    2016-01-01

In this paper we consider a model of quantum communication between classical computers aided by quantum processors, connected by a classical and a quantum channel. This type of communication, involving both classical and quantum messages with moderate use of quantum processing, is implicitly used in many quantum protocols, such as quantum key distribution or quantum digital signatures. We show that using the model of a quantum processor on multiatomic ensembles in a common QED cavity we can speed up quantum hashing, which can be the basis of quantum digital signatures and other communication protocols. (paper)

  12. Multiview alignment hashing for efficient image search.

    Science.gov (United States)

    Liu, Li; Yu, Mengyang; Shao, Ling

    2015-03-01

    Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces by embedding high-dimensional feature descriptors into a similarity preserving Hamming space with a low dimension. For most hashing methods, the performance of retrieval heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when it is used for hashing. Thus, how to combine multiple representations for learning effective hashing functions is an imminent task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics and simultaneously respecting the joint probability distribution of data. In particular, we aim to seek a matrix factorization to effectively fuse the multiple information sources meanwhile discarding the feature redundancy. Since the raised problem is regarded as nonconvex and discrete, our objective function is then optimized via an alternate way with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.

  13. Robust hashing for 3D models

    Science.gov (United States)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

3D models and applications are of utmost interest in both science and industry. As their usage grows, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); a probability calculation of hash collisions is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.

  14. Hash function based on chaotic map lattices.

    Science.gov (United States)

    Wang, Shihong; Hu, Gang

    2007-06-01

    A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
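The general design pattern, iterating a chaotic map over message-perturbed state and extracting bits, can be illustrated with a toy single-map sketch. This is not the paper's coupled-lattice system and is emphatically not secure; it only shows how floating-point chaos and simple algebraic operations combine.

```python
def chaotic_hash(message: bytes, rounds: int = 16) -> int:
    # Toy illustration: inject each message byte into the state, iterate
    # the logistic map x -> r*x*(1-x) in its chaotic regime (r = 3.99),
    # and extract 8 bits of the state per input byte.
    r, x, out = 3.99, 0.5, 0
    for b in message:
        x = (x + (b + 1) / 257.0) % 1.0
        if x == 0.0:
            x = 0.1  # keep the map away from its fixed point at 0
        for _ in range(rounds):
            x = r * x * (1.0 - x)
        out = (out << 8) | (int(x * 256) % 256)
    return out
```

The sensitivity of the chaotic iteration to tiny perturbations of x is what provides the bit confusion and diffusion the abstract refers to.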

  15. Authenticated hash tables

    DEFF Research Database (Denmark)

    Triandopoulos, Nikolaos; Papamanthou, Charalampos; Tamassia, Roberto

    2008-01-01

Hash tables are fundamental data structures that optimally answer membership queries. Suppose a client stores n elements in a hash table that is outsourced to a remote server so that the client can save space or achieve load balancing. Authenticating the hash table functionality, i.e., verifying...... to a scheme that achieves different trade-offs, namely, constant update time and O(n^ε/log^{κε} n) query time for fixed ε > 0 and κ > 0. An experimental evaluation of our solution shows very good scalability.

  16. Cracking PwdHash: A Bruteforce Attack on Client-side Password Hashing

    OpenAIRE

    Llewellyn-Jones, David; Rymer, Graham Matthew

    2017-01-01

    PwdHash is a widely-used tool for client-side password hashing. Originally released as a browser extension, it replaces the user’s password with a hash that combines both the password and the website’s domain. As a result, while the user only remembers a single secret, the passwords received are all unique for each site. We demonstrate how the hashcat password recovery tool can be extended to allow passwords generated using PwdHash to be identified and recovered, revealing the user’s master p...
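The core construction can be sketched as follows. This is an illustrative stand-in rather than PwdHash itself: the actual extension uses HMAC-MD5 with its own encoding and in-browser password detection, whereas this sketch uses HMAC-SHA256 with Base64; the function name and output length are assumptions.

```python
import base64
import hashlib
import hmac

def domain_password(master: str, domain: str, length: int = 12) -> str:
    # Derive a per-site password by MACing the domain under the master
    # password, so each site receives a unique secret while the user
    # remembers only one.
    digest = hmac.new(master.encode(), domain.encode(),
                      hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]
```

Because each site sees only its own derived value, a password leaked from one site does not directly expose the passwords used elsewhere, though, as the paper shows, offline guessing attacks on the master password remain possible.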

  17. The Concept of Collision-Free Motion Planning Using a Dynamic Collision Map

    Directory of Open Access Journals (Sweden)

    Keum-Bae Cho

    2014-09-01

    Full Text Available In this paper, we address a new method for the collision-free motion planning of a mobile robot in dynamic environments. The motion planner is based on the concept of a conventional collision map (CCM), represented on the L (travel length)-T (time) plane. We extend the CCM with dynamic information about obstacles, such as linear acceleration and angular velocity, providing useful information for estimating variation in the collision map. We first analyse the effect of the dynamic motion of an obstacle in the collision region. We then define the measure of collision dispersion (MOCD). The dynamic collision map (DCM) is generated by drawing the MOCD on the CCM. To evaluate a collision-free motion planner using the DCM, we extend the DCM with the MOCD, then draw the unreachable and deadlocked regions. Finally, we construct a collision-free motion planner using the information from the extended DCM.

  18. Five Performance Enhancements for Hybrid Hash Join

    National Research Council Canada - National Science Library

    Graefe, Goetz

    1992-01-01

    .... We discuss five performance enhancements for hybrid hash join algorithms, namely data compression, large cluster sizes and multi-level recursion, role reversal of build and probe inputs, histogram...

  19. Compact binary hashing for music retrieval

    Science.gov (United States)

    Seo, Jin S.

    2014-03-01

    With the huge volume of music clips available for protection, browsing, and indexing, increasing attention is being paid to retrieving the information content of music archives. Music-similarity computation is an essential building block for browsing, retrieval, and indexing of digital music archives. In practice, as the number of songs available for searching and indexing grows, the storage cost in retrieval systems becomes a serious problem. This paper deals with the storage problem by extending the supervector concept with binary hashing. We utilize similarity-preserving binary embedding to generate a hash code from the supervector of each music clip. In particular, we compare the performance of various binary hashing methods for music retrieval tasks on the widely-used genre dataset and an in-house singer dataset. Through the evaluation, we find an effective way of generating hash codes for music similarity estimation which improves retrieval performance.

  20. Collision free path generation in 3D with turning and pitch radius constraints for aerial vehicles

    DEFF Research Database (Denmark)

    Schøler, F.; La Cour-Harbo, A.; Bisgaard, M.

    2009-01-01

    In this paper we consider the problem of trajectory generation in 3D for uninhabited aerial systems (UAS). The proposed algorithm for trajectory generation allows us to find a feasible collision-free 3D trajectory through a number of waypoints in an environment containing obstacles. Our approach...... assumes that most of the aircraft structural and dynamic limitations can be formulated as a turn radius constraint, and that any two consecutive waypoints have line-of-sight. The generated trajectories are collision free and also satisfy a constraint on the minimum admissible turning radius, while...

  1. Efficient hash tables for network applications.

    Science.gov (United States)

    Zink, Thomas; Waldvogel, Marcel

    2015-01-01

    Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
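    The hashing-with-multiple-choices idea mentioned above (d-left hashing with d = 2) can be sketched as follows; the class name and bucket layout are illustrative, not taken from the paper:

```python
import hashlib

class TwoLeftHashTable:
    """Minimal 2-left hashing sketch: two subtables, each key hashes to one
    bucket per subtable, and an insert goes to the less-loaded of the two
    (leftmost on ties). This keeps the maximum bucket load small and
    predictable compared to a single-choice hash table."""

    def __init__(self, buckets_per_table: int = 64):
        self.m = buckets_per_table
        self.tables = [[[] for _ in range(self.m)] for _ in range(2)]

    def _bucket(self, key: str, t: int) -> int:
        # Independent bucket choice per subtable via a salted hash.
        h = hashlib.blake2b(key.encode(), salt=bytes([t]) * 8).digest()
        return int.from_bytes(h[:8], "big") % self.m

    def insert(self, key: str, value) -> None:
        b0, b1 = self._bucket(key, 0), self._bucket(key, 1)
        left, right = self.tables[0][b0], self.tables[1][b1]
        (left if len(left) <= len(right) else right).append((key, value))

    def lookup(self, key: str):
        for t in range(2):  # both candidate buckets are probed
            for k, v in self.tables[t][self._bucket(key, t)]:
                if k == key:
                    return v
        return None
```

    A lookup always touches exactly two buckets, which is what makes the access pattern deterministic enough for hardware pipelines.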

  2. Efficient tabling of structured data with enhanced hash-consing

    DEFF Research Database (Denmark)

    Zhou, Neng-Fa; Have, Christian Theil

    2012-01-01

    In this paper, we apply hash-consing to tabling structured data in B-Prolog. While hash-consing can reduce the space consumption when sharing is effective, it does not change the time complexity. We enhance hash-consing with two techniques, called input sharing and hash code memoization, for reducing the time complexity by avoiding computing hash codes for certain terms. The improved system is able to eliminate the extra linear factor in the old system for processing sequences, thus significantly enhancing the scalability...... uses hash tables, but also systems that use tries such as XSB and YAP.

  3. Collision-Free Structure Using Thin-Film Magnet For Electrostatic Energy Harvester

    International Nuclear Information System (INIS)

    Yoshii, S; Yamaguchi, K; Fujita, T; Kanda, K; Maenaka, K

    2016-01-01

    This paper proposes a collision-free structure using NdFeB thin-film magnet for vibration energy harvesters. By using a stripe-shaped NdFeB magnet array on the Si MEMS structure, we finally obtained 3 mN of magnetic repulsive force on an 8 × 8 mm² specimen with a 40 μm air-gap. (paper)

  4. Collision-Free Structure Using Thin-Film Magnet For Electrostatic Energy Harvester

    Science.gov (United States)

    Yoshii, S.; Yamaguchi, K.; Fujita, T.; Kanda, K.; Maenaka, K.

    2016-11-01

    This paper proposes a collision-free structure using NdFeB thin-film magnet for vibration energy harvesters. By using a stripe-shaped NdFeB magnet array on the Si MEMS structure, we finally obtained 3 mN of magnetic repulsive force on an 8 × 8 mm² specimen with a 40 μm air-gap.

  5. Towards a Collision-Free WLAN: Dynamic Parameter Adjustment in CSMA/E2CA

    Directory of Open Access Journals (Sweden)

    Bellalta Boris

    2011-01-01

    Full Text Available Carrier sense multiple access with enhanced collision avoidance (CSMA/ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLANs. The only difference between CSMA/ECA and the well-known CSMA/CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This paper shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov chain theory is used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach collision-free operation. The paper also introduces CSMA/E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA/E2CA converges quicker to collision-free operation and delivers higher performance than CSMA/ECA, especially in harsh wireless scenarios with high frame-error rates. The last part of the paper addresses scenarios with a large number of contenders. We suggest dynamic parameter adjustment techniques to accommodate a varying (and potentially high) number of contenders. The effectiveness of these adjustments in preventing collisions is validated by means of simulation.
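    The effect of a deterministic post-success backoff can be illustrated with a toy slotted simulation; the parameter values and the simplified channel model below are assumptions made for illustration, not the paper's model:

```python
import random

def simulate_eca(n_stations=3, det_backoff=8, cw=16, max_slots=20000, seed=7):
    """Toy slotted model of the CSMA/ECA idea: a station transmits when its
    backoff counter hits zero, resets to a fixed deterministic value after a
    success, and to a random draw after a collision. Once the stations settle
    on distinct phases, no further collisions occur. All parameters and the
    channel model are illustrative simplifications."""
    rng = random.Random(seed)
    counters = [rng.randrange(cw) for _ in range(n_stations)]
    collision_slots = []
    for slot in range(max_slots):
        transmitters = [i for i, c in enumerate(counters) if c == 0]
        if len(transmitters) == 1:
            counters[transmitters[0]] = det_backoff      # deterministic reset
        elif len(transmitters) > 1:
            collision_slots.append(slot)
            for i in transmitters:
                counters[i] = rng.randrange(cw)          # random reset
        counters = [c - 1 if c > 0 else c for c in counters]
    return collision_slots

collisions = simulate_eca()
```

    In a typical run the collision slots cluster at the start of the simulation: collisions are confined to the transient phase, and once every station occupies a distinct phase the schedule repeats collision-free.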

  6. Building Modern GPU Brute-Force Collision Resistible Hash Algorithm

    Directory of Open Access Journals (Sweden)

    L. A. Nadeinsky

    2012-03-01

    Full Text Available The article considers methods of fixing the security vulnerability of storing passwords in hashed form. The suggested hashing algorithm is based on the specifics of the architecture of modern graphics processors.

  7. Feature hashing for fast image retrieval

    Science.gov (United States)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Currently, research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable, so the efficiency of image retrieval deserves much attention. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, which combines the influence of both the neighborhood structure of the feature data and the mapping error. Since the machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes keep image representation low-complexity, making it efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.

  8. Cryptanalysis of Tav-128 hash function

    DEFF Research Database (Denmark)

    Many RFID protocols use cryptographic hash functions for their security. The resource-constrained nature of RFID systems forces the use of lightweight cryptographic algorithms. Tav-128 is one such 128-bit lightweight hash function proposed by Peris-Lopez et al. for a low-cost RFID tag...... authentication protocol. Apart from some statistical tests for randomness by the designers themselves, Tav-128 has not undergone any other thorough security analysis. Based on these tests, the designers claimed that Tav-128 does not possess any trivial weaknesses. In this article, we carry out the first third...... party security analysis of Tav-128 and show that this hash function is neither collision resistant nor second preimage resistant. Firstly, we show a practical collision attack on Tav-128 having a complexity of 2^37 calls to the compression function and produce message pairs of arbitrary length which...

  9. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.

  10. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
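    For comparison, the random-projection baseline that this method builds on can be sketched as follows. The paper's contributions, Fisher-criterion row selection and bimodal Gaussian mixture quantization, are not reproduced here; plain sign quantization is used as an illustrative assumption:

```python
import random

def random_projection_hash(features, n_bits=32, seed=0):
    """Baseline random-projection hashing: project the feature vector onto
    n_bits random Gaussian directions and keep the sign of each projection.
    A sketch of the generic baseline, not the user-dependent,
    Fisher-selected scheme proposed in the paper."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_bits):
        row = [rng.gauss(0.0, 1.0) for _ in range(len(features))]
        projection = sum(w * x for w, x in zip(row, features))
        bits.append(1 if projection > 0 else 0)
    return bits

# Nearby feature vectors receive hash codes that are close in Hamming distance.
rng = random.Random(1)
face = [rng.gauss(0.0, 1.0) for _ in range(128)]
noisy = [x + 0.01 * rng.gauss(0.0, 1.0) for x in face]
```

    Small perturbations of the feature vector flip few projection signs, which is the property a verification system thresholds on.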

  11. Normalization for Implied Volatility

    OpenAIRE

    Fukasawa, Masaaki

    2010-01-01

    We study specific nonlinear transformations of the Black-Scholes implied volatility to show remarkable properties of the volatility surface. Model-free bounds on the implied volatility skew are given. Pricing formulas for the European options which are written in terms of the implied volatility are given. In particular, we prove elegant formulas for the fair strikes of the variance swap and the gamma swap.

  12. Large-Scale Unsupervised Hashing with Shared Structure Learning.

    Science.gov (United States)

    Liu, Xianglong; Mu, Yadong; Zhang, Danchen; Lang, Bo; Li, Xuelong

    2015-09-01

    Hashing methods are effective in generating compact binary signatures for images and videos. This paper addresses an important open issue in the literature, i.e., how to learn compact hash codes by enhancing the complementarity among different hash functions. Most of prior studies solve this problem either by adopting time-consuming sequential learning algorithms or by generating the hash functions which are subject to some deliberately-designed constraints (e.g., enforcing hash functions orthogonal to one another). We analyze the drawbacks of past works and propose a new solution to this problem. Our idea is to decompose the feature space into a subspace shared by all hash functions and its complementary subspace. On one hand, the shared subspace, corresponding to the common structure across different hash functions, conveys most relevant information for the hashing task. Similar to data de-noising, irrelevant information is explicitly suppressed during hash function generation. On the other hand, in case that the complementary subspace also contains useful information for specific hash functions, the final form of our proposed hashing scheme is a compromise between these two kinds of subspaces. To make hash functions not only preserve the local neighborhood structure but also capture the global cluster distribution of the whole data, an objective function incorporating spectral embedding loss, binary quantization loss, and shared subspace contribution is introduced to guide the hash function learning. We propose an efficient alternating optimization method to simultaneously learn both the shared structure and the hash functions. Experimental results on three well-known benchmarks CIFAR-10, NUS-WIDE, and a-TRECVID demonstrate that our approach significantly outperforms state-of-the-art hashing methods.

  13. A Collision-Free G2 Continuous Path-Smoothing Algorithm Using Quadratic Polynomial Interpolation

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2014-12-01

    Full Text Available Most path-planning algorithms are used to obtain a collision-free path without considering continuity. On the other hand, a continuous path is needed for stable movement. In this paper, the searched path is converted into a G2 continuous path using the modified quadratic polynomial and membership function interpolation (QPMI) algorithm. It is simple, unique and provides a good geometric interpretation. In addition, a collision-checking and improvement algorithm is proposed. The collision-checking algorithm can check the collisions of a smoothed path. If collisions are detected, the collision improvement algorithm modifies the collision path to a collision-free path. The collision improvement algorithm uses a geometric method based on the perpendicular line between a collision position and the colliding piecewise-linear path. A sub-waypoint is added, and the QPMI algorithm is applied again. As a result, the collision-smoothed path is converted into a collision-free smooth path without changing the continuity.
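    The geometric step described above hinges on dropping a perpendicular from the collision position onto the piecewise-linear path; the standard point-to-segment projection that locates where a sub-waypoint would be placed can be sketched as follows (standard geometry, not code from the paper):

```python
def closest_point_on_segment(p, a, b):
    """Foot of the perpendicular from point p onto segment ab, clamped to the
    segment. In a collision-improvement step of this kind, this is where a
    sub-waypoint would be inserted on the piecewise-linear path."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return a                      # degenerate segment: a and b coincide
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))         # clamp the projection onto the segment
    return (ax + t * dx, ay + t * dy)
```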

  14. Security Analysis of Randomize-Hash-then-Sign Digital Signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2012-01-01

    At CRYPTO 2006, Halevi and Krawczyk proposed two randomized hash function modes and analyzed the security of digital signature algorithms based on these constructions. They showed that the security of signature schemes based on the two randomized hash function modes relies on properties similar...... functions, such as for the Davies-Meyer construction used in the popular hash functions such as MD5 designed by Rivest and the SHA family of hash functions designed by the National Security Agency (NSA), USA and published by NIST in the Federal Information Processing Standards (FIPS). We show an online...... 800-106. We discuss some important applications of our attacks and discuss their applicability on signature schemes based on hash functions with ‘built-in’ randomization. Finally, we compare our attacks on randomize-hash-then-sign schemes with the generic forgery attacks on the standard hash...

  15. Cryptanalysis of the LAKE Hash Family

    DEFF Research Database (Denmark)

    Biryukov, Alex; Gauravaram, Praveen; Guo, Jian

    2009-01-01

    We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attac...

  16. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature of RS image hashing. RS images are practically produced in a streaming manner in many real-world applications, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and kept fixed all the time. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, the batch-based models have to load all the training images into memory for model learning, which consumes substantial computing and memory resources. To address the above deficiencies, we propose a new online hashing method, which learns and adapts its hash functions with respect to the newly incoming RS images in terms of a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is quite suitable for scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrate the efficacy and effectiveness of the proposed method.

  17. Parallel keyed hash function construction based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Deng Shaojiang

    2008-01-01

    Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of hash value to the message. By means of the mechanism of both changeable-parameter and self-synchronization, the keystream establishes a close relation with the algorithm key, the content and the order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-graining trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hash on parallel computing platform
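    The general pattern, modulating a message into a chaotic orbit and extracting the digest from the trajectory, can be illustrated with a toy serial sketch based on the logistic map. This is not the paper's parallel keyed construction, and it is not cryptographically vetted:

```python
def chaotic_keyed_hash(message: bytes, key: float = 0.654321, rounds: int = 8) -> bytes:
    """Toy keyed hash driven by the logistic map x -> 4x(1-x).

    The key sets the initial state, each message byte perturbs the orbit,
    and the digest is read off the final trajectory. Illustrative only:
    a serial sketch of the chaos-hash pattern, not the parallel keyed
    construction of the paper, and not secure.
    """
    x = key
    for byte in message:
        x = (x + (byte + 1) / 257.0) % 1.0 or 0.3  # fold the byte into the state
        for _ in range(rounds):                     # let the chaotic map diffuse it
            x = 4.0 * x * (1.0 - x)
    digest = bytearray()
    for _ in range(16):                             # extract a 128-bit digest
        x = 4.0 * x * (1.0 - x)
        digest.append(int(x * 255.999) & 0xFF)
    return bytes(digest)
```

    The sensitive dependence of the orbit on both the key and each message byte is what gives chaos-based hashes their confusion and diffusion; the paper's contribution is making this keystream computation parallelizable across message blocks.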

  18. Hashing in computer science fifty years of slicing and dicing

    CERN Document Server

    Konheim, Alan G

    2009-01-01

    Written by one of the developers of the technology, Hashing is both a historical document on the development of hashing and an analysis of the applications of hashing in a society increasingly concerned with security. The material in this book is based on courses taught by the author, and key points are reinforced in sample problems and an accompanying instructor's manual. Graduate students and researchers in mathematics, cryptography, and security will benefit from this overview of hashing and the complicated mathematics that it requires.

  19. Chaos-based hash function (CBHF) for cryptographic applications

    International Nuclear Information System (INIS)

    Amin, Mohamed; Faragallah, Osama S.; Abd El-Latif, Ahmed A.

    2009-01-01

    As the core of cryptography, hashing is a basic technique for information security. Many hash functions generate the message digest through a randomizing process of the original message. Similarly, a chaos system generates random-looking behavior, but at the same time a chaos system is completely deterministic. In this paper, an algorithm for one-way hash function construction based on chaos theory is introduced. Theoretical analysis and computer simulation indicate that the algorithm can satisfy all performance requirements of a hash function in an efficient and flexible manner and is secure against birthday attacks and meet-in-the-middle attacks, which makes it a good choice for data integrity or authentication.

  20. Chaos-based hash function (CBHF) for cryptographic applications

    Energy Technology Data Exchange (ETDEWEB)

    Amin, Mohamed [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: mamin04@yahoo.com; Faragallah, Osama S. [Dept. of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952 (Egypt)], E-mail: osam_sal@yahoo.com; Abd El-Latif, Ahmed A. [Dept. of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shebin El-Koom 32511 (Egypt)], E-mail: ahmed_rahiem@yahoo.com

    2009-10-30

    As the core of cryptography, hashing is a basic technique for information security. Many hash functions generate the message digest through a randomizing process of the original message. Similarly, a chaos system generates random-looking behavior, but at the same time a chaos system is completely deterministic. In this paper, an algorithm for one-way hash function construction based on chaos theory is introduced. Theoretical analysis and computer simulation indicate that the algorithm can satisfy all performance requirements of a hash function in an efficient and flexible manner and is secure against birthday attacks and meet-in-the-middle attacks, which makes it a good choice for data integrity or authentication.

  1. Cryptanalysis of an Iterated Halving-based hash function: CRUSH

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Henricksen, Matt; Knudsen, Lars Ramkilde

    2009-01-01

    Iterated Halving has been suggested as a replacement to the Merkle–Damgård (MD) construction in 2004 anticipating the attacks on the MDx family of hash functions. The CRUSH hash function provides a specific instantiation of the block cipher for Iterated Halving. The authors identify structural pr...

  2. A Novel Perceptual Hash Algorithm for Multispectral Image Authentication

    Directory of Open Access Journals (Sweden)

    Kaimeng Ding

    2018-01-01

    Full Text Available The perceptual hash algorithm is a technique to authenticate the integrity of images. While a few scholars have worked on mono-spectral image perceptual hashing, there is limited research on multispectral image perceptual hashing. In this paper, we propose a perceptual hash algorithm for the content authentication of a multispectral remote sensing image based on the synthetic characteristics of each band: firstly, the multispectral remote sensing image is preprocessed with band clustering and grid partition; secondly, the edge feature of the band subsets is extracted by band-fusion-based edge feature extraction; thirdly, the perceptual feature of the same region of the band subsets is compressed and normalized to generate the perceptual hash value. The authentication procedure is achieved via the normalized Hamming distance between the recomputed perceptual hash value and the original hash value. The experiments indicated that our proposed algorithm is robust to content-preserving operations and efficiently authenticates the integrity of multispectral remote sensing images.
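    The authentication decision described above reduces to a normalized Hamming distance test between the stored and recomputed hash bits; a minimal sketch, where the threshold value is an illustrative assumption rather than a figure from the paper:

```python
def normalized_hamming(h1, h2):
    """Fraction of differing bits between two equal-length hash bit strings."""
    if len(h1) != len(h2):
        raise ValueError("hash lengths differ")
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def authenticate(stored_hash, recomputed_hash, threshold=0.1):
    """Accept the image as authentic when the recomputed perceptual hash is
    within the distance threshold of the stored one. The threshold here is
    illustrative; a real deployment tunes it on content-preserving vs.
    tampered examples."""
    return normalized_hamming(stored_hash, recomputed_hash) <= threshold
```

    Content-preserving operations should perturb only a few bits and pass the test, while tampering should push the distance above the threshold.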

  3. Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.

    Science.gov (United States)

    Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen

    2015-09-01

    With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from the previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information-theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets and the comparison results demonstrate the superior performance of the proposed NDH method over state-of-the-art hashing techniques.

  4. Distributed hash table theory, platforms and applications

    CERN Document Server

    Zhang, Hao; Xie, Haiyong; Yu, Nenghai

    2013-01-01

    This SpringerBrief summarizes the development of Distributed Hash Table in both academic and industrial fields. It covers the main theory, platforms and applications of this key part in distributed systems and applications, especially in large-scale distributed environments. The authors teach the principles of several popular DHT platforms that can solve practical problems such as load balance, multiple replicas, consistency and latency. They also propose DHT-based applications including multicast, anycast, distributed file systems, search, storage, content delivery network, file sharing and c

  5. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    Science.gov (United States)

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, as is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.

  6. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    Science.gov (United States)

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.

  7. 76 FR 11433 - Federal Transition To Secure Hash Algorithm (SHA)-256

    Science.gov (United States)

    2011-03-02

    ... ADMINISTRATION [FAR-N-2011-01; Docket No. 2011-0083; Sequence 1] Federal Transition To Secure Hash Algorithm (SHA... acquisition community to transition to Secure Hash Algorithm SHA-256. SHA-256 is a cryptographic hash function... persons attending. Please cite ``Federal Transition to Secure Hash Algorithm SHA-256'' in all...

  8. Side channel analysis of some hash based MACs: A response to SHA-3 requirements

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2008-01-01

    The forthcoming NIST's Advanced Hash Standard (AHS) competition to select SHA-3 hash function requires that each candidate hash function submission must have at least one construction to support FIPS 198 HMAC application. As part of its evaluation, NIST is aiming to select either a candidate hash...

  9. Locality-sensitive Hashing without False Negatives

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2016-01-01

    We consider a new construction of locality-sensitive hash functions for Hamming space that is covering in the sense that it is guaranteed to produce a collision for every pair of vectors within a given radius r. The construction is efficient in the sense that the expected number of hash collisions between vectors at distance cr, for a given c > 1, comes close to that of the best possible data independent LSH without the covering guarantee, namely, the seminal LSH construction of Indyk and Motwani (FOCS ′98). The efficiency of the new construction essentially matches their bound if cr = log(n)/k, where n is the number of points in the data set and k ∊ N, and differs from it by at most a factor ln(4) in the exponent for general values of cr. As a consequence, LSH-based similarity search in Hamming space can avoid the problem of false negatives at little or no cost in efficiency.
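    The covering guarantee, no false negatives within radius r, can be illustrated with the classic block-splitting family; this is a simpler covering scheme in the same spirit, not the paper's more efficient construction:

```python
def block_hashes(x, r):
    """Split a bit vector into r + 1 contiguous blocks; each block is one
    hash value. Two vectors within Hamming distance r differ in at most r
    blocks, so by pigeonhole at least one block-hash collides: there are no
    false negatives inside the radius. (A simple covering family in the same
    spirit as the paper, not its construction.)"""
    d = len(x)
    bounds = [round(j * d / (r + 1)) for j in range(r + 2)]
    return [tuple(x[bounds[j]:bounds[j + 1]]) for j in range(r + 1)]

def guaranteed_collision(x, y, r):
    """True whenever some block-hash of x equals the same block-hash of y."""
    return any(a == b for a, b in zip(block_hashes(x, r), block_hashes(y, r)))
```

    Pairs farther apart than r may still miss every block, which is exactly the filtering behaviour an LSH index relies on.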

  10. Quantum hashing is maximally secure against classical leakage

    OpenAIRE

    Huang, Cupjin; Shi, Yaoyun

    2017-01-01

Cryptographic hash functions are fundamental primitives widely used in practice. For such a function $f:\{0,1\}^n \to \{0,1\}^m$, it is nearly impossible for an adversary to produce the hash $f(x)$ without knowing the secret message $x \in \{0,1\}^n$. Unfortunately, all hash functions are vulnerable under the side-channel attack, which is a grave concern for information security in practice. This is because typically $m \ll n$ and an adversary needs only $m$ bits of information to pass the veri...

  11. Data Collision Prevention with Overflow Hashing Technique in Closed Hash Searching Process

    Science.gov (United States)

    Rahim, Robbi; Nurjamiyah; Rafika Dewi, Arie

    2017-12-01

Hash search is a method that can be used for various search processes, such as search engines, sorting, machine learning, and neural networks. During the search process, data collisions may occur, and one way to prevent them is the overflow technique. This technique works with data of varying lengths and prevents the loss of colliding records.
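As a rough illustration of the idea (the paper's exact variant may differ, and the class below is a hypothetical sketch), a closed hash table can divert colliding keys into a separate overflow area instead of discarding them or probing for another slot:

```python
class OverflowHashTable:
    """Closed hashing with a separate overflow area (illustrative sketch).

    Colliding keys go to the overflow list, so no record is lost and no
    probing sequence is needed."""

    def __init__(self, size=7):
        self.size = size
        self.slots = [None] * size   # primary (closed) area
        self.overflow = []           # overflow area for collisions

    def insert(self, key, value):
        i = hash(key) % self.size
        if self.slots[i] is None or self.slots[i][0] == key:
            self.slots[i] = (key, value)
        else:
            self.overflow.append((key, value))

    def lookup(self, key):
        i = hash(key) % self.size
        if self.slots[i] is not None and self.slots[i][0] == key:
            return self.slots[i][1]
        for k, v in self.overflow:   # fall back to scanning the overflow area
            if k == key:
                return v
        return None

t = OverflowHashTable(size=7)
for key, val in [(0, "a"), (7, "b"), (14, "c")]:  # all map to slot 0
    t.insert(key, val)
print(t.lookup(7))   # "b" -- found in the overflow area
```

Lookups check the primary slot first and only then scan the overflow area, so performance degrades gracefully as collisions accumulate.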

  12. Handwriting: Feature Correlation Analysis for Biometric Hashes

    Science.gov (United States)

    Vielhauer, Claus; Steinmetz, Ralf

    2004-12-01

    In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

  13. Handwriting: Feature Correlation Analysis for Biometric Hashes

    Directory of Open Access Journals (Sweden)

    Ralf Steinmetz

    2004-04-01

Full Text Available In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.

  14. Implementation of 4-way Superscalar Hash MIPS Processor Using FPGA

    Science.gov (United States)

    Sahib Omran, Safaa; Fouad Jumma, Laith

    2018-05-01

Due to rapid advances in personal communications systems and wireless communications, data security has become an increasingly important subject. The security problem becomes more complicated when next-generation system requirements and real-time computation speed are considered. Hash functions are among the most essential cryptographic primitives and are used in many fields for signature authentication and communication integrity. These functions produce a fixed-size unique fingerprint, or hash value, from a message of arbitrary length. In this paper, Secure Hash Algorithms (SHA) of types SHA-1, SHA-2 (SHA-224, SHA-256) and SHA-3 (BLAKE) are implemented on a Field-Programmable Gate Array (FPGA) in a processor structure. The design is described and implemented using a hardware description language, namely VHSIC "Very High Speed Integrated Circuit" Hardware Description Language (VHDL). Since the logical operations of these hash types (SHA-1, SHA-224, SHA-256 and SHA-3) are 32 bits wide, a superscalar Hash Microprocessor without Interlocked Pipelines (MIPS) processor is designed with only the few instructions required to invoke the desired hash algorithms. When the four hash algorithms are executed sequentially on the designed processor, the total time required is approximately 342 us, with a throughput of 4.8 Mbps; executing the same four hash algorithms on the designed four-way superscalar reduces the time to 237 us and improves the throughput to 5.1 Mbps.

  15. FBC: a flat binary code scheme for fast Manhattan hash retrieval

    Science.gov (United States)

    Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun

    2018-04-01

Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the distance measurement used, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better preservation of neighborhood structure, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because Manhattan hashing uses decimal arithmetic operations instead of bit operations, it is more time-consuming, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Experiments show that, with a reasonable growth in memory space, our FBC achieves an average speedup of more than 80% without any loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
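One standard way to make a bitwise distance track Manhattan distance is a unary (thermometer) code per dimension, under which Hamming distance equals Manhattan distance exactly. The abstract does not specify FBC's precise encoding, so the snippet below only illustrates the general trade-off between per-dimension decimal arithmetic and a single XOR/popcount:

```python
def thermometer(vals, levels=4):
    # Unary (thermometer) code: q -> q ones then zeros, one group per
    # dimension; Hamming distance of the codes equals Manhattan distance.
    bits = []
    for q in vals:
        bits += [1] * q + [0] * (levels - 1 - q)
    return bits

def hamming(a, b):
    # With the bits packed into machine words this is just XOR + popcount.
    x = int("".join(map(str, a)), 2) ^ int("".join(map(str, b)), 2)
    return bin(x).count("1")

u, v = [0, 3, 1, 2], [1, 3, 0, 2]
manhattan = sum(abs(p - q) for p, q in zip(u, v))
assert hamming(thermometer(u), thermometer(v)) == manhattan  # both are 2
```

The cost of the unary code is extra bits per dimension (levels - 1 instead of log2(levels)), which matches the abstract's remark about a reasonable growth in memory space in exchange for faster distance computation.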

  16. Numerical simulation of collision-free plasma using Vlasov hybrid simulation

    International Nuclear Information System (INIS)

    Nunn, D.

    1990-01-01

A novel scheme for the numerical simulation of wave-particle interactions in space plasmas has been developed. The method, termed VHS or Vlasov Hybrid Simulation, is applicable to hot collision-free plasmas in which the unperturbed distribution function is smooth and free of delta function singularities. The particle population is described as a continuous Vlasov fluid in phase space, granularity and collisional effects being ignored. In traditional PIC/CIC codes the charge/current due to each simulation particle is assigned to a fixed spatial grid. In the VHS method the simulation particles sample the Vlasov fluid and provide information about the value of the distribution function F(r,v) at random points in phase space. Values of F are interpolated from the simulation particles onto a fixed grid in velocity/position or phase space. With the distribution function defined on a phase space grid, the plasma charge/current field is quickly calculated. The simulation particles serve only to provide information, and thus the particle population may be dynamic. Particles no longer resonant with the wavefield may be discarded from the simulation, and new particles may be inserted into the Vlasov fluid where required

  17. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding task, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.

  18. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    Science.gov (United States)

    Bourbakis, N G

    1997-01-01

This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, unknown or known navigation space. In a previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, and a control center coordinated and synchronized their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine the perceived size, velocity, and direction of those objects. Based on these assumptions, each robot needs a traffic priority language that enables it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a set of primitive traffic priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.

  19. Contact discontinuities in a cold collision-free two-beam plasma

    Science.gov (United States)

    Kirkland, K. B.; Sonnerup, B. U. O.

    1982-01-01

The structure of contact discontinuities in a collision-free plasma is examined using a model of a plasma which consists of two oppositely directed cold ion beams and a background of cold massless electrons, such that exact charge neutrality is maintained and the electric field is zero. The basic equations describing self-consistent equilibria are obtained for the more general situation where a net flow across the layer takes place and where the magnetic field has two nonzero tangential components but where the electric field remains zero. These equations are then specialized to the case of no net plasma flow where one of the tangential components is zero, and four different classes of sheets are obtained, all having thickness of the order of the ion inertial length. The first class is for layers separating two identical plasma and magnetic field regions, the second is for an infinite array of parallel layers producing an undulated magnetic field, the third is for layers containing trapped ions in closed orbits which separate two vacuum regions with uniform identical magnetic fields, and the fourth is for layers which reflect a single plasma beam, leaving a vacuum with a reversed and compressed tangential field on the other side.
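For reference, the ion inertial length that sets the sheet thickness mentioned above is the standard plasma scale:

```latex
d_i = \frac{c}{\omega_{pi}}, \qquad
\omega_{pi} = \left( \frac{n_i Z^2 e^2}{\varepsilon_0 m_i} \right)^{1/2},
```

where $\omega_{pi}$ is the ion plasma frequency, $n_i$ the ion number density, $Z e$ the ion charge, and $m_i$ the ion mass.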

  20. Optic flow-based collision-free strategies: From insects to robots.

    Science.gov (United States)

    Serres, Julien R; Ruffier, Franck

    2017-09-01

Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e., they cancel rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Contact discontinuities in a cold collision-free two-beam plasma

    International Nuclear Information System (INIS)

    Kirkland, K.B.; Sonnerup, B.U.O.

    1982-01-01

A contact discontinuity in a collision-free magnetized plasma is a thin layer, possessing a nontrivial magnetic structure, across which no net plasma flow takes place (v_n = 0) even though the magnetic field has a nonvanishing component (B_n ≠ 0) normal to it. This paper examines the structure of such discontinuities in a simple plasma model consisting of two oppositely directed cold ion beams and a background of cold massless electrons, such that exact charge neutrality is maintained and the electric field E ≡ 0. The basic equations describing self-consistent equilibria are developed for the more general situation where a net flow across the layer takes place (v_n ≠ 0) and where the magnetic field has two nonzero tangential components B_y and B_z, but where E remains zero. These equations are then specialized to the case v_n ≡ 0, B_z ≡ 0, and four different classes of sheets are obtained, all having thickness of the order of the ion inertial length: (1) layers separating two identical plasma and magnetic field regions, (2) an infinite array of parallel layers producing an undulated magnetic field, (3) layers containing trapped ions in closed orbits which separate two vacuum regions with uniform identical magnetic fields, and (4) layers which reflect a single plasma beam, leaving a vacuum with a reversed and compressed tangential field on the other side. Solutions for which v_n = 0 but B_z ≠ 0 may also exist but have not been analyzed; rotational discontinuities are shown not to be possible in this model

  2. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    Science.gov (United States)

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To combat this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs, which ignore higher-order relationships. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher-order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.

  4. Final report for LDRD Project 93633 : new hash function for data protection.

    Energy Technology Data Exchange (ETDEWEB)

    Draelos, Timothy John; Dautenhahn, Nathan; Schroeppel, Richard Crabtree; Tolk, Keith Michael; Orman, Hilarie (PurpleStreak, Inc.); Walker, Andrea Mae; Malone, Sean; Lee, Eric; Neumann, William Douglas; Cordwell, William R.; Torgerson, Mark Dolan; Anderson, Eric; Lanzone, Andrew J.; Collins, Michael Joseph; McDonald, Timothy Scott; Caskey, Susan Adele

    2009-03-01

    The security of the widely-used cryptographic hash function SHA1 has been impugned. We have developed two replacement hash functions. The first, SHA1X, is a drop-in replacement for SHA1. The second, SANDstorm, has been submitted as a candidate to the NIST-sponsored SHA3 Hash Function competition.

  5. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from one deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code which has stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks together to unify images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.

  6. Hash function construction using weighted complex dynamical networks

    International Nuclear Information System (INIS)

    Song Yu-Rong; Jiang Guo-Ping

    2013-01-01

    A novel scheme to construct a hash function based on a weighted complex dynamical network (WCDN) generated from an original message is proposed in this paper. First, the original message is divided into blocks. Then, each block is divided into components, and the nodes and weighted edges are well defined from these components and their relations. Namely, the WCDN closely related to the original message is established. Furthermore, the node dynamics of the WCDN are chosen as a chaotic map. After chaotic iterations, quantization and exclusive-or operations, the fixed-length hash value is obtained. This scheme has the property that any tiny change in message can be diffused rapidly through the WCDN, leading to very different hash values. Analysis and simulation show that the scheme possesses good statistical properties, excellent confusion and diffusion, strong collision resistance and high efficiency. (general)

  7. Hash function based on piecewise nonlinear chaotic map

    International Nuclear Information System (INIS)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2009-01-01

Chaos-based cryptography appeared in the early 1990s as an original application of nonlinear dynamics in the chaotic regime. In this paper, an algorithm for one-way hash function construction based on a piecewise nonlinear chaotic map with a variant probability parameter is proposed. The proposed algorithm is also an attempt to present a new chaotic hash function based on multithreaded programming. In this chaotic scheme, the message is connected to the chaotic map using the probability parameter and other parameters of the chaotic map, such as the control parameter and initial condition, so that the generated hash value is highly sensitive to the message. Simulation results indicate that the proposed algorithm presents several interesting features, such as high flexibility, good statistical properties, high key sensitivity and message sensitivity. These properties make the scheme a suitable choice for practical applications.
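The general recipe behind chaotic hash constructions (couple message bytes into a chaotic map's state, iterate, then quantize the trajectory) can be sketched as a toy in Python. This sketch uses a logistic map rather than the paper's piecewise nonlinear map, and is purely illustrative, not cryptographically secure:

```python
def chaotic_hash(message: bytes, r=3.99, rounds=16) -> str:
    # Toy chaotic hash (illustrative only, NOT secure): perturb the state
    # of a logistic map x <- r*x*(1-x) with each message byte, iterate,
    # and fold the trajectory into a 64-bit digest.
    mask = (1 << 64) - 1
    x = 0.5
    digest = 0
    for byte in message:
        x = (x + (byte + 1) / 257.0) % 1.0 or 0.3   # keep x in (0, 1)
        for _ in range(rounds):
            x = r * x * (1.0 - x)
        digest ^= int(x * (1 << 64)) & mask
        digest = ((digest << 7) | (digest >> 57)) & mask  # rotate left by 7
    return f"{digest:016x}"

# A one-byte change in the message produces a very different digest,
# because the chaotic iteration amplifies the perturbation exponentially.
print(chaotic_hash(b"hello"))
print(chaotic_hash(b"hellp"))
```

The message sensitivity claimed in the abstract corresponds to the map's positive Lyapunov exponent: nearby initial states diverge after a few iterations, so even a one-bit input change decorrelates the trajectory.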

  8. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, however advanced, atomic operations have been proposed. These data structures allow for correct and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase...

  9. The FPGA realization of the general cellular automata based cryptographic hash functions: Performance and effectiveness

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2014-01-01

Full Text Available In this paper the author considers hardware implementation of the GRACE-H family of general cellular automata based cryptographic hash functions. VHDL is used as the language and an Altera FPGA as the platform for hardware implementation. Performance and effectiveness of the FPGA implementations of the GRACE-H hash functions were compared with the Keccak (SHA-3), SHA-256, BLAKE, Groestl, JH, and Skein hash functions. According to the performed tests, the performance of the hardware implementation of the GRACE-H family hash functions significantly (up to 12 times) exceeded the performance of the hardware implementations of previously known hash functions, and the effectiveness of that hardware implementation was also better (up to 4 times).

  10. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
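The quantization error in the loss function mentioned above can be illustrated with a tiny sketch. Sign binarization is the common practice in deep hashing; the embedding values below are hypothetical:

```python
def binarize(u):
    # sign() binarization: continuous embedding -> {-1, +1} hash code
    return [1.0 if x >= 0 else -1.0 for x in u]

def quantization_loss(u):
    # Squared gap between the embedding and its binary code; minimizing
    # this term pushes the network outputs toward +/-1, so the final
    # rounding step discards as little information as possible.
    b = binarize(u)
    return sum((x - y) ** 2 for x, y in zip(u, b))

u = [0.9, -0.2, 0.1, -1.3]
print(binarize(u))                      # [1.0, -1.0, 1.0, -1.0]
print(round(quantization_loss(u), 2))   # 1.55
```

In a full training objective this penalty is combined with similarity-preserving and code-balance terms, matching the three properties listed in the abstract.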

  11. SPONGENT: The Design Space of Lightweight Cryptographic Hashing

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knezevic, Miroslav; Leander, Gregor

    2013-01-01

construction instantiated with present-type permutations. The resulting family of hash functions is called spongent. We propose 13 spongent variants, for different levels of collision and (second) preimage resistance as well as for various implementation constraints. For each of them, we provide several ASIC...

  12. Visual hashing of digital video : applications and techniques

    NARCIS (Netherlands)

    Oostveen, J.; Kalker, A.A.C.M.; Haitsma, J.A.; Tescher, A.G.

    2001-01-01

This paper presents the concept of robust video hashing as a tool for video identification. We present considerations and a technique for (i) extracting essential perceptual features from a moving image sequence and (ii) identifying any sufficiently long unknown video segment by efficiently

  13. Practical Attacks on AES-like Cryptographic Hash Functions

    DEFF Research Database (Denmark)

    Kölbl, Stefan; Rechberger, Christian

    2015-01-01

    to drastically reduce the complexity of attacks to very practical values for reduced-round versions. Furthermore, we describe new and practical attacks on Whirlpool and the recently proposed GOST R hash function with one or more of the following properties: more rounds, less time/memory complexity, and more...

  14. The LabelHash algorithm for substructure matching

    Directory of Open Access Journals (Sweden)

    Bryant Drew H

    2010-11-01

Full Text Available Abstract. Background: There is an increasing number of proteins with known structure but unknown function. Determining their function would have a significant impact on understanding diseases and designing new therapeutics. However, experimental protein function determination is expensive and very time-consuming. Computational methods can facilitate function determination by identifying proteins that have high structural and chemical similarity. Results: We present LabelHash, a novel algorithm for matching substructural motifs to large collections of protein structures. The algorithm consists of two phases. In the first phase the proteins are preprocessed in a fashion that allows for instant lookup of partial matches to any motif. In the second phase, partial matches for a given motif are expanded to complete matches. The general applicability of the algorithm is demonstrated with three different case studies. First, we show that we can accurately identify members of the enolase superfamily with a single motif. Next, we demonstrate how LabelHash can complement SOIPPA, an algorithm for motif identification and pairwise substructure alignment. Finally, a large collection of Catalytic Site Atlas motifs is used to benchmark the performance of the algorithm. LabelHash runs very efficiently in parallel; matching a motif against all proteins in the 95% sequence identity filtered non-redundant Protein Data Bank typically takes no more than a few minutes. The LabelHash algorithm is available through a web server and as a suite of standalone programs at http://labelhash.kavrakilab.org. The output of the LabelHash algorithm can be further analyzed with Chimera through a plugin that we developed for this purpose. Conclusions: LabelHash is an efficient, versatile algorithm for large-scale substructure matching. When LabelHash is running in parallel, motifs can typically be matched against the entire PDB on the order of minutes. The algorithm is able to identify

  15. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    Science.gov (United States)

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features to binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the locally linear embedding in the original feature space and then quantize that embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time- and memory-consuming, since it must compute the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of the data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.

  16. Option-implied term structures

    OpenAIRE

    Vogt, Erik

    2014-01-01

    The illiquidity of long-maturity options has made it difficult to study the term structures of option spanning portfolios. This paper proposes a new estimation and inference framework for these option-implied term structures that addresses long-maturity illiquidity. By building a sieve estimator around the risk-neutral valuation equation, the framework theoretically justifies (fat-tailed) extrapolations beyond truncated strikes and between observed maturities while remaining nonparametric. Ne...

  17. One-way hash function construction based on the spatiotemporal chaotic system

    International Nuclear Information System (INIS)

    Luo Yu-Ling; Du Ming-Hui

    2012-01-01

    Based on the spatiotemporal chaotic system, a novel algorithm for constructing a one-way hash function is proposed and analysed. The message is divided into fixed-length blocks, and each message block is processed by the hash compression function in parallel. The hash compression function is constructed based on the spatiotemporal chaos. In each message block, the ASCII codes and their positions in the whole message block chain constitute the initial conditions and the key of the hash compression function. The final hash value is generated by further compressing the mixed result of all the hash compression values. Theoretical analyses and numerical simulations show that the proposed algorithm exhibits high sensitivity to the message and key, good statistical properties, and strong collision resistance. (general)
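    A drastically simplified sketch of this style of construction, with a single logistic map standing in for the spatiotemporal system (illustrative only; not the paper's scheme and not cryptographically secure): each block is compressed by a chaotic iteration seeded from its byte values and position, the per-block values are mixed, and the mix is compressed once more into the final value.

```python
# Toy chaos-based hash: a logistic map replaces the spatiotemporal system.

def logistic_iterate(x, r=3.99, n=32):
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def block_compress(block, position):
    # Byte values plus the block's position seed the chaotic iteration,
    # mirroring the idea that ASCII code + position form the initial condition.
    x = (sum(block) + position + 1) / (sum(block) + position + 2)  # in (0, 1)
    return int(logistic_iterate(x) * 2**32) & 0xFFFFFFFF

def toy_chaotic_hash(message: bytes, block_size=8):
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
    mixed = 0
    for pos, b in enumerate(blocks):
        mixed ^= block_compress(b, pos)
    # Final compression of the mixed result into the hash value.
    x = (mixed % 1000003) / 1000003.0
    return format(int(logistic_iterate(x or 0.5) * 2**32) & 0xFFFFFFFF, "08x")

print(toy_chaotic_hash(b"hello world"))
print(toy_chaotic_hash(b"hello worle"))  # one-byte change gives a very different value
```

    The sensitivity to initial conditions of the chaotic map is what spreads a one-byte change across the whole output; a real design iterates a coupled lattice and uses a much larger state.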

  18. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is firstly converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method not only can resist against the normal image processing operations, but also some geometric distortions. Comparisons of receiver operating characteristic (ROC curve indicate that the proposed method outperforms some existing methods in classification between perceptual robustness and discrimination.

  19. Simultaneous binary hash and features learning for image retrieval

    Science.gov (United States)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have plenty of applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.

  20. Speaker Linking and Applications using Non-Parametric Hashing Methods

    Science.gov (United States)

    2016-09-08

    nonparametric estimate of a multivariate density function," The Annals of Mathematical Statistics, vol. 36, no. 3, pp. 1049–1051, 1965. [9] E. A. Patrick...Speaker Linking and Applications using Non-Parametric Hashing Methods† Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA...with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and

  1. Remarks on Gödel's Code as a Hash Function

    Czech Academy of Sciences Publication Activity Database

    Mikuš, M.; Savický, Petr

    2010-01-01

    Roč. 47, č. 3 (2010), s. 67-80 ISSN 1210-3195 R&D Projects: GA ČR GAP202/10/1333 Institutional research plan: CEZ:AV0Z10300504 Keywords : Gödel numbering function * hash function * rational reconstruction * integer relation algorithm Subject RIV: BA - General Mathematics http://www.sav.sk/journals/uploads/0317151904m-s.pdf

  2. The legal response to illegal "hash clubs" in Denmark

    DEFF Research Database (Denmark)

    Asmussen, V.; Moesby-Johansen, C.

    2004-01-01

    Since the mid-1990s, a number of hash clubs have sprung up in Denmark. Broadly, there are two kinds of clubs: sales outlets and drop-in venues. The former are organized solely around the sale of hash, while the latter are clubs where one can both buy hash and stay on the premises to take part in ...

  3. Improving the security of a parallel keyed hash function based on chaotic maps

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Di, E-mail: xiaodi_cqu@hotmail.co [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Liao Xiaofeng [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China); Wang Yong [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)] [College of Economy and Management, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2009-11-23

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  4. Improving the security of a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wang Yong

    2009-01-01

    In this Letter, we analyze the cause of vulnerability of the original parallel keyed hash function based on chaotic maps in detail, and then propose the corresponding enhancement measures. Theoretical analysis and computer simulation indicate that the modified hash function is more secure than the original one. At the same time, it can keep the parallel merit and satisfy the other performance requirements of hash function.

  5. Collision analysis of one kind of chaos-based hash function

    International Nuclear Information System (INIS)

    Xiao Di; Peng Wenbing; Liao Xiaofeng; Xiang Tao

    2010-01-01

    In the last decade, various chaos-based hash functions have been proposed. Nevertheless, the corresponding analyses of them lag far behind. In this Letter, we first take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) as an example to analyze its computational collision problem, and then generalize the construction method of this kind of chaos-based hash function and summarize some precautions for avoiding the collision problem. This is beneficial to future hash function design based on chaos.
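    As a concrete illustration of how such collisions can arise, consider a hypothetical toy construction (not the specific scheme analyzed in the Letter) whose block compression seeds a chaotic map with an order-insensitive byte sum and combines blocks by XOR; permuting bytes within a block, or whole blocks, then collides trivially.

```python
# Generic collision pitfall: if the per-block compression depends only on the
# byte values (here a sum feeding the logistic map's initial condition) and
# blocks are combined with an order-insensitive operation like XOR, permuted
# messages collide. (Hypothetical toy construction for illustration.)

def weak_block_value(block):
    x = sum(block) / (sum(block) + 1)      # initial condition ignores byte order
    for _ in range(16):
        x = 3.99 * x * (1.0 - x)           # logistic-map "compression"
    return int(x * 2**32)

def weak_hash(message: bytes, block_size=4):
    h = 0
    for i in range(0, len(message), block_size):
        h ^= weak_block_value(message[i:i + block_size])   # XOR is order-insensitive
    return h

m1 = b"abcd" + b"wxyz"
m2 = b"wxyz" + b"abcd"          # same blocks, different order
m3 = b"dcba" + b"wxyz"          # same byte sum inside the first block
print(weak_hash(m1) == weak_hash(m2))  # True: block order is ignored
print(weak_hash(m1) == weak_hash(m3))  # True: byte order inside a block is ignored
```

    A sound design must bind both the position of each block and the position of each byte into the chaotic state, which is exactly the kind of precaution the Letter summarizes.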

  6. One-way hash function based on hyper-chaotic cellular neural network

    International Nuclear Information System (INIS)

    Yang Qunting; Gao Tiegang

    2008-01-01

    The design of an efficient one-way hash function with good performance is a hot spot in modern cryptography research. In this paper, a hash function construction method based on a cell neural network with hyper-chaos characteristics is proposed. First, a chaotic sequence is obtained by iterating the cellular neural network with the Runge–Kutta algorithm, and then this sequence is iterated together with the message. The hash code is obtained through the corresponding transform of the resulting chaotic sequence. Simulation and analysis demonstrate that the new method has the merits of convenience, high sensitivity to initial values, and good hash performance, especially strong stability. (general)

  7. IMPLIED AUTHOR IN PHILOSOPHICAL NOVELS

    Directory of Open Access Journals (Sweden)

    Olga Senkāne

    2014-10-01

    Full Text Available The present article falls within a series of papers on the specifics of the philosophical novel. The aim of this article is to analyze the author's function as a narrative category in classical philosophical novels (Franz Kafka's "The Trial" (1925) and "The Castle" (1926), Jean-Paul Sartre's "Nausea" (1938), Hermann Hesse's "The Glass Bead Game" (1943), Albert Camus's "The Plague" (1947)) and in a novel by the Latvian prose writer Ilze Šķipsna, "Neapsolītās zemes" ["Un-Promised Lands"] (1970). The analysis is based on the theoretical ideas of the structural narratologists Gerard Genette, William Labov, Seymour Chatman, and Wolf Schmid, the philosophers Edmund Husserl, Jean-Paul Sartre, and Paul Ricoeur, and the semioticians Yuri Lotman (Юрий Лотман) and Umberto Eco. The real author can "enter" the text only indirectly, as an image, with the help of the storyteller, and how this "entry" happens is determined by the narrative (communication) skills of the author. Thus, the author and the implied author are functionally different concepts: the author, as a real person, develops the concept idea, and his intention is to define the concept according to his original vision; the narrator, in turn, communicates with the reader, representing the concept, and his aim is to select means of communication appropriate to the reader's perceptual abilities.

  8. Cryptanalysis of Lin et al.'s Efficient Block-Cipher-Based Hash Function

    NARCIS (Netherlands)

    Liu, Bozhong; Gong, Zheng; Chen, Xiaohong; Qiu, Weidong; Zheng, Dong

    2010-01-01

    Hash functions are widely used in authentication. In this paper, the security of Lin et al.'s efficient block-cipher-based hash function is reviewed. By using Joux's multicollisions and Kelsey et al.'s expandable message techniques, we find the scheme is vulnerable to collision, preimage and second

  9. Authentication codes from ε-ASU hash functions with partially secret keys

    NARCIS (Netherlands)

    Liu, S.L.; Tilborg, van H.C.A.; Weng, J.; Chen, Kefei

    2014-01-01

    An authentication code can be constructed with a family of e-Almost strong universal (e-ASU) hash functions, with the index of hash functions as the authentication key. This paper considers the performance of authentication codes from e-ASU, when the authentication key is only partially secret. We

  10. Range-efficient consistent sampling and locality-sensitive hashing for polygons

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Pagh, Rasmus

    2017-01-01

    Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for...... or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses transformation to a geometric problem....
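    The collision idea behind LSH can be illustrated with a MinHash sketch for set similarity (a standard LSH family for Jaccard similarity, not the polygon scheme of this paper): similar sets agree on many minimum hash values, dissimilar sets on almost none.

```python
import random

# MinHash: the fraction of agreeing signature positions estimates the
# Jaccard similarity of the underlying sets.

def minhash_signature(items, n_hashes=64, seed=0):
    rng = random.Random(seed)
    p = (1 << 61) - 1                       # large prime modulus
    coeffs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(n_hashes)]
    # n_hashes random hash functions h(x) = (a*x + b) mod p over item hashes
    return [min((a * (hash(x) & 0xFFFFFFFF) + b) % p for x in items)
            for a, b in coeffs]

def estimated_jaccard(sig1, sig2):
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

A = set("locality sensitive hashing for similarity search".split())
B = set("locality sensitive hashing for similarity estimation".split())
C = set("completely unrelated words about cooking dinner tonight".split())

sA, sB, sC = (minhash_signature(s) for s in (A, B, C))
print(estimated_jaccard(sA, sB))  # high: A and B share most words
print(estimated_jaccard(sA, sC))  # near zero: disjoint sets
```

    In a full LSH index, signature positions are grouped into bands so that similar objects collide in at least one band with high probability, which is the property the paper extends to polygons.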

  11. Linear-XOR and Additive Checksums Don't Protect Damgard-Merkle Hashes

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Kelsey, John

    2008-01-01

    We consider the security of Damgård-Merkle variants which compute linear-XOR or additive checksums over message blocks, intermediate hash values, or both, and process these checksums in computing the final hash value. We show that these Damgård-Merkle variants gain almost no security...
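    The simplest symptom of this linearity can be demonstrated directly: an XOR checksum over message blocks is invariant under any permutation of the blocks, so by itself it cannot even bind block order (the paper's actual attacks are more involved, using multicollision and expandable-message techniques). A quick sketch:

```python
from functools import reduce

# A linear-XOR checksum over message blocks is order-insensitive:
# permuting the blocks leaves the checksum unchanged.

def xor_checksum(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

blocks = [b"blockAAA", b"blockBBB", b"blockCCC"]
reordered = [b"blockCCC", b"blockAAA", b"blockBBB"]
print(xor_checksum(blocks) == xor_checksum(reordered))  # True
```

    The inner iterated hash does still depend on block order; the point is that the checksum contributes no structure an attacker cannot control, which is what the multicollision attacks exploit.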

  12. Cryptanalysis of the 10-Round Hash and Full Compression Function of SHAvite-3-512

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Leurent, Gaëtan; Mendel, Florian

    2010-01-01

    In this paper, we analyze SHAvite-3-512 hash function, as proposed for round 2 of the SHA-3 competition. We present cryptanalytic results on 10 out of 14 rounds of the hash function SHAvite-3-512, and on the full 14 round compression function of SHAvite-3-512. We show a second preimage attack on ...

  13. The suffix-free-prefix-free hash function construction and its indifferentiability security analysis

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Knudsen, Lars R.

    2012-01-01

    In this paper, we observe that in the seminal work on indifferentiability analysis of iterated hash functions by Coron et al. and in subsequent works, the initial value (IV) of hash functions is fixed. In addition, these indifferentiability results do not depend on the Merkle–Damgård (MD) str...

  14. A novel method for one-way hash function construction based on spatiotemporal chaos

    International Nuclear Information System (INIS)

    Ren Haijun; Wang Yong; Xie Qing; Yang Huaqian

    2009-01-01

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance, and high efficiency, as required by practical keyed hash functions.

  15. A novel method for one-way hash function construction based on spatiotemporal chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren Haijun [College of Software Engineering, Chongqing University, Chongqing 400044 (China); State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China)], E-mail: jhren@cqu.edu.cn; Wang Yong; Xie Qing [Key Laboratory of Electronic Commerce and Logistics of Chongqing, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Yang Huaqian [Department of Computer and Modern Education Technology, Chongqing Education of College, Chongqing 400067 (China)

    2009-11-30

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed. Then it is divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system. Then, after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance, and high efficiency, as required by practical keyed hash functions.

  16. Side channel analysis of some hash based MACs:A response to SHA-3 requirements

    DEFF Research Database (Denmark)

    The forthcoming NIST's Advanced Hash Standard (AHS) competition to select SHA-3 hash function requires that each candidate hash function submission must have at least one construction to support FIPS 198 HMAC application. As part of its evaluation, NIST is aiming to select either a candidate hash...... function which is more resistant to known side channel attacks (SCA) when plugged into HMAC, or that has an alternative MAC mode which is more resistant to known SCA than the other submitted alternatives. In response to this, we perform differential power analysis (DPA) on the possible smart card...... implementations of some of the recently proposed MAC alternatives to NMAC (a fully analyzed variant of HMAC) and HMAC algorithms and NMAC/HMAC versions of some recently proposed hash and compression function modes. We show that the recently proposed BNMAC and KMDP MAC schemes are even weaker than NMAC...

  17. HashLearn Now: Mobile Tutoring in India

    OpenAIRE

    Arun Kumar Agariya; Binay Krishna Shivam; Shashank Murali; Jyoti Tikoria

    2016-01-01

    Looking at today’s competitive-exam scenario, a single mark may lead to a difference in rank of hundreds or even thousands of places. Looking at this problem from the student’s perspective, this article discusses the role of anywhere, anytime help for students in getting answers to their problems on a real-time basis from the application known as HashLearn Now. Smartphone usage by students clearly signifies the importance of this application for getting their queries answered b...

  18. ANALISA FUNGSI HASH DALAM ENKRIPSI IDEA UNTUK KEAMANAN RECORD INFORMASI

    Directory of Open Access Journals (Sweden)

    Ramen Antonov Purba

    2014-02-01

    Full Text Available Issues of data security and confidentiality are very important to organizations and individuals, especially when data travels over a network of computers connected to a public network such as the Internet, where important data can be viewed or hijacked by unauthorized persons; if this happens, data may be corrupted or lost, causing huge material losses. This research discusses a security system for sending messages/data using encryption, which aims to protect a message from people who are not authorized/eligible to access it. Because the scope of secure delivery systems is very extensive, this work is limited to the IDEA algorithm combined with hash functions, covering encryption and decryption. By combining IDEA (International Data Encryption Algorithm) encryption of the message/data contents with a hash function to detect changes to the message/data contents, a better security level is expected. The result of this study is software that can encrypt and decrypt messages/data and generate a security key based on the message/data being encrypted.
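    A hedged standard-library sketch of the encrypt-plus-hash idea described above, with stand-ins: a toy SHA-256-based XOR keystream replaces IDEA, and an HMAC-SHA-256 tag replaces the paper's hash construction. Names and parameters are illustrative only; this is not production cryptography.

```python
import hashlib
import hmac

def toy_stream_encrypt(key: bytes, data: bytes) -> bytes:
    # Keystream from repeated hashing of key||counter (stand-in for IDEA).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

def protect(key: bytes, message: bytes):
    ciphertext = toy_stream_encrypt(key, message)
    tag = hmac.new(key, ciphertext, hashlib.sha256).hexdigest()  # detects changes
    return ciphertext, tag

def verify_and_decrypt(key: bytes, ciphertext: bytes, tag: str):
    expected = hmac.new(key, ciphertext, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("message was modified")
    return toy_stream_encrypt(key, ciphertext)  # XOR stream is its own inverse

key = b"shared-secret"
ct, tag = protect(key, b"record: patient #42, confidential")
assert verify_and_decrypt(key, ct, tag) == b"record: patient #42, confidential"

tampered = bytes([ct[0] ^ 1]) + ct[1:]
try:
    verify_and_decrypt(key, tampered, tag)
except ValueError as e:
    print(e)   # tampering is detected before decryption
```

    The division of labor mirrors the abstract: the cipher provides confidentiality, while the keyed hash detects any change to the transmitted message/data.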

  19. An Ad Hoc Adaptive Hashing Technique for Non-Uniformly Distributed IP Address Lookup in Computer Networks

    Directory of Open Access Journals (Sweden)

    Christopher Martinez

    2007-02-01

    Full Text Available Hashing algorithms have long been widely adopted to design fast address look-up processes, which involve searching through a large database to find the record associated with a given key. Hashing algorithms transform the key inside each target data item into a hash value, in the hope that the hashing renders the database uniformly distributed with respect to this new hash value. The closer the final distribution is to uniform, the less search time is required when a query is made. When the database is already key-wise uniformly distributed, any regular hashing algorithm, such as bit extraction, bit-group XOR, etc., easily leads to a statistically perfect uniform distribution after the hashing. On the other hand, if records in the database are not uniformly distributed, as in almost all known practical applications, then different regular hash functions can lead to very different performance. When the target database has a key with a highly skewed distribution, the performance delivered by regular hashing algorithms usually becomes far from desirable. This paper aims at designing a hashing algorithm that achieves the highest probability of producing a uniformly distributed hash result from a non-uniformly distributed database. An analytical pre-process on the original database is first performed to extract critical information that significantly benefits the design of a better hashing algorithm. This process includes sorting the bits of the key to prioritize their use in the XOR hashing sequence, in simple bit extraction, or in a combination of both. Such an ad hoc hash design is critical for adapting to real-time situations with a changing (and/or expanding) database having an irregular, non-uniform distribution. Significant improvement is obtained in simulation results on randomly generated data as well as real data.
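    A rough sketch of the adaptive idea under stated assumptions (the bit-scoring rule and the skewed key set below are invented for illustration; the paper's analysis is more elaborate): inspect the actual key distribution, rank bit positions by how close they come to a 50/50 split, and build the table index from the most balanced bits.

```python
# Adaptive bit selection: score each bit of the real keys by balance and
# index the hash table with the most informative bits.

def bit(value, i):
    return (value >> i) & 1

def rank_bits(keys, width=16):
    # Balance score: 0.0 is a perfect 50/50 split, 0.5 is a constant bit.
    scores = []
    for i in range(width):
        ones = sum(bit(k, i) for k in keys)
        scores.append((abs(ones / len(keys) - 0.5), i))
    return [i for _, i in sorted(scores)]          # most balanced first

def adaptive_hash(key, chosen_bits):
    h = 0
    for i in chosen_bits:
        h = (h << 1) | bit(key, i)
    return h

# Skewed "IP-like" keys: the top nibble is constant, only low bits vary.
keys = [0xC000 + (i * 37) % 4096 for i in range(200)]
chosen = rank_bits(keys)[:8]                       # 8 most informative bits
buckets = {}
for k in keys:
    buckets.setdefault(adaptive_hash(k, chosen), []).append(k)
print(len(buckets))                                # many buckets => good spread
```

    Because the constant high bits score worst, the pre-process automatically avoids them, which is exactly what a fixed bit-extraction scheme on a skewed key space fails to do.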

  20. 32 CFR 634.8 - Implied consent.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Implied consent. 634.8 Section 634.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Driving Privileges § 634.8 Implied consent. (a) Implied consent to blood, breath, or urine tests....

  1. On Randomizing Hash Functions to Strengthen the Security of Digital Signatures

    DEFF Research Database (Denmark)

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX-hash-th...... that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack....

  2. On randomizing hash functions to strengthen the security of digital signatures

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Knudsen, Lars Ramkilde

    2009-01-01

    Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to the hash-then-sign digital signature schemes such as DSS and RSA in order to free their reliance on the collision resistance property of the hash functions. They have shown that to forge a RMX-hash-th...... schemes that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack....

  3. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
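    The three-module pipeline can be sketched with standard-library stand-ins (zlib for the compression module, a fixed slice for the data acquisition module, SHA-256 for the hashing module; the offset and length are illustrative assumptions, not values from the patent).

```python
import hashlib
import zlib

def media_keyword(media_signal: bytes, offset=16, length=64) -> str:
    compressed = zlib.compress(media_signal)          # data compression module
    selected = compressed[offset:offset + length]     # data acquisition module
    return hashlib.sha256(selected).hexdigest()       # hashing module -> keyword

frame = bytes(range(256)) * 64                        # stand-in "media signal"
print(media_keyword(frame))
```

    The appeal of deriving the keyword from the compressed stream is that the selected bytes are already decorrelated, so even a small slice reflects the content of the whole signal.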

  4. Rotation invariant deep binary hashing for fast image retrieval

    Science.gov (United States)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.

  5. Constructing a one-way hash function based on the unified chaotic system

    International Nuclear Information System (INIS)

    Long Min; Peng Fei; Chen Guanrong

    2008-01-01

    A new one-way hash function based on the unified chaotic system is constructed. With different values of a key parameter, the unified chaotic system represents different chaotic systems, based on which the one-way hash function algorithm is constructed with three round operations and an initial vector on an input message. In each round operation, the parameters are processed by three different chaotic systems generated from the unified chaotic system. Feed-forwards are used at the end of each round operation and at the end of each element of the message processing. Meanwhile, in each round operation, parameter-exchanging operations are implemented. Then, the hash value of length 160 bits is obtained from the last six parameters. Simulation and analysis both demonstrate that the algorithm has great flexibility, satisfactory hash performance, weak collision property, and high security. (general)

  6. Accelerating SPARQL queries by exploiting hash-based locality and adaptive partitioning

    KAUST Repository

    Al-Harbi, Razen; Abdelaziz, Ibrahim; Kalnis, Panos; Mamoulis, Nikos; Ebrahim, Yasser; Sahli, Majed

    2016-01-01

    State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which

  7. One-way hash function construction based on chaotic map network

    International Nuclear Information System (INIS)

    Yang Huaqian; Wong, K.-W.; Liao Xiaofeng; Wang Yong; Yang Degang

    2009-01-01

    A novel chaotic hash algorithm based on a network structure formed by 16 chaotic maps is proposed. The original message is first padded with zeros to make its length a multiple of four. Then it is divided into a number of blocks, each containing 4 bytes. In the hashing process, the blocks are mixed together by the chaotic map network, since the initial value and the control parameter of each tent map are dynamically determined by the output of its neighbors. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance, and high flexibility, as required by practical keyed hash functions.

  8. Forecasting with Option-Implied Information

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Chang, Bo Young

    2013-01-01

    This chapter surveys the methods available for extracting information from option prices that can be used in forecasting. We consider option-implied volatilities, skewness, kurtosis, and densities. More generally, we discuss how any forecasting object that is a twice differentiable function...... of the future realization of the underlying risky asset price can utilize option-implied information in a well-defined manner. Going beyond the univariate option-implied density, we also consider results on option-implied covariance, correlation and beta forecasting, as well as the use of option......-implied information in cross-sectional forecasting of equity returns. We discuss how option-implied information can be adjusted for risk premia to remove biases in forecasting regressions....
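    The chapter's remark that any twice differentiable function of the underlying's future realization can draw on option prices connects to the Breeden-Litzenberger relation: the risk-neutral density is the second strike-derivative of the call price. A minimal numerical sketch, using Black-Scholes prices with r = 0 as hypothetical "observed" quotes (an assumption for illustration only):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, sigma, T):
    # Black-Scholes call price with zero interest rate.
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

S, sigma, T = 100.0, 0.2, 1.0
dK = 1.0
strikes = [40.0 + i * dK for i in range(121)]        # K = 40 .. 160
calls = [bs_call(S, K, sigma, T) for K in strikes]

# Breeden-Litzenberger: implied density q(K) ~ d^2 C / dK^2 (central differences)
density = [(calls[i-1] - 2 * calls[i] + calls[i+1]) / dK**2
           for i in range(1, len(calls) - 1)]

print(min(density))            # nonnegative: call prices are convex in strike
print(sum(density) * dK)       # close to 1: probability mass inside the strike range
```

    Truncated strikes are exactly why the mass inside the observed range falls short of one; the chapter's sieve framework formalizes the tail extrapolation needed to close that gap.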

  9. HashDist: Reproducible, Relocatable, Customizable, Cross-Platform Software Stacks for Open Hydrological Science

    Science.gov (United States)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as cloud, and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and

  10. Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.

    Science.gov (United States)

    Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang

    2018-09-01

    Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
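The weighted average pooling in part 3 of this architecture can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation; the array shapes and the `masked_average_pool` name are assumptions:

```python
import numpy as np

def masked_average_pool(features: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """features: (H, W, C) convolutional feature map; mask: (H, W) binary map
    marking object locations. Average features only over foreground positions,
    so the background contributes nothing to the pooled representation."""
    w = mask[..., None].astype(float)              # (H, W, 1) weights
    return (features * w).sum(axis=(0, 1)) / max(w.sum(), 1.0)

# Tiny worked example: only the top-left position is marked as "object".
feat = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
mask = np.array([[1, 0], [0, 0]])
pooled = masked_average_pool(feat, mask)
assert pooled.tolist() == [0.0, 1.0, 2.0]          # exactly feat[0, 0]
```

The pooled vector would then feed the hash layer that produces the length-limited binary code.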

  11. Practical security and privacy attacks against biometric hashing using sparse recovery

    Science.gov (United States)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password, in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash, and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction, for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that compressed sensing recovery techniques enable considerably more severe security threats. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.

  12. UQlust: combining profile hashing with linear-time ranking for efficient clustering and analysis of big macromolecular data.

    Science.gov (United States)

    Adamczak, Rafal; Meller, Jarek

    2016-12-28

    Advances in computing have enabled current protein and RNA structure prediction and molecular simulation methods to dramatically increase their sampling of conformational spaces. The quickly growing number of experimentally resolved structures, and databases such as the Protein Data Bank, also implies large scale structural similarity analyses to retrieve and classify macromolecular data. Consequently, the computational cost of structure comparison and clustering for large sets of macromolecular structures has become a bottleneck that necessitates further algorithmic improvements and development of efficient software solutions. uQlust is a versatile and easy-to-use tool for ultrafast ranking and clustering of macromolecular structures. uQlust makes use of structural profiles of proteins and nucleic acids, while combining a linear-time algorithm for implicit comparison of all pairs of models with profile hashing to enable efficient clustering of large data sets with a low memory footprint. In addition to ranking and clustering of large sets of models of the same protein or RNA molecule, uQlust can also be used in conjunction with fragment-based profiles in order to cluster structures of arbitrary length. For example, hierarchical clustering of the entire PDB using profile hashing can be performed on a typical laptop, thus opening an avenue for structural explorations previously limited to dedicated resources. The uQlust package is freely available under the GNU General Public License at https://github.com/uQlust . uQlust represents a drastic reduction in the computational complexity and memory requirements with respect to existing clustering and model quality assessment methods for macromolecular structure analysis, while yielding results on par with traditional approaches for both proteins and RNAs.

  13. Implied terms in English and Romanian law

    OpenAIRE

    Stefan Dinu

    2015-01-01

This study analyses the matter of implied terms from the point of view of both English and Romanian law. First, the introductory section provides a brief overview of implied terms, by defining this class of contractual clauses and by providing their general features. Second, the English law position is analysed, where it is generally recognised that a term may be implied in one of three manners, which are described in turn. An emphasis is made on the Privy Council’s decision in Attorney G...

  14. Option-implied measures of equity risk

    DEFF Research Database (Denmark)

    Chang, Bo-Young; Christoffersen, Peter; Vainberg, Gregory

    2012-01-01

    Equity risk measured by beta is of great interest to both academics and practitioners. Existing estimates of beta use historical returns. Many studies have found option-implied volatility to be a strong predictor of future realized volatility. We find that option-implied volatility and skewness...... are also good predictors of future realized beta. Motivated by this finding, we establish a set of assumptions needed to construct a beta estimate from option-implied return moments using equity and index options. This beta can be computed using only option data on a single day. It is therefore potentially...

  15. Implied terms in English and Romanian law

    Directory of Open Access Journals (Sweden)

    Stefan Dinu

    2015-12-01

Full Text Available This study analyses the matter of implied terms from the point of view of both English and Romanian law. First, the introductory section provides a brief overview of implied terms, by defining this class of contractual clauses and by providing their general features. Second, the English law position is analysed, where it is generally recognised that a term may be implied in one of three manners, which are described in turn. An emphasis is made on the Privy Council’s decision in Attorney General of Belize v Belize Telecom Ltd and its impact. Third, the Romanian law position is described, the starting point of the discussion being represented by the provisions of Article 1272 of the 2009 Civil Code. Fourth, the study ends by mentioning some points of comparison between the two legal systems in what concerns the approach towards implied terms.

  16. SEMANTIC SEGMENTATION OF BUILDING ELEMENTS USING POINT CLOUD HASHING

    Directory of Open Access Journals (Sweden)

    M. Chizhova

    2018-05-01

Full Text Available For the interpretation of point clouds, the semantic definition of extracted segments from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized through a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design), allowing clear semantic modelling.
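The hashing step described above, transforming point cloud projections into binary pixel representations, can be sketched as follows. This is a minimal illustration assuming a simple top-down occupancy projection; the resolution and the `binary_projection` name are assumptions, not the authors' code:

```python
import numpy as np

def binary_projection(points: np.ndarray, res: int = 32) -> np.ndarray:
    """Project a 3-D point cloud onto the XY plane and hash it into a
    res x res binary occupancy image (1 = at least one point in that pixel)."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    idx = ((xy - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    img = np.zeros((res, res), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 1                  # row = y, column = x
    return img

rng = np.random.default_rng(0)
cloud = rng.random((500, 3))                       # stand-in for a scanned facade
img = binary_projection(cloud)
assert img.shape == (32, 32) and set(np.unique(img)) <= {0, 1}
```

Binary images of this kind can then be compared against stored templates of known structural elements (dome, nave, transept).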

  17. Hash-chain-based authentication for IoT

    Directory of Open Access Journals (Sweden)

    Antonio PINTO

    2016-12-01

Full Text Available The number of everyday interconnected devices continues to increase, and these devices constitute the Internet of Things (IoT). Things are small computers equipped with sensors and wireless communications capabilities that are driven by energy constraints, since they use batteries and may be required to operate over long periods of time. The majority of these devices perform data collection. The collected data is stored on-line using web-services that, sometimes, operate without any special considerations regarding security and privacy. The current work proposes a modified hash-chain authentication mechanism that, with the help of a smartphone, can authenticate each interaction of the devices with a REST web-service using One Time Passwords (OTP) while using open wireless networks. Moreover, the proposed authentication mechanism adheres to the stateless, HTTP-like behavior expected of REST web-services, even allowing the caching of server authentication replies within a predefined time window. No other known web-service authentication mechanism operates in such manner.
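A classic (Lamport-style) hash chain of the kind this scheme builds on can be sketched as follows. This is a minimal illustration of the underlying one-time-password idea, not the paper's smartphone-assisted protocol; SHA-256 and the chain length are assumptions:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int) -> bytes:
    """Hash the seed n times; the final value is the server's anchor."""
    h = seed
    for _ in range(n):
        h = sha256(h)
    return h

class Server:
    def __init__(self, anchor: bytes):
        self.current = anchor       # h^n(seed), registered out of band

    def verify(self, otp: bytes) -> bool:
        # A valid OTP is the preimage of the stored value under one hash step.
        if sha256(otp) == self.current:
            self.current = otp      # move the anchor back one step
            return True
        return False

# Device side: the i-th password (counting down) is h^(n-i)(seed).
seed, n = b"device-secret", 1000
server = Server(make_chain(seed, n))
otp1 = make_chain(seed, n - 1)
assert server.verify(otp1)          # first login succeeds
assert not server.verify(otp1)      # replaying the same OTP fails
```

Because each accepted OTP becomes the new anchor, an eavesdropper on an open wireless network learns only values that are no longer valid.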

  18. Parameter-free Locality Sensitive Hashing for Spherical Range Reporting

    DEFF Research Database (Denmark)

    Ahle, Thomas Dybdahl; Pagh, Rasmus; Aumüller, Martin

    2017-01-01

    We present a data structure for *spherical range reporting* on a point set S, i.e., reporting all points in S that lie within radius r of a given query point q. Our solution builds upon the Locality-Sensitive Hashing (LSH) framework of Indyk and Motwani, which represents the asymptotically best...... solutions to near neighbor problems in high dimensions. While traditional LSH data structures have several parameters whose optimal values depend on the distance distribution from q to the points of S, our data structure is parameter-free, except for the space usage, which is configurable by the user...... query time bounded by O(t(n/t)ρ), where t is the number of points to report and ρ∈(0,1) depends on the data distribution and the strength of the LSH family used. We further present a parameter-free way of using multi-probing, for LSH families that support it, and show that for many such families...

  19. Adoption of the Hash algorithm in a conceptual model for the civil registry of Ecuador

    Science.gov (United States)

    Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio

    2018-04-01

The Hash security algorithm was analyzed in order to mitigate information security risks in a distributed architecture. The objective of this research is to develop a prototype for the adoption of the Hash algorithm in a conceptual model for the Civil Registry of Ecuador. The deductive method was used in order to analyze the published articles that have a direct relation with the research project "Algorithms and Security Protocols for the Civil Registry of Ecuador" and articles related to the Hash security algorithm. This research found that the SHA-1 security algorithm is appropriate for use in Ecuador's civil registry; we adopted the SHA-1 algorithm using the flowchart technique and finally obtained the adoption of the hash algorithm in a conceptual model. From the comparison of the MD5 and SHA-1 algorithms, it is suggested that, in the case of an implementation, the SHA-1 algorithm be taken due to the amount of information and data available from the Civil Registry of Ecuador. It is determined that the SHA-1 algorithm defined using the flowchart technique can be modified according to the requirements of each institution; the model for adopting the hash algorithm in a conceptual model is a prototype that can be modified according to all the actors that make up each organization.
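As a minimal illustration of the two digests being compared, Python's standard `hashlib` exposes both algorithms. The record content below is hypothetical; note also that neither MD5 nor SHA-1 is considered collision-resistant by modern cryptographic standards:

```python
import hashlib

record = b"citizen-id:0912345678|name:Example Citizen"  # hypothetical registry record

md5_digest = hashlib.md5(record).hexdigest()     # 128-bit digest
sha1_digest = hashlib.sha1(record).hexdigest()   # 160-bit digest

print("MD5  :", md5_digest)
print("SHA-1:", sha1_digest)

# Any change to the record yields a completely different digest,
# which is what makes the hash usable as an integrity check.
assert hashlib.sha1(record + b".").hexdigest() != sha1_digest
```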

  20. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

Full Text Available The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in point localization problems. A method of constructing the corresponding hash matrix by a parallel algorithm is considered. In this paper an algorithm for parallel geometric hashing using the «process pool» development pattern is proposed. The implementation of the algorithm is executed using the Python programming language and the NumPy package for manipulating multidimensional data. To implement the process pool it is proposed to use the ProcessPoolExecutor class imported from the module concurrent.futures, which is included in the distribution of the Python interpreter since version 3.2. All the solutions are presented in the paper by corresponding UML class diagrams. The designed GeomHash package includes the classes Data, Result, GeomHash, Job. The results of the developed program are presented in the corresponding graphs. Also, the article presents the theoretical justification for the application of a process pool for the implementation of parallel algorithms. The condition t2 > (p/(p-1))*t1 for the appropriateness of a process pool is obtained, where t1 is the time of transmission of a unit of data between processes, and t2 is the time of processing a unit of data by one processor.
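The «process pool» pattern the article describes can be sketched with `concurrent.futures.ProcessPoolExecutor` and NumPy. This is an illustrative grid-hashing sketch, not the GeomHash package itself; the cell size and function names are assumptions:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

CELL = 0.25  # hash cell size (hypothetical parameter)

def hash_chunk(points) -> np.ndarray:
    """Map each 2-D point to its integer grid cell: the geometric hash."""
    return np.floor(np.asarray(points) / CELL).astype(int)

def parallel_hash(points: np.ndarray, workers: int = 4) -> np.ndarray:
    """Split the points across a pool of worker processes and merge the results."""
    chunks = np.array_split(points, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(hash_chunk, chunks)))

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((1000, 2))
    cells = parallel_hash(pts)
    assert cells.shape == (1000, 2)
```

Per the article's condition, the pool only pays off when the per-unit processing time t2 exceeds (p/(p-1))*t1, i.e. when computation dominates inter-process transmission.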

  1. An algorithm for the detection of move repetition without the use of hash-keys

    Directory of Open Access Journals (Sweden)

    Vučković Vladan

    2007-01-01

    Full Text Available This paper addresses the theoretical and practical aspects of an important problem in computer chess programming - the problem of draw detection in cases of position repetition. The standard approach used in the majority of computer chess programs is hash-oriented. This method is sufficient in most cases, as the Zobrist keys are already present due to the systemic positional hashing, so that they need not be computed anew for the purpose of draw detection. The new type of the algorithm that we have developed solves the problem of draw detection in cases when Zobrist keys are not used in the program, i.e. in cases when the memory is not hashed.
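For contrast, the standard hash-oriented approach the abstract refers to can be sketched with Zobrist keys: one random bitstring per (piece, square) pair, XORed together, so that a move updates the key with just two XORs. A minimal illustration (the board encoding is an assumption):

```python
import random

random.seed(42)
PIECES = "PNBRQKpnbrqk"
# One 64-bit random number per (piece, square) pair.
ZOBRIST = {(p, sq): random.getrandbits(64) for p in PIECES for sq in range(64)}

def zobrist_key(board: dict) -> int:
    """board maps square index -> piece letter."""
    key = 0
    for sq, piece in board.items():
        key ^= ZOBRIST[(piece, sq)]
    return key

def is_threefold(history: list, key: int) -> bool:
    """Draw by repetition: the current key already occurred at least twice."""
    return history.count(key) >= 2

# Moving a piece updates the key incrementally with two XORs:
board = {4: "K", 60: "k"}
key = zobrist_key(board)
key ^= ZOBRIST[("K", 4)] ^ ZOBRIST[("K", 5)]   # white king moves one square
board = {5: "K", 60: "k"}
assert key == zobrist_key(board)
```

The paper's contribution is an algorithm that detects such repetitions without maintaining these keys at all, for engines that do not hash positions.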

  2. Quicksort, largest bucket, and min-wise hashing with limited independence

    DEFF Research Database (Denmark)

    Knudsen, Mathias Bæk Tejs; Stöckel, Morten

    2015-01-01

    Randomized algorithms and data structures are often analyzed under the assumption of access to a perfect source of randomness. The most fundamental metric used to measure how “random” a hash function or a random number generator is, is its independence: a sequence of random variables is said...... to be k-independent if every variable is uniform and every size k subset is independent. In this paper we consider three classic algorithms under limited independence. Besides the theoretical interest in removing the unrealistic assumption of full independence, the work is motivated by lower independence...... being more practical. We provide new bounds for randomized quicksort, min-wise hashing and largest bucket size under limited independence. Our results can be summarized as follows. Randomized Quicksort. When pivot elements are computed using a 5-independent hash function, Karloff and Raghavan, J.ACM’93...
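The k-independent families studied here can be instantiated as degree-(k-1) polynomials over a prime field; min-wise hashing then estimates set similarity from the probability that two sets share a minimum hash value. A small sketch under illustrative parameter choices:

```python
import random

P = (1 << 61) - 1  # Mersenne prime

def k_independent_hash(k: int, seed: int = 0):
    """A random degree-(k-1) polynomial over Z_p gives a k-independent family."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(1, P)] + [rng.randrange(P) for _ in range(k - 1)]
    def h(x: int) -> int:
        acc = 0
        for c in coeffs:            # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return h

def minhash_similarity(a: set, b: set, trials: int = 200) -> float:
    """Estimate Jaccard(a, b) as Pr[min-hash of a == min-hash of b]."""
    agree = 0
    for t in range(trials):
        h = k_independent_hash(5, seed=t)   # 5-independent, illustratively
        if min(map(h, a)) == min(map(h, b)):
            agree += 1
    return agree / trials

A, B = set(range(100)), set(range(50, 150))
est = minhash_similarity(A, B)   # true Jaccard = 50/150 ≈ 0.33
assert 0.2 < est < 0.5
```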

  3. One-way Hash function construction based on the chaotic map with changeable-parameter

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Deng Shaojiang

    2005-01-01

    An algorithm for one-way Hash function construction based on the chaotic map with changeable-parameter is proposed in this paper. A piecewise linear chaotic map with changeable-parameter P is chosen, and cipher block chaining mode (CBC) is introduced to ensure that the parameter P in each iteration is dynamically decided by the last-time iteration value and the corresponding message bit in different positions. The final Hash value is obtained by means of the linear transform on the iteration sequence. Theoretical analysis and computer simulation indicate that our algorithm can satisfy all the performance requirements of Hash function in an efficient and flexible manner. It is practicable and reliable, with high potential to be adopted for E-commerce

  4. Internal differential collision attacks on the reduced-round Grøstl-0 hash function

    DEFF Research Database (Denmark)

    Ideguchi, Kota; Tischhauser, Elmar Wolfgang; Preneel, Bart

    2014-01-01

    . This results in collision attacks and semi-free-start collision attacks on the Grøstl-0 hash function and compression function with reduced rounds. Specifically, we show collision attacks on the Grøstl-0-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities 248 and 2112 and on the Grøstl......-0-512 hash function reduced to 6 out of 14 rounds with time complexity 2183. Furthermore, we demonstrate semi-free-start collision attacks on the Grøstl-0-256 compression function reduced to 8 rounds and the Grøstl-0-512 compression function reduced to 9 rounds. Finally, we show improved...

  5. One-way Hash function construction based on the chaotic map with changeable-parameter

    Energy Technology Data Exchange (ETDEWEB)

    Xiao Di [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China) and College of Mechanical Engineering, Chongqing University, Chongqing 400044 (China)]. E-mail: xiaodi_cqu@hotmail.com; Liao Xiaofeng [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)]. E-mail: xfliao@cqu.edu.cn; Deng Shaojiang [College of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)

    2005-04-01

    An algorithm for one-way Hash function construction based on the chaotic map with changeable-parameter is proposed in this paper. A piecewise linear chaotic map with changeable-parameter P is chosen, and cipher block chaining mode (CBC) is introduced to ensure that the parameter P in each iteration is dynamically decided by the last-time iteration value and the corresponding message bit in different positions. The final Hash value is obtained by means of the linear transform on the iteration sequence. Theoretical analysis and computer simulation indicate that our algorithm can satisfy all the performance requirements of Hash function in an efficient and flexible manner. It is practicable and reliable, with high potential to be adopted for E-commerce.

  6. Implied Volatility Surface: Construction Methodologies and Characteristics

    OpenAIRE

    Cristian Homescu

    2011-01-01

    The implied volatility surface (IVS) is a fundamental building block in computational finance. We provide a survey of methodologies for constructing such surfaces. We also discuss various topics which can influence the successful construction of IVS in practice: arbitrage-free conditions in both strike and time, how to perform extrapolation outside the core region, choice of calibrating functional and selection of numerical optimization algorithms, volatility surface dynamics and asymptotics.

  7. Quantum Hash function and its application to privacy amplification in quantum key distribution, pseudo-random number generation and image encryption

    Science.gov (United States)

    Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.

  9. Construction of secure and fast hash functions using nonbinary error-correcting codes

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Preneel, Bart

    2002-01-01

    constructions based on block ciphers such as the Data Encryption Standard (DES), where the key size is slightly smaller than the block size; IDEA, where the key size is twice the block size; Advanced Encryption Standard (AES), with a variable key size; and to MD4-like hash functions. Under reasonable...

  10. Security analysis of a one-way hash function based on spatiotemporal chaos

    International Nuclear Information System (INIS)

    Wang Shi-Hong; Shan Peng-Yang

    2011-01-01

    The collision and statistical properties of a one-way hash function based on spatiotemporal chaos are investigated. Analysis and simulation results indicate that collisions exist in the original algorithm and, therefore, the original algorithm is insecure and vulnerable. An improved algorithm is proposed to avoid the collisions. (general)

  11. Dakota - hashing from a combination of modular arithmetic and symmetric cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assum...

  12. Dakota – Hashing from a Combination of Modular Arithmetic and Symmetric Cryptography

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    In this paper a cryptographic hash function is proposed, where collision resistance is based upon an assumption that involves squaring modulo an RSA modulus in combination with a one-way function that does not compress its input, and may therefore be constructed from standard techniques and assum...

  13. Rebound Attacks on the Reduced Grøstl Hash Function

    DEFF Research Database (Denmark)

    Mendel, Florian; Rechberger, C.; Schlaffer, Martin

    2010-01-01

    Grøstl is one of 14 second round candidates of the NIST SHA-3 competition. Cryptanalytic results on the wide-pipe compression function of Grøstl-256 have already been published. However, little is known about the hash function, arguably a much more interesting cryptanalytic setting. Also, Grøstl...

  14. Asset allocation using option-implied moments

    Science.gov (United States)

    Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.

    2017-09-01

This study uses an option-implied distribution as the input in asset allocation. The computation of risk-neutral densities (RND) is based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RND estimation does not incorporate a risk premium, the conversion of RND into real-world density (RWD) is required. The RWD is obtained through parametric calibration using the beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated by using portfolio volatility and the Sharpe ratio.

  15. XBRL How It Implies The Audit Process

    Directory of Open Access Journals (Sweden)

    Sepky Mardian

    2015-08-01

Full Text Available This article aims to explain what XBRL is, how it works, and how it affects the audit process. XBRL, as a new tool, was expected to produce timely, reliable and credible financial reporting. With its real-time and interactive data, XBRL will help investors and other stakeholders in receiving, storing and analyzing information quickly. In the audit profession, XBRL will speed up the audit process, save audit costs and increase revenue. However, XBRL will only deliver these benefits if it is implemented and integrated into the information systems of data/information providers.

  16. Local Deep Hashing Matching of Aerial Images Based on Relative Distance and Absolute Distance Constraints

    Directory of Open Access Journals (Sweden)

    Suting Chen

    2017-12-01

Full Text Available Aerial images have features of high resolution and complex background, and usually require large amounts of calculation; however, most algorithms used in the matching of aerial images adopt shallow hand-crafted features expressed as floating-point descriptors (e.g., SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features)), which may suffer from poor matching speed and are not well represented in the literature. Here, we propose a novel Local Deep Hashing Matching (LDHM) method for the matching of aerial images with large size and with lower complexity and faster matching speed. The basic idea of the proposed algorithm is to utilize a deep network model in the local areas of the aerial images, and to learn the local features as well as the hash function of the images. Firstly, according to the coarse overlap rate of the aerial images, the algorithm extracts the local areas for matching to avoid the processing of redundant information. Secondly, a triplet network structure is proposed to mine the deep features of the patches of the local image, and the learned features are imported to the hash layer, thus obtaining the representation of a binary hash code. Thirdly, the constraints of the positive samples on the absolute distance are added on the basis of the triplet loss, and a new objective function is constructed to optimize the parameters of the network and enhance the discriminating capabilities of image patch features. Finally, the obtained deep hash code of each image patch is used for the similarity comparison of the image patches in the Hamming space to complete the matching of aerial images. The proposed LDHM algorithm is evaluated on the UltraCam-D dataset and a set of actual aerial images; simulation results demonstrate that it may significantly outperform the state-of-the-art algorithms in terms of efficiency and performance.
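The combined objective in the third step, a triplet ranking term plus an absolute distance cap on positive pairs, can be sketched numerically. This is a schematic loss with margin and cap values chosen as assumptions, not the paper's tuned objective:

```python
import numpy as np

def triplet_loss_with_abs(anchor, pos, neg, margin=1.0, abs_cap=0.5):
    """Relative constraint: d(a, p) + margin < d(a, n).
    Absolute constraint (the added term): d(a, p) itself must stay small."""
    d_ap = np.sum((anchor - pos) ** 2)
    d_an = np.sum((anchor - neg) ** 2)
    relative = max(0.0, d_ap - d_an + margin)
    absolute = max(0.0, d_ap - abs_cap)
    return relative + absolute

a = np.array([0.0, 0.0])
n = np.array([2.0, 0.0])
# A loose positive satisfies the relative constraint but not the absolute one:
loss = triplet_loss_with_abs(a, np.array([1.5, 0.0]), n)
assert loss == 1.75   # relative term is 0, absolute term is 2.25 - 0.5
```

The example shows why the extra term helps: a positive pair can pass the relative test while still being too far apart for a tight binary code, and the absolute cap penalizes exactly that case.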

  17. Random multispace quantization as an analytic mechanism for BioHashing of biometric and random identity inputs.

    Science.gov (United States)

    Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L

    2006-12-01

    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
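The BioHashing construction, token-derived random projections followed by bit quantization (the Random Multispace Quantization described above), can be sketched as follows. The feature dimension, code length and sign threshold are illustrative assumptions:

```python
import numpy as np

def biohash(feature: np.ndarray, token_seed: int, n_bits: int = 64) -> np.ndarray:
    """Project the biometric feature onto token-derived random directions
    and quantize each projection to one bit (sign threshold at 0)."""
    rng = np.random.default_rng(token_seed)
    # Orthonormalized random basis, seeded by the user's token/password.
    basis, _ = np.linalg.qr(rng.standard_normal((feature.size, n_bits)))
    return (feature @ basis > 0).astype(np.uint8)

rng = np.random.default_rng(1)
face = rng.standard_normal(256)           # stand-in for a real feature vector
code_a = biohash(face, token_seed=1234)
code_b = biohash(face + 0.05 * rng.standard_normal(256), token_seed=1234)
stolen = biohash(face, token_seed=9999)   # same biometric, reissued token

assert np.mean(code_a != code_b) < 0.25                # small noise: few bit flips
assert abs(np.mean(code_a != stolen) - 0.5) < 0.25     # new token: fresh, unrelated code
```

The last line illustrates cancelability: reissuing the token yields a statistically independent code from the same biometric.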

  18. Measurement contextuality is implied by macroscopic realism

    International Nuclear Information System (INIS)

    Chen Zeqian; Montina, A.

    2011-01-01

    Ontological theories of quantum mechanics provide a realistic description of single systems by means of well-defined quantities conditioning the measurement outcomes. In order to be complete, they should also fulfill the minimal condition of macroscopic realism. Under the assumption of outcome determinism and for Hilbert space dimension greater than 2, they were all proved to be contextual for projective measurements. In recent years a generalized concept of noncontextuality was introduced that applies also to the case of outcome indeterminism and unsharp measurements. It was pointed out that the Beltrametti-Bugajski model is an example of measurement noncontextual indeterminist theory. Here we provide a simple proof that this model is the only one with such a feature for projective measurements and Hilbert space dimension greater than 2. In other words, there is no extension of quantum theory providing more accurate predictions of outcomes and simultaneously preserving the minimal labeling of events through projective operators. As a corollary, noncontextuality for projective measurements implies noncontextuality for unsharp measurements. By noting that the condition of macroscopic realism requires an extension of quantum theory, unless a breaking of unitarity is invoked, we arrive at the conclusion that the only way to solve the measurement problem in the framework of an ontological theory is by relaxing the hypothesis of measurement noncontextuality in its generalized sense.

  19. HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database

    International Nuclear Information System (INIS)

    Parker, Quentin A; Bojičić, Ivan S; Frew, David J

    2016-01-01

    By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues, we provide, for the first time, an accessible, reliable, on-line SQL database of essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science. (paper)

  20. EFFICIENCY ANALYSIS OF HASHING METHODS FOR FILE SYSTEMS IN USER MODE

    Directory of Open Access Journals (Sweden)

    E. Y. Ivanov

    2013-05-01

    The article deals with the characteristics and performance of interaction protocols between the virtual file system and the file system, and their influence on the processing power of microkernel operating systems. A user-mode implementation of the ext2 file system for MINIX 3 OS is used to show that in microkernel operating systems file object identification time might increase up to 26 times in comparison with monolithic systems. Therefore, we present an efficiency analysis of various hashing methods for file systems running in user mode. Studies have shown that, using the hashing methods recommended in this paper, it is possible to achieve competitive performance of the considered component of the I/O stacks in microkernel and monolithic operating systems.
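One common family of hashing methods for file-name lookup is a bucketed table keyed by a cheap string hash such as FNV-1a. The sketch below is illustrative only and is not tied to the specific variants measured in the paper:

```python
def fnv1a_32(name: bytes) -> int:
    # FNV-1a: a cheap, well-distributed string hash often used for
    # file-name lookup tables.
    h = 0x811C9DC5                       # offset basis
    for b in name:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF  # FNV prime, 32-bit wrap
    return h

class DirCache:
    # Toy directory-entry cache: name -> inode number, bucketed by FNV-1a,
    # so a lookup hashes once and scans a single short chain.
    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]

    def insert(self, name: str, inode: int):
        b = fnv1a_32(name.encode()) % len(self.buckets)
        self.buckets[b].append((name, inode))

    def lookup(self, name: str):
        b = fnv1a_32(name.encode()) % len(self.buckets)
        for n, ino in self.buckets[b]:
            if n == name:
                return ino
        return None
```

With a reasonable load factor, this turns the per-component path resolution that dominates file object identification into an expected O(1) probe.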

  1. Clifford Algebra Implying Three Fermion Generations Revisited

    International Nuclear Information System (INIS)

    Krolikowski, W.

    2002-01-01

    The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ''centre-of-mass'' and (N-1) × four ''relative'' Dirac-type matrices. These define one ''centre-of-mass'' and N-1 ''relative'' Dirac bispinor indices. Secundo, the ''centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the N-1 ''relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ''relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results unavoidably implies the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used. (author)

  2. Clifford Algebra Implying Three Fermion Generations Revisited

    Science.gov (United States)

    Krolikowski, Wojciech

    2002-09-01

    The author's idea of algebraic compositeness of fundamental particles, allowing one to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √{p²} → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four ``centre-of-mass'' and (N-1) × four ``relative'' Dirac-type matrices. These define one ``centre-of-mass'' and (N-1) ``relative'' Dirac bispinor indices. Secundo, the ``centre-of-mass'' Dirac bispinor index is coupled to the Standard Model gauge fields, while the (N-1) ``relative'' Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle, which requires full antisymmetry with respect to the ``relative'' Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results unavoidably implies the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons the prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used.

  3. Cryptanalysis on a parallel keyed hash function based on chaotic maps

    International Nuclear Information System (INIS)

    Guo Wei; Wang Xiaoming; He Dake; Cao Yang

    2009-01-01

    This Letter analyzes the security of a novel parallel keyed hash function based on chaotic maps, proposed by Xiao et al. to improve efficiency in parallel computing environments. We first show how to devise forgery attacks on Xiao's scheme with differential cryptanalysis and give experimental results for two kinds of forgery attacks. Furthermore, we discuss the problem of weak keys in the scheme and demonstrate how to utilize weak keys to construct collisions.

  4. MULTIMEDIA DATA TRANSMISSION THROUGH TCP/IP USING HASH BASED FEC WITH AUTO-XOR SCHEME

    OpenAIRE

    R. Shalin; D. Kesavaraja

    2012-01-01

    The most preferred mode for communication of multimedia data is through the TCP/IP protocol. But on the other hand, the TCP/IP protocol produces huge, unavoidable packet loss due to network traffic and congestion. In order to provide efficient communication it is necessary to recover the loss of packets. The proposed scheme implements Hash based FEC with auto XOR scheme for this purpose. The scheme is implemented through forward error correction, MD5 and XOR for providing efficient transmission of multimedia data.

  5. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

    A predictive-hashing based Denial-of-Service (DoS) resistant multicast authentication protocol was proposed based upon predictive-hashing, one-way key chain, erasure codes, and distillation codes techniques [4, 5]. It was claimed that this new scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters including CPU times for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented an attack simulator which is platform-independent. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, the resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.

  6. Analysis and Implementation of Cryptographic Hash Functions in Programmable Logic Devices

    Directory of Open Access Journals (Sweden)

    Tautvydas Brukštus

    2016-06-01

    In today's world there is an ever greater focus on data protection, which relies on cryptographic science. The safe storage of passwords is also important, and for this a cryptographic hash function is used. In this article the SHA-256 cryptographic hash function has been selected for implementation and exploration, based on the fact that it is currently popular and considered safe: no theoretical gaps or conflict situations have been found for SHA-256. The SHA-256 cryptographic hash function is also used by cryptographic currencies, which are currently popular and highly valued. For the measurements, programmable logic integrated circuits were chosen, as they are less efficient than ASICs. We chose programmable logic integrated circuits produced by the Altera Corporation. Computation speed is investigated on three programmable logic integrated circuits belonging to the same family but different generations, each made using a different process technology: EP3C16, EP4CE115 and 5CSEMA5F31. To compare calculation performance, the parameters are provided in tables and graphs. The research shows the calculation speed and stability of the different programmable logic circuits.
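Independent of the hardware target, a software reference is useful for verifying such an implementation. Python's hashlib exposes SHA-256 directly, and FIPS 180-4 supplies known-answer vectors such as the one used here:

```python
import hashlib

# FIPS 180-4 known-answer test: SHA-256("abc")
digest = hashlib.sha256(b"abc").hexdigest()
assert digest == ("ba7816bf8f01cfea414140de5dae2223"
                  "b00361a396177a9cb410ff61f20015ad")

# Cryptographic currencies commonly apply SHA-256 twice (e.g. Bitcoin's
# block hashing); the double hash is simply a composition:
double = hashlib.sha256(hashlib.sha256(b"abc").digest()).hexdigest()
```

Running the same vectors through the FPGA design and this reference gives a quick correctness check before measuring throughput.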

  7. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    Directory of Open Access Journals (Sweden)

    Ping Zeng

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string-matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and holds great promise for use in real-world applications.

  8. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data is prone to be attacked by noise or maliciously tampered with. In order to give a speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech, applying wavelet packet decomposition to acquire each speech component. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through feature-matrix quantification using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist the attack of common background noise. The algorithm is also computationally efficient, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.

  9. Entropy-based implied volatility and its information content

    NARCIS (Netherlands)

    X. Xiao (Xiao); C. Zhou (Chen)

    2016-01-01

    This paper investigates the maximum entropy approach to estimating implied volatility. The entropy approach also allows one to measure option-implied skewness and kurtosis nonparametrically, and to construct confidence intervals. Simulations show that the entropy approach outperforms

  10. Money market rates and implied CCAPM rates: some international evidence

    OpenAIRE

    Yamin Ahmad

    2004-01-01

    New Neoclassical Synthesis models equate the instrument of monetary policy to the implied CCAPM rate arising from an Euler equation. This paper identifies monetary policy shocks within six of the G7 countries and examines the movement of money market and implied CCAPM rates. The key result is that an increase in the nominal interest rate leads to a fall in the implied CCAPM rate. Incorporating habit still yields the same result. The findings suggest that the movement of these two rates implie...

  11. Implied Terms: The Foundation in Good Faith and Fair Dealing

    OpenAIRE

    2014-01-01

    With the aim of clarifying English law of implied terms in contracts and explaining their basis in the idea of good faith in performance, it is argued first that two, but no more, types of implied terms can be distinguished (terms implied in fact and terms implied by law), though it is explained why these types are frequently confused. Second, the technique of implication of terms is distinguished in most instances from the task of interpretation of contracts. Third, it is a...

  12. The Forecast Performance of Competing Implied Volatility Measures

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas

    This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implied … volatility definitions, loss functions and forecast evaluation settings.

  13. Accelerating SPARQL queries by exploiting hash-based locality and adaptive partitioning

    KAUST Repository

    Al-Harbi, Razen

    2016-02-08

    State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.
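AdPart's lightweight initial step, hash partitioning of triples on their subjects, can be sketched directly. The hash function and worker layout below are illustrative assumptions; the point is that all triples sharing a subject land on one worker, so star joins on subjects need no communication:

```python
import zlib

def partition_triples(triples, n_workers):
    # Hash-partition (subject, predicate, object) triples by subject.
    # crc32 is used as a cheap deterministic stand-in hash.
    workers = [[] for _ in range(n_workers)]
    for s, p, o in triples:
        w = zlib.crc32(s.encode()) % n_workers
        workers[w].append((s, p, o))
    return workers

triples = [("alice", "knows", "bob"),
           ("alice", "worksAt", "kaust"),
           ("bob", "knows", "carol")]
parts = partition_triples(triples, 4)
```

The startup cost is a single pass over the data, which is why this scheme gives the low startup overhead the abstract claims; locality-aware optimization and adaptive replication then build on it.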

  14. UnoHop: Efficient Distributed Hash Table with O(1) Lookup Performance

    Directory of Open Access Journals (Sweden)

    Herry Sitepu

    2008-05-01

    Distributed Hash Tables (DHTs) with O(1) lookup performance strive to minimize the maintenance traffic required for propagating membership change information (events). This event distribution allows each node in the peer-to-peer network to maintain accurate routing tables with complete membership information. We present UnoHop, a novel DHT protocol with O(1) lookup performance. The protocol uses an efficient mechanism to distribute events through a dissemination tree that is constructed dynamically, rooted at the node that detects the event. Our protocol produces symmetric bandwidth usage at all nodes while decreasing the event propagation delay.

  15. MULTIMEDIA DATA TRANSMISSION THROUGH TCP/IP USING HASH BASED FEC WITH AUTO-XOR SCHEME

    Directory of Open Access Journals (Sweden)

    R. Shalin

    2012-09-01

    The most preferred mode for communication of multimedia data is through the TCP/IP protocol. But on the other hand, the TCP/IP protocol produces huge, unavoidable packet loss due to network traffic and congestion. In order to provide efficient communication it is necessary to recover the loss of packets. The proposed scheme implements Hash based FEC with auto XOR scheme for this purpose. The scheme is implemented through forward error correction, MD5 and XOR for providing efficient transmission of multimedia data. The proposed scheme provides high transmission accuracy and throughput with low latency and loss.
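The recovery idea behind an XOR-based FEC with per-packet hashes can be sketched like this. It is illustrative only: the single parity packet, packet layout and recovery flow are assumptions, not the paper's exact scheme:

```python
import hashlib

def make_parity(packets):
    # XOR all equal-length data packets into one parity packet
    # (single-loss FEC).
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    # Recover exactly one lost packet (marked None) by XOR-ing the
    # parity packet with all surviving packets.
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "XOR parity repairs a single loss only"
    out = parity
    for p in received:
        if p is not None:
            out = bytes(a ^ b for a, b in zip(out, p))
    return out

packets = [b"frame-01", b"frame-02", b"frame-03"]
digests = [hashlib.md5(p).hexdigest() for p in packets]  # per-packet hash
parity = make_parity(packets)
lost = [packets[0], None, packets[2]]                    # packet 1 dropped
repaired = recover(lost, parity)
```

The MD5 digest lets the receiver confirm that the reconstructed packet is byte-identical to the original before handing it to the decoder.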

  16. A hash based mutual RFID tag authentication protocol in telecare medicine information system.

    Science.gov (United States)

    Srivastava, Keerti; Awasthi, Amit K; Kaul, Sonam D; Mittal, R C

    2015-01-01

    Radio Frequency Identification (RFID) is a technology with multidimensional applications that reduce the complexity of everyday life. It has enormous use everywhere: access control, transportation, real-time inventory, asset management, automated payment systems, etc. Recently, this technology has been opening its wings in healthcare environments, where potential applications include patient monitoring, object traceability and drug administration systems. In this paper, we propose a secure RFID-based protocol for the medical sector. This protocol is based on a hash operation with a synchronized secret. The protocol is safe against active and passive attacks such as forgery, traceability, replay and de-synchronization attacks.
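A hash-based mutual authentication with a synchronized secret generally follows a challenge-response pattern in which both sides prove knowledge of the shared secret and then update it in lockstep. The sketch below illustrates that pattern only; the message flow, domain-separation tags and update rule are assumptions, not the paper's protocol:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    # Hash with a simple domain separator between parts.
    return hashlib.sha256(b"|".join(parts)).digest()

class Tag:
    def __init__(self, secret: bytes):
        self.secret = secret

    def respond(self, challenge: bytes) -> bytes:
        return h(self.secret, challenge, b"tag")

    def verify_server(self, challenge: bytes, proof: bytes) -> bool:
        ok = proof == h(self.secret, challenge, b"server")
        if ok:
            self.secret = h(self.secret)   # synchronized secret update
        return ok

class Server:
    def __init__(self, secret: bytes):
        self.secret = secret

    def authenticate(self, tag: Tag) -> bool:
        c = secrets.token_bytes(16)                 # fresh challenge (anti-replay)
        if tag.respond(c) != h(self.secret, c, b"tag"):
            return False
        proof = h(self.secret, c, b"server")        # mutual: server proves itself
        self.secret = h(self.secret)                # update in lockstep with tag
        return tag.verify_server(c, proof)
```

Because both sides advance the secret only after a successful round, a blocked final message (a de-synchronization attempt) leaves recovery to the next round rather than permanently splitting the state; real protocols typically keep the previous secret as well to guarantee this.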

  17. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs

  18. Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application

    Science.gov (United States)

    Farisa Chaerul Haviana, Sam; Taufik, Muhammad

    2017-04-01

    One of the main issues in Content Based Image Retrieval (CBIR) is the choice of similarity measure for the resulting image hashes. The key challenge is to find the most beneficial distance or similarity measure for calculating similarity in terms of speed and computing cost, especially on devices with limited computing capabilities such as mobile phones. In this study we utilize the twelve most common and popular distance or similarity measure techniques, implemented in a mobile phone application, to be compared and studied. The results show that all similarity measures implemented in this study performed comparably in the mobile phone application. This opens up more possibilities for method combinations to be implemented for image retrieval.
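The average image hash itself is simple: downscale to a tiny grayscale image, threshold each pixel at the mean intensity, and compare hashes with a similarity measure such as Hamming distance. A minimal sketch on a hand-written 4×4 "image" (real pipelines first resize and grayscale an actual photo):

```python
def average_hash(pixels):
    # aHash: one bit per pixel, set when the pixel exceeds the mean
    # intensity of the downscaled grayscale image.
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    # The simplest of the similarity measures such a study compares.
    return sum(a != b for a, b in zip(h1, h2))

img_a = [[200, 200, 40, 40],
         [200, 200, 40, 40],
         [40, 40, 200, 200],
         [40, 40, 200, 200]]
img_b = [[row[0] + 5] + row[1:] for row in img_a]   # slight brightening
```

Small intensity changes leave the hash untouched because thresholding is relative to the mean, while structurally different images disagree in many bit positions.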

  19. MiMC: Efficient encryption and cryptographic hashing with minimal multiplicative complexity

    DEFF Research Database (Denmark)

    Albrecht, Martin; Grassi, Lorenzo; Rechberger, Christian

    2016-01-01

    …and cryptographic hash functions is to reconsider and simplify the round function of the Knudsen-Nyberg cipher from 1995. The mapping F(x) := x^3 is used as the main component there and is also the main component of our family of proposals called "MiMC". We study various attack vectors for this construction and give… a new attack vector that outperforms others in relevant settings. Due to its very low number of multiplications, the design lends itself well to a large class of applications, especially when the depth does not matter but the total number of multiplications in the circuit dominates all aspects…
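The MiMC round structure, repeatedly cubing after adding the key and a round constant over a field where x^3 is a permutation, can be shown on a toy scale. The tiny prime and round constants below are illustrative only; real MiMC uses a large prime or binary field and roughly log_3(p) rounds:

```python
# Toy MiMC-style cipher over GF(P) with P ≡ 2 (mod 3), which makes
# F(x) = x^3 a bijection.
P = 101                                      # 101 mod 3 == 2
CONSTANTS = [0, 17, 42, 63, 89, 5, 71, 23]   # round constants, c_0 = 0
INV3 = 67                                    # 3 * 67 == 201 ≡ 1 (mod P - 1)

def mimc_encrypt(x, k):
    for c in CONSTANTS:
        x = pow((x + k + c) % P, 3, P)       # round: x <- (x + k + c_i)^3
    return (x + k) % P                       # final key addition

def mimc_decrypt(y, k):
    y = (y - k) % P
    for c in reversed(CONSTANTS):
        y = (pow(y, INV3, P) - k - c) % P    # invert the cube, undo additions
    return y
```

Each round costs two multiplications (one squaring plus one multiply for the cube), which is the "minimal multiplicative complexity" the title refers to and what makes the design attractive for SNARKs and MPC settings.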

  20. MinHash-Based Fuzzy Keyword Search of Encrypted Data across Multiple Cloud Servers

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2018-05-01

    To enhance the efficiency of data searching, most data owners store their data files in different cloud servers in the form of cipher-text. Thus, efficient search using fuzzy keywords becomes a critical issue in such a cloud computing environment. This paper proposes a method that aims at improving the efficiency of cipher-text retrieval and lowering storage overhead for fuzzy keyword search. In contrast to traditional approaches, the proposed method can reduce the complexity of Min-Hash-based fuzzy keyword search by using Min-Hash fingerprints to avoid the need to construct the fuzzy keyword set. The method utilizes Jaccard similarity to rank the results of retrieval, thus reducing the amount of calculation for similarity and saving a lot of time and space overhead. The method also takes into consideration multiple user queries through re-encryption technology and updates user permissions dynamically. Security analysis demonstrates that the method can provide better privacy preservation, and experimental results show that the proposed method can improve retrieval time and lower storage overhead for cipher-text search.
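The Min-Hash fingerprint and Jaccard ranking at the heart of such a scheme can be sketched independently of the encryption layer. A minimal sketch with an assumed seeded hash family (truncated SHA-256); the keyword sets are made up:

```python
import hashlib

def h(seed, token):
    # One member of a seeded hash family: SHA-256 truncated to 64 bits.
    data = bytes([seed]) + token.encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def minhash_signature(tokens, n_hashes=128):
    # Min-Hash fingerprint: the minimum hash value per seeded function.
    return [min(h(seed, t) for t in tokens) for seed in range(n_hashes)]

def estimated_jaccard(sig1, sig2):
    # Fraction of agreeing slots estimates |A ∩ B| / |A ∪ B|.
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

kw_a = {"fuzzy", "keyword", "search", "over", "encrypted", "data"}
kw_b = {"fuzzy", "keyword", "search", "over", "plaintext", "data"}
sig_a = minhash_signature(kw_a)
sig_b = minhash_signature(kw_b)
est = estimated_jaccard(sig_a, sig_b)   # true Jaccard is 5/7 ≈ 0.714
```

Because the fingerprint has fixed size regardless of how many fuzzy variants a keyword has, comparing fingerprints replaces the expensive construction and comparison of full fuzzy keyword sets.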

  1. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: 1) feature extraction, 2) similarity measure and matching, and 3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D buildings and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on co-linearity equations. The result shows that acceptable accuracy of a single image's EOPs can be achieved by the proposed registration approach as an alternative to the labour-intensive manual registration process.

  2. Fast Structural Alignment of Biomolecules Using a Hash Table, N-Grams and String Descriptors

    Directory of Open Access Journals (Sweden)

    Robert Preissner

    2009-04-01

    This work presents a generalized approach for the fast structural alignment of thousands of macromolecular structures. The method uses string representations of a macromolecular structure and a hash table that stores n-grams of a certain size for searching. To this end, macromolecular structure-to-string translators were implemented for protein and RNA structures. A query against the index is performed in two hierarchical steps to unite speed and precision. In the first step the query structure is translated into n-grams, and all target structures containing these n-grams are retrieved from the hash table. In the second step all corresponding n-grams of the query and each target structure are subsequently aligned, and after each alignment a score is calculated based on the matching n-grams of query and target. The extendable framework enables the user to query and structurally align thousands of protein and RNA structures on a commodity machine and is available as open source from http://lajolla.sf.net.

  3. Introduction to the theory and application of a unified Bohm criterion for arbitrary-ion-temperature collision-free plasmas with finite Debye lengths

    Science.gov (United States)

    Kos, L.; Jelić, N.; Kuhn, S.; Tskhakaya, D. D.

    2018-04-01

    Tonks-Langmuir collision-free, plane-parallel discharge model [Phys. Rev. 34, 876 (1929)], however, with the ion-source temperature extended here from the original (zero) value to arbitrarily high ones. In addition, it turns out that the charge-density derivative (in the potential "space") with respect to the potential exhibits two characteristic points, i.e., potentials, namely the points of inflection and maximum of that derivative (in the potential space), which stay "fixed" at their respective potentials independent of the Debye length as long as it is kept fairly small. Plasma quasi-neutrality appears well satisfied up to the first characteristic point/potential, so we identify that one as the plasma edge (PE). Adopting the convention that the sheath is a region characterized by considerable electrostatic pressure (energy density), we identify the second characteristic point/potential as the sheath edge (SE). Between these points, the charge density increases from zero to a finite value. Thus, the interval between the PE and SE, with the "fixed" width (in the potential "space") of about one third of the electron temperature, will be named the plasma-sheath transition (PST). Outside the PST, the electrostatic-pressure term and its derivatives turn out to be nearly identical with each other, independent of the particular values of the ion temperature and Debye length. In contrast, an increase in Debye lengths from zero to finite values causes the location of the sonic point/potential (lying inside the PST) to shift from the PE (for vanishing Debye length) towards the SE, while at the same time, the absolute value of the corresponding ion-sound velocity slightly decreases. These shifts turn out to be manageable by employing the mathematical concept of the plasma-to-sheath transition (different from, but related to, our natural PST concept), resulting in approximate, but sufficiently reliable semi-analytic expressions, which are functions of the ion temperature and Debye

  4. 76 FR 7817 - Announcing Draft Federal Information Processing Standard 180-4, Secure Hash Standard, and Request...

    Science.gov (United States)

    2011-02-11

    ...-02] Announcing Draft Federal Information Processing Standard 180-4, Secure Hash Standard, and Request... and request for comments. SUMMARY: This notice announces the Draft Federal Information Processing..., Information Technology Laboratory, Attention: Comments on Draft FIPS 180-4, 100 Bureau Drive--Stop 8930...

  5. Implied liquidity : towards stochastic liquidity modeling and liquidity trading

    NARCIS (Netherlands)

    Corcuera, J.M.; Guillaume, F.M.Y.; Madan, D.B.; Schoutens, W.

    2010-01-01

    In this paper we introduce the concept of implied (il)liquidity of vanilla options. Implied liquidity is based on the fundamental theory of conic finance, in which the one-price model is abandoned and replaced by a two-price model giving bid and ask prices for traded assets. The pricing is done by

  6. Asymptotic formulae for implied volatility in the Heston model

    OpenAIRE

    Forde, Martin; Jacquier, Antoine; Mijatovic, Aleksandar

    2009-01-01

    In this paper we prove an approximate formula expressed in terms of elementary functions for the implied volatility in the Heston model. The formula consists of the constant and first order terms in the large maturity expansion of the implied volatility function. The proof is based on saddlepoint methods and classical properties of holomorphic functions.

  7. A Dynamic Linear Hashing Method for Redundancy Management in Train Ethernet Consist Network

    Directory of Open Access Journals (Sweden)

    Xiaobo Nie

    2016-01-01

    Massive transportation systems like trains are considered critical systems because they use the communication network to control essential subsystems on board. A critical system requires zero recovery time when a failure occurs in the communication network. The newly published IEC 62439-3 defines the high-availability seamless redundancy protocol, which fulfills this requirement and ensures no frame loss in the presence of an error. This paper adopts this protocol for the train Ethernet consist network. The challenge is the management of the circulating frames, which must be capable of dealing with real-time processing requirements, fast switching times, high throughput, and deterministic behavior. The main contribution of this paper is its in-depth analysis of the network parameters imposed by the application of the protocol to the train control and monitoring system (TCMS), together with a redundant circulating-frame discarding method based on dynamic linear hashing, chosen as the fastest method for resolving these issues.

  8. Paradeisos: A perfect hashing algorithm for many-body eigenvalue problems

    Science.gov (United States)

    Jia, C. J.; Wang, Y.; Mendl, C. B.; Moritz, B.; Devereaux, T. P.

    2018-03-01

    We describe an essentially perfect hashing algorithm for calculating the position of an element in an ordered list, appropriate for the construction and manipulation of many-body Hamiltonian, sparse matrices. Each element of the list corresponds to an integer value whose binary representation reflects the occupation of single-particle basis states for each element in the many-body Hilbert space. The algorithm replaces conventional methods, such as binary search, for locating the elements of the ordered list, eliminating the need to store the integer representation for each element, without increasing the computational complexity. Combined with the "checkerboard" decomposition of the Hamiltonian matrix for distribution over parallel computing environments, this leads to a substantial savings in aggregate memory. While the algorithm can be applied broadly to many-body, correlated problems, we demonstrate its utility in reducing total memory consumption for a series of fermionic single-band Hubbard model calculations on small clusters with progressively larger Hilbert space dimension.
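The combinatorial idea such a perfect hash can build on is rankable occupation bitstrings: for a fixed particle number, the combinatorial number system maps each state directly to its position in the numerically ordered basis, with no stored list and no binary search. A sketch of that idea, not the paper's exact formulation:

```python
from itertools import combinations
from math import comb

def state_rank(occupied):
    # Map the sorted occupied-orbital indices of a fixed-particle-number
    # basis state to its position among all states with that particle
    # count, ordered by the integer value of the occupation bitstring
    # (the combinatorial number system).
    return sum(comb(p, i + 1) for i, p in enumerate(occupied))

# All 2-particle states on 4 orbitals, ordered by their bitstring value.
states = sorted(combinations(range(4), 2),
                key=lambda s: sum(1 << p for p in s))
ranks = [state_rank(s) for s in states]
```

Evaluating the rank costs one binomial coefficient per particle, so locating a matrix element needs O(k) arithmetic instead of an O(log D) search over the D-dimensional Hilbert space, and the integer representation of each state never has to be stored.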

  9. A Reusable Software Copy Protection Using Hash Result and Asymetrical Encryption

    Directory of Open Access Journals (Sweden)

    Aswin Wibisurya

    2014-12-01

Full Text Available Desktop applications are among the most popular types of application in use, owing to their one-time installation and quick accessibility from the moment the computer is turned on. Limiting the copying and usage of desktop applications has long been an important issue for application providers. For security reasons, software copy protection is usually integrated with the application. However, developers seek to reuse the copy protection component across software. This paper proposes an approach to reusable software copy protection consisting of a certificate validator on the client computer and a certificate generator on the server. The certificate validator's integrity is protected using a hash result, while all communications are encrypted using asymmetrical encryption to ensure the security of this approach.

  10. A hybrid cloud read aligner based on MinHash and kmer voting that preserves privacy

    Science.gov (United States)

    Popic, Victoria; Batzoglou, Serafim

    2017-05-01

    Low-cost clouds can alleviate the compute and storage burden of the genome sequencing data explosion. However, moving personal genome data analysis to the cloud can raise serious privacy concerns. Here, we devise a method named Balaur, a privacy preserving read mapper for hybrid clouds based on locality sensitive hashing and kmer voting. Balaur can securely outsource a substantial fraction of the computation to the public cloud, while being highly competitive in accuracy and speed with non-private state-of-the-art read aligners on short read data. We also show that the method is significantly faster than the state of the art in long read mapping. Therefore, Balaur can enable institutions handling massive genomic data sets to shift part of their analysis to the cloud without sacrificing accuracy or exposing sensitive information to an untrusted third party.
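The locality sensitive hashing ingredient can be illustrated with a plain MinHash sketch over k-mers; the salted blake2b hash, k=4, and 64 hash functions are illustrative choices, not Balaur's actual parameters or its privacy-preserving voting protocol.

```python
import hashlib

def kmers(seq, k=4):
    """All length-k substrings (k-mers) of a read, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_sketch(kmer_set, num_hashes=64):
    """For each salted hash function, keep the minimum hash value over
    the k-mer set; matching sketch positions estimate Jaccard similarity."""
    sketch = []
    for salt in range(num_hashes):
        sketch.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{salt}:{km}".encode(),
                                digest_size=8).digest(), "big")
            for km in kmer_set))
    return sketch

def sketch_similarity(a, b):
    """Fraction of matching positions: an estimate of Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Two reads sharing most of their k-mers get a high similarity estimate without the reads themselves ever being compared directly, which is what allows part of the comparison work to be outsourced.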

  11. Multifractal analysis of implied volatility in index options

    Science.gov (United States)

    Oh, GabJin

    2014-06-01

In this paper, we analyze the statistical and non-linear properties of the log-variations in implied volatility for the CAC40, DAX and S&P500 daily index options. The price of an index option is generally represented by its implied volatility surface, including its smile and skew properties. We utilize a Lévy process model as the underlying asset to deepen our understanding of the intrinsic property of the implied volatility in index options and estimate the implied volatility surface. We find that options pricing models with the exponential Lévy model can reproduce the smile or sneer features of the implied volatility that are observed in real options markets. We study the variation in the implied volatility for at-the-money index call and put options, and we find that the distribution function follows a power-law distribution with an exponent of 3.5 ≤ γ ≤ 4.5. In particular, the variation in the implied volatility exhibits multifractal spectral characteristics, and the global financial crisis has influenced the complexity of the option markets.

  12. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  13. Electrodermal responses to implied versus actual violence on television.

    Science.gov (United States)

    Kalamas, A D; Gruber, M L

    1998-01-01

    The electrodermal response (EDR) of children watching a violent show was measured. Particular attention was paid to the type of violence (actual or implied) that prompted an EDR. In addition, the impact of the auditory component (sounds associated with violence) of the show was evaluated. Implied violent stimuli, such as the villain's face, elicited the strongest EDR. The elements that elicited the weakest responses were the actual violent stimuli, such as stabbing. The background noise and voices of the sound track enhanced the total number of EDRs. The results suggest that implied violence may elicit more fear (as measured by EDRs) than actual violence does and that sounds alone contribute significantly to the emotional response to television violence. One should not, therefore, categorically assume that a show with mostly actual violence evokes less fear than one with mostly implied violence.

  14. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.

  15. Predicting Agency Rating Migrations with Spread Implied Ratings

    OpenAIRE

    Jianming Kou; Dr Simone Varotto

    2005-01-01

    Investors traditionally rely on credit ratings to price debt instruments. However, rating agencies are known to be prudent in their approach to rating revisions, which results in delayed ratings adjustments to mutating credit conditions. For a large set of eurobonds we derive credit spread implied ratings and compare them with the ratings issued by rating agencies. Our results indicate that spread implied ratings often anticipate future movement of agency ratings and hence could help track cr...

  16. Uniform Bounds for Black--Scholes Implied Volatility

    OpenAIRE

    Tehranchi, Michael Rummine

    2016-01-01

In this note, Black--Scholes implied volatility is expressed in terms of various optimization problems. From these representations, upper and lower bounds are derived which hold uniformly across moneyness and call price. Various symmetries of the Black--Scholes formula are exploited to derive new bounds from old. These bounds are used to reprove asymptotic formulas for implied volatility at extreme strikes and/or maturities. © the Society for Industrial and Applied Mathematics. doi:10.1137/14095248X

  17. Uniform bounds for Black--Scholes implied volatility

    OpenAIRE

    Tehranchi, Michael R.

    2015-01-01

    In this note, Black--Scholes implied volatility is expressed in terms of various optimisation problems. From these representations, upper and lower bounds are derived which hold uniformly across moneyness and call price. Various symmetries of the Black--Scholes formula are exploited to derive new bounds from old. These bounds are used to reprove asymptotic formulae for implied volatility at extreme strikes and/or maturities.
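Because the Black--Scholes call price is strictly increasing in volatility, the implied volatility being bounded in these notes is well defined and can be recovered numerically. The bisection sketch below makes the quantity concrete; the bracket [1e-6, 5] and tolerance are illustrative choices.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black--Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: the call price is strictly increasing in sigma,
    so the root of bs_call(..., sigma) - price is unique."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding the price generated at sigma = 0.2 back through `implied_vol` recovers 0.2 to within the tolerance.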

  18. Long memory and the relation between implied and realized volatility

    OpenAIRE

    Federico Bandi; Benoit Perron

    2003-01-01

    We argue that the conventional predictive regression between implied volatility (regressor) and realized volatility over the remaining life of the option (regressand) is likely to be a fractional cointegrating relation. Since cointegration is associated with long-run comovements, this finding modifies the usual interpretation of such regression as a study towards assessing option market efficiency (given a certain option pricing model) and/or short-term unbiasedness of implied volatility as a...

  19. Long memory persistence in the factor of Implied volatility dynamics

    OpenAIRE

    Härdle, Wolfgang Karl; Mungo, Julius

    2007-01-01

    The volatility implied by observed market prices as a function of the strike and time to maturity form an Implied Volatility Surface (IV S). Practical applications require reducing the dimension and characterize its dynamics through a small number of factors. Such dimension reduction is summarized by a Dynamic Semiparametric Factor Model (DSFM) that characterizes the IV S itself and their movements across time by a multivariate time series of factor loadings. This paper focuses on investigati...

  20. A Hash Based Remote User Authentication and Authenticated Key Agreement Scheme for the Integrated EPR Information System.

    Science.gov (United States)

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng

    2015-11-01

    To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. Thereafter we propose a secure and efficient authentication scheme for the integrated EPR information system based on lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
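The lightweight hash-and-XOR style of construction such schemes rely on can be sketched as follows. The registration/login flow, names, and token format below are illustrative assumptions, not the actual protocol of the paper.

```python
import hashlib, os

def h(*parts):
    """One-way hash over the concatenation of byte-string parts."""
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: the server masks its long-term secret with the user's
# password-derived digest and stores only the masked value on the card.
server_secret = os.urandom(32)
pw_digest = h(b"alice", b"pa55w0rd")
card_value = xor(server_secret, pw_digest)   # stored on the smart card

# Login: the card recovers the secret only if the right password is
# entered, then proves knowledge of it with a hashed, nonce-bound token.
nonce = os.urandom(16)
recovered = xor(card_value, h(b"alice", b"pa55w0rd"))
token = h(recovered, nonce)

# Server-side check against its own copy of the secret.
assert token == h(server_secret, nonce)
```

Only hash evaluations and XORs are involved, which is why this family of schemes is attractive for resource-constrained clients; the security analysis in the paper addresses the attacks such a bare sketch does not.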

  1. US Implied Volatility as A predictor of International Returns

    Directory of Open Access Journals (Sweden)

    Mehmet F. Dicle

    2017-12-01

Full Text Available This study provides evidence of the US implied volatility’s effect on international equity markets’ returns. This evidence has two main implications: (i) investors may find that foreign equity returns adjusting to US implied volatility may not provide true diversification benefits, and (ii) foreign equity returns may be predicted using US implied volatility. Our sample includes the US volatility index (VIX) and major equity indexes in twenty countries for the period between January 2000 and July 2017. VIX leads eighteen of the international markets and Granger-causes seventeen of the markets after controlling for the S&P-500 index returns and the 2007/2008 US financial crisis. US investors looking to diversify US risk may find that international equities may not provide the intended diversification benefits. Our evidence provides support for predictability of international equity returns based on US volatility.

  2. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.

  3. Tachyons imply the existence of a privileged frame

    Energy Technology Data Exchange (ETDEWEB)

    Sjoedin, T.; Heylighen, F.

    1985-12-16

    It is shown that the existence of faster-than-light signals (tachyons) would imply the existence (and detectability) of a privileged inertial frame and that one can avoid all problems with reversed-time order only by using absolute synchronization instead of the standard one. The connection between these results and the EPR-paradox is discussed.

  4. Comprehending Implied Meaning in English as a Foreign Language

    Science.gov (United States)

    Taguchi, Naoko

    2005-01-01

    This study investigated whether second language (L2) proficiency affects pragmatic comprehension, namely the ability to comprehend implied meaning in spoken dialogues, in terms of accuracy and speed of comprehension. Participants included 46 native English speakers at a U.S. university and 160 Japanese students of English in a college in Japan who…

  5. An Access Control Protocol for Wireless Sensor Network Using Double Trapdoor Chameleon Hash Function

    Directory of Open Access Journals (Sweden)

    Tejeshwari Thakur

    2016-01-01

Full Text Available A wireless sensor network (WSN), a type of communication system, is normally deployed into an unattended environment where the intended user can get access to the network. The sensor nodes collect data from this environment. If the data are valuable and confidential, then security measures are needed to protect them from unauthorized access. This situation requires an access control protocol (ACP) in the design of the sensor network, because sensor nodes are vulnerable to various malicious attacks during the authentication and key establishment and the new-node-addition phases. In this paper, we propose a secured ACP for such a WSN. This protocol is based on the Elliptic Curve Discrete Log Problem (ECDLP) and a double trapdoor chameleon hash function, which secures the WSN from malicious attacks such as node masquerading attack, replay attack, man-in-the-middle attack, and forgery attacks. The proposed ACP has a special feature known as session key security. Also, the proposed ACP is more efficient, as it requires only one modular multiplication during the initialization phase.
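A chameleon hash is collision-resistant for everyone except the holder of a trapdoor, who can compute collisions at will. The toy sketch below uses the classic Krawczyk–Rabin construction over a tiny multiplicative subgroup, with parameters chosen for readability; the paper's scheme instead works over elliptic curves and carries a *double* trapdoor, which this sketch does not reproduce.

```python
# Toy Krawczyk–Rabin chameleon hash over a tiny prime-order subgroup.
p, q, g = 23, 11, 2          # p = 2q + 1; g generates the order-q subgroup
x = 3                        # trapdoor (secret key)
y = pow(g, x, p)             # public key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * y^r mod p."""
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m, r, m_new):
    """With the trapdoor x, find r' so that CH(m_new, r') == CH(m, r).
    Follows from m + x*r = m_new + x*r' (mod q)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q
```

Without `x`, finding such an `r'` requires solving a discrete logarithm; with it, a collision is one modular inversion and multiplication, which is what the protocol exploits during authentication. (`pow(x, -1, q)` requires Python 3.8+.)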

  6. User characteristics and effect profile of Butane Hash Oil: An extremely high-potency cannabis concentrate.

    Science.gov (United States)

    Chan, Gary C K; Hall, Wayne; Freeman, Tom P; Ferris, Jason; Kelly, Adrian B; Winstock, Adam

    2017-09-01

    Recent reports suggest an increase in use of extremely potent cannabis concentrates such as Butane Hash Oil (BHO) in some developed countries. The aims of this study were to examine the characteristics of BHO users and the effect profiles of BHO. Anonymous online survey in over 20 countries in 2014 and 2015. Participants aged 18 years or older were recruited through onward promotion and online social networks. The overall sample size was 181,870. In this sample, 46% (N=83,867) reported using some form of cannabis in the past year, and 3% reported BHO use (n=5922). Participants reported their use of 7 types of cannabis in the past 12 months, the source of their cannabis, reasons for use, use of other illegal substances, and lifetime diagnosis for depression, anxiety and psychosis. Participants were asked to rate subjective effects of BHO and high potency herbal cannabis. Participants who reported a lifetime diagnosis of depression (OR=1.15, p=0.003), anxiety (OR=1.72, pcannabis. BHO users also reported stronger negative effects and less positive effects when using BHO than high potency herbal cannabis (pcannabis. Copyright © 2017. Published by Elsevier B.V.

  7. Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy

    Science.gov (United States)

    Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi

To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are some proposals to manage item-level product traceability information with DHTs. In such an application, each entry of a huge number of item-level IDs needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows our approach fits well on a DHT with a low rate of failure and a huge number of data entries.
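The key placement property that DHT recovery schemes build on can be seen in a minimal consistent-hash ring (a common DHT building block; the node names and 64-bit truncated SHA-1 below are illustrative, not D3C itself): removing a node relocates only the keys that node owned, so only those entries need to be reconstructed or copied.

```python
import hashlib
from bisect import bisect_right

def h64(s):
    """Map a string to a 64-bit point on the ring."""
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:8], "big")

class Ring:
    """Minimal consistent-hash ring: each key is owned by the first node
    clockwise from the key's hash point (wrapping around at the end)."""
    def __init__(self, nodes):
        self.points = sorted((h64(n), n) for n in nodes)

    def owner(self, key):
        hashes = [h for h, _ in self.points]
        i = bisect_right(hashes, h64(key)) % len(self.points)
        return self.points[i][1]

    def remove(self, node):
        # Only keys in the removed node's arc change owner.
        self.points = [p for p in self.points if p[1] != node]
```

After a node is removed, every key owned by a surviving node keeps its owner, which bounds the amount of data that must be re-copied after a failure.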

  8. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.

  9. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    Science.gov (United States)

    Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme supports efficiently the password change phase always locally without contacting the remote server and correctly. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme. PMID:24892078

  10. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
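The two ingredients named above can each be sketched in a few lines: bit-sampling LSH (a standard LSH family for Hamming distance) buckets patterns that agree on a fixed subset of positions, and run-length encoding compresses constant runs. The pattern encoding and sampled positions are illustrative; this is not the LSHSIM implementation.

```python
def rle(bits):
    """Run-length encode a 0/1 sequence as [value, run_length] pairs."""
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def lsh_key(bits, sample_positions):
    """Bit-sampling LSH for Hamming distance: patterns that agree on the
    sampled positions fall into the same hash bucket."""
    return tuple(bits[i] for i in sample_positions)

def hamming(a, b):
    """Exact Hamming distance, used to verify candidates within a bucket."""
    return sum(x != y for x, y in zip(a, b))
```

Patterns at small Hamming distance collide in the same bucket with high probability, so the exact (RLE-accelerated, in the paper) similarity computation only needs to run on bucket mates rather than on every training-image pattern.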

  11. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Das

    2014-01-01

Full Text Available In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently supports the password change phase locally and correctly, without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme.

  12. A robust and effective smart-card-based remote user authentication mechanism using hash function.

    Science.gov (United States)

    Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and also for mutual authentication purpose a login user validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because our scheme uses only efficient one-way hash function and bitwise XOR operations. Through the rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme supports efficiently the password change phase always locally without contacting the remote server and correctly. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided by our scheme.

  13. An enhanced biometric authentication scheme for telecare medicine information systems with nonce using chaotic hash function.

    Science.gov (United States)

    Das, Ashok Kumar; Goswami, Adrijit

    2014-06-01

Recently, Awasthi and Srivastava proposed a novel biometric remote user authentication scheme for the telecare medicine information system (TMIS) with nonce. Their scheme is very efficient as it is based on an efficient chaotic one-way hash function and bitwise XOR operations. In this paper, we first analyze Awasthi-Srivastava's scheme and then show that their scheme has several drawbacks: (1) incorrect password change phase, (2) fails to preserve user anonymity property, (3) fails to establish a secret session key between a legal user and the server, (4) fails to protect against strong replay attack, and (5) lacks rigorous formal security analysis. We then propose a novel and secure biometric-based remote user authentication scheme in order to withstand the security flaws found in Awasthi-Srivastava's scheme and enhance the features required for an ideal user authentication scheme. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. In addition, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that our scheme is secure against passive and active attacks, including the replay and man-in-the-middle attacks. Our scheme is also efficient compared to Awasthi-Srivastava's scheme.

  14. Estimating implied rates of discount in healthcare decision-making.

    Science.gov (United States)

    West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J

    2003-01-01

    To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. 
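The implied rate elicited in such studies can be computed from an indifference pair: if a respondent is indifferent between receiving one amount today and a larger amount in t years, the implied annual discount rate solves amount_later = amount_now(1 + r)^t. A minimal sketch (the function name and inputs are illustrative):

```python
def implied_discount_rate(amount_now, amount_later, years):
    """Annual rate r making a respondent indifferent between receiving
    amount_now today and amount_later in `years`, i.e. solving
        amount_later = amount_now * (1 + r) ** years
    for r."""
    return (amount_later / amount_now) ** (1.0 / years) - 1.0
```

For example, indifference between 100 today and 121 in two years implies a 10% annual rate; comparing rates computed this way across financial versus health questions, and private versus public framings, is how divergent implied rates are detected.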
There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for

  15. Intuitive understanding of nonlocality as implied by quantum theory

    International Nuclear Information System (INIS)

    Bohm, D.G.; Hiley, B.J.

    1975-01-01

The fact is brought out that the essential new quality implied by the quantum theory is nonlocality; i.e., that a system cannot be analyzed into parts whose basic properties do not depend on the state of the whole system. This is done in terms of the causal interpretation of the quantum theory, proposed by one of us (D.B.) in 1952, involving the introduction of the "quantum potential." It is shown that this approach implies a new universal type of description, in which the standard or canonical form is always supersystem-system-subsystem; and this leads to the radically new notion of unbroken wholeness of the entire universe. Finally, some of the implications of extending these notions to the relativity domain are indicated, and in so doing, a novel concept of time, in terms of which relativity and quantum theory may eventually be brought together, is suggested

  16. Implied Movement in Static Images Reveals Biological Timing Processing

    Directory of Open Access Journals (Sweden)

    Francisco Carlos Nather

    2015-08-01

Full Text Available Visual perception is adapted toward a better understanding of our own movements than those of non-conspecifics. The present study determined whether time perception is affected by pictures of different species by considering the evolutionary scale. Static (“S”) and implied movement (“M”) images of a dog, cheetah, chimpanzee, and man were presented to undergraduate students. S and M images of the same species were presented in random order or one after the other (S-M or M-S) for two groups of participants. Movement, Velocity, and Arousal semantic scales were used to characterize some properties of the images. Implied movement affected time perception, in which M images were overestimated. The results are discussed in terms of visual motion perception related to biological timing processing that could be established early in terms of the adaptation of humankind to the environment.

  17. Language comprehenders retain implied shape and orientation of objects.

    Science.gov (United States)

    Pecher, Diane; van Dantzig, Saskia; Zwaan, Rolf A; Zeelenberg, René

    2009-06-01

    According to theories of embodied cognition, language comprehenders simulate sensorimotor experiences to represent the meaning of what they read. Previous studies have shown that picture recognition is better if the object in the picture matches the orientation or shape implied by a preceding sentence. In order to test whether strategic imagery may explain previous findings, language comprehenders first read a list of sentences in which objects were mentioned. Only once the complete list had been read was recognition memory tested with pictures. Recognition performance was better if the orientation or shape of the object matched that implied by the sentence, both immediately after reading the complete list of sentences and after a 45-min delay. These results suggest that previously found match effects were not due to strategic imagery and show that details of sensorimotor simulations are retained over longer periods.

  18. Implied Materiality and Material Disclosures of Credit Ratings

    OpenAIRE

    Eccles, Robert G; Youmans, Timothy John

    2015-01-01

    This first of three papers in our series on materiality in credit ratings will examine the materiality of credit ratings from an “implied materiality” and governance disclosure perspective. In the second paper, we will explore the materiality of environmental, social, and governance (ESG) factors in credit ratings’ methodologies and introduce the concept of “layered materiality.” In the third paper, we will evaluate current and potential credit rating agency (CRA) business models based on our...

  19. On return rate implied by behavioural present value

    OpenAIRE

    Piasecki, Krzysztof

    2013-01-01

    The future value of a security is described as a random variable. The distribution of this random variable is the formal image of risk uncertainty. On the other side, any present value is defined as a value equivalent to the given future value. This equivalence relationship is subjective. It follows that the present value is described as a fuzzy number, which depends on the investor's susceptibility to behavioural factors. All the above reasons imply that the return rate is given as a fuzzy probabili...

  20. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schr\\"odinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  1. TEMPORAL INSENSITIVITY OF PVWTP AND IMPLIED DISCOUNT RATES IN CVM

    OpenAIRE

    Kim, Sooil; Haab, Timothy C.

    2003-01-01

    The sensitivity of WTP is tested in terms of present value, and the implied discount rates are derived by varying the length of the benefit and the temporal payment schedules. Results show that, holding the length of the project constant, the present value of willingness to pay does not vary significantly across payment schemes (one-time payment, versus life of the project, versus perpetuity). Heteroskedasticity of the error term over payment schemes fails to be accepted. Holding the payment scheme...

  2. Associations between butane hash oil use and cannabis-related problems.

    Science.gov (United States)

    Meier, Madeline H

    2017-10-01

    High-potency cannabis concentrates are increasingly popular in the United States, and there is concern that use of high-potency cannabis might increase risk for cannabis-related problems. However, little is known about the potential negative consequences of concentrate use. This study reports on associations between past-year use of a high-potency cannabis concentrate, known as butane hash oil (BHO), and cannabis-related problems. A sample of 821 college students were recruited to complete a survey about their health and behavior. Participants who had used cannabis in the past year (33%, n=273) completed questions about their cannabis use, including their use of BHO and cannabis-related problems in eight domains: physical dependence, impaired control, academic-occupational problems, social-interpersonal problems, self-care problems, self-perception, risk behavior, and blackouts. Approximately 44% (n=121) of past-year cannabis users had used BHO in the past year. More frequent BHO use was associated with higher levels of physical dependence (RR=1.8), cannabis-related academic/occupational problems (RR=1.5, p=0.004), poor self-care (RR=1.3, p=0.002), and cannabis-related risk behavior (RR=1.2, p=0.001). After accounting for sociodemographic factors, age of onset of cannabis use, sensation seeking, overall frequency of cannabis use, and frequency of other substance use, BHO use was still associated with higher levels of physical dependence (RR=1.2, p=0.014). BHO use is associated with greater physiological dependence on cannabis, even after accounting for potential confounders. Longitudinal research is needed to determine if cannabis users with higher levels of physiological dependence seek out BHO and/or if BHO use increases risk for physiological dependence. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Interpreting the implied meridional oceanic energy transport in AMIP

    International Nuclear Information System (INIS)

    Randall, D.A.; Gleckler, P.J.

    1993-09-01

    The Atmospheric Model Intercomparison Project (AMIP) was outlined in Paper No. CLIM VAR 2.3 (entitled ‘The validation of ocean surface heat fluxes in AMIP’) of these proceedings. Preliminary results of AMIP subproject No. 5 were also summarized. In particular, zonally averaged ocean surface heat fluxes resulting from various AMIP simulations were intercompared, and to the extent possible they were validated against uncertainties in observationally based estimates of surface heat fluxes. The intercomparison is continued in this paper by examining the Oceanic Meridional Energy Transport (OMET) implied by the net surface heat fluxes of the AMIP simulations. As with the surface heat fluxes, the perspective here will be very cursory. The annual mean implied ocean heat transport can be estimated by integrating the zonally averaged net ocean surface heat flux, N_sfc, from one pole to the other. In AGCM simulations (and perhaps reality), the global mean N_sfc is typically not in exact balance when averaged over one or more years. Because of this, an important assumption must be made about changes in the distribution of energy in the oceans. Otherwise, the integration will yield a non-zero transport at the endpoint of integration (pole), which is not physically realistic. Here the authors only look at 10-year means of the AMIP runs, and for simplicity they assume that any long-term imbalance in the globally averaged N_sfc is sequestered (or released) over the global ocean. Tests have demonstrated that the treatment of how the global average energy imbalance is assumed to be distributed is important, especially when the long-term imbalances are in excess of 10 W m^-2. However, this has not had a substantial impact on the qualitative features of the implied heat transport of the AMIP simulations examined thus far.
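
    The pole-to-pole integration described above can be sketched in a few lines. The latitude grid, flux values, and the choice to spread the long-term imbalance uniformly over the whole sphere (rather than the ocean only, as the authors assume) are illustrative simplifications, not the subproject's actual code:

```python
import math

def implied_ocean_transport(lats_deg, nsfc, radius=6.371e6):
    """Integrate zonal-mean net surface heat flux N_sfc (W/m^2) from the
    south pole northward to get implied meridional energy transport (W).
    The global-mean imbalance is removed uniformly first, so the
    transport returns to ~0 at the far pole. Assumes a uniform latitude
    grid; illustrative helper, not the AMIP subproject's code."""
    dlat = math.radians(lats_deg[1] - lats_deg[0])
    # area of each latitude band: 2*pi*R^2 * cos(lat) * dlat
    areas = [2 * math.pi * radius**2 * math.cos(math.radians(p)) * dlat
             for p in lats_deg]
    total_area = sum(areas)
    # remove the long-term global-mean imbalance (assumed sequestered uniformly)
    mean_flux = sum(f * a for f, a in zip(nsfc, areas)) / total_area
    transport, running = [], 0.0
    for f, a in zip(nsfc, areas):
        running += (f - mean_flux) * a   # net energy absorbed south of this band
        transport.append(running)
    return transport
```

    With a constant flux the anomaly vanishes everywhere, and any profile integrates back to (numerically) zero at the northern endpoint, which is the physical-realism check described in the abstract.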

  4. Implied reading direction and prioritization of letter encoding.

    Science.gov (United States)

    Holcombe, Alex O; Nguyen, Elizabeth H L; Goodbourn, Patrick T

    2017-10-01

    Capacity limits hinder processing of multiple stimuli, contributing to poorer performance for identifying two briefly presented letters than for identifying a single letter. Higher accuracy is typically found for identifying the letter on the left, which has been attributed to a right-hemisphere dominance for selective attention. Here, we use rapid serial visual presentation (RSVP) of letters in two locations at once. The letters to be identified are simultaneous and cued by rings. In the first experiment, we manipulated implied reading direction by rotating or mirror-reversing the letters to face to the left rather than to the right. The left-side performance advantage was eliminated. In the second experiment, letters were positioned above and below fixation, oriented such that they appeared to face downward (90° clockwise rotation) or upward (90° counterclockwise rotation). Again consistent with an effect of implied reading direction, performance was better for the top position in the downward condition, but not in the upward condition. In both experiments, mixture modeling of participants' report errors revealed that attentional sampling from the two locations was approximately simultaneous, ruling out the theory that the letter on one side was processed first, followed by a shift of attention to sample the other letter. Thus, the orientation of the letters apparently controls not when the letters are sampled from the scene, but rather the dynamics of a subsequent process, such as tokenization or memory consolidation. Implied reading direction appears to determine the letter prioritized at a high-level processing bottleneck. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Noninvadability implies noncoexistence for a class of cancellative systems

    Czech Academy of Sciences Publication Activity Database

    Swart, Jan M.

    2013-01-01

    Roč. 18, č. 38 (2013), s. 1-12 ISSN 1083-589X R&D Projects: GA ČR GAP201/10/0752 Institutional support: RVO:67985556 Keywords : cancellative system * interface tightness * duality * coexistence * Neuhauser-Pacala model * affine voter model * rebellious voter model * balancing selection * branching * annihilation * parity preservation Subject RIV: BA - General Mathematics Impact factor: 0.627, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/swart-noninvadability implies noncoexistence for a class of cancellative systems.pdf

  6. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
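
    The likelihood-ratio decision axis the analysis rests on can be sketched numerically; the equal-variance Gaussian pair and the parameter values below are illustrative, not the paper's fitted models:

```python
import math

def gauss_pdf(x, mu, sigma=1.0):
    """Gaussian density, the strength distribution assumed here."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood_ratio(x, mu_old=1.0, mu_new=0.0):
    """Decision axis: log of P(strength x | old) / P(strength x | new).
    Illustrative equal-variance parameters (mu_old, mu_new are assumptions)."""
    return math.log(gauss_pdf(x, mu_old) / gauss_pdf(x, mu_new))

def respond_old(x, criterion=0.0):
    """Respond 'old' when the evidence favors the 'old' distribution."""
    return log_likelihood_ratio(x) > criterion
```

    For this symmetric pair the log likelihood ratio crosses zero exactly halfway between the two means, so the unbiased criterion sits at the distributions' intersection.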

  7. Faster than light motion does not imply time travel

    International Nuclear Information System (INIS)

    Andréka, Hajnal; Madarász, Judit X; Németi, István; Székely, Gergely; Stannett, Mike

    2014-01-01

    Seeing the many examples in the literature of causality violations based on faster than light (FTL) signals one naturally thinks that FTL motion leads inevitably to the possibility of time travel. We show that this logical inference is invalid by demonstrating a model, based on (3+1)-dimensional Minkowski spacetime, in which FTL motion is permitted (in every direction without any limitation on speed) yet which does not admit time travel. Moreover, the Principle of Relativity is true in this model in the sense that all observers are equivalent. In short, FTL motion does not imply time travel after all. (paper)

  8. Implied Reading in the Unforgettable Stories of Language Learners

    Directory of Open Access Journals (Sweden)

    Feryal ÇUBUKÇU

    2017-09-01

    Full Text Available Iser was a literary theoretician, co-founder of the Constance School of Reception Aesthetics, and professor emeritus of English and Comparative Literature at the University of Constance and the University of California, Irvine. When Iser died in 2007, in his eighty-first year, he was one of the most widely known literary theoreticians in the world. His “implied reading” theory claims that texts can themselves awaken false expectations, alternately bringing about surprise, joy and frustration, which can be the enlargement of experience. The indeterminacy of the text might yield different responses from different readers. To show that each implied reading is based on the schemata of the readers, this study analyses the stories told by learners of Turkish who come from 20 countries and whose ages vary between 18 and 32. The participants are 65 undergraduate and graduate university students from African, Asian and Balkan countries who, upon watching “Cinderella”, were asked to write about an unforgettable folk story or fairy tale. When their stories are item-analysed, the results show that the learners' schemata shape the way they choose and recount the stories. Learners of Turkish fill in the gaps throughout the story, form a meaningful bond by pulling information from it, participating in a reciprocal relationship, creating and deriving meaning in an extravaganza of interpretation.

  9. Vertebrate Fossils Imply Paleo-elevations of the Tibetan Plateau

    Science.gov (United States)

    Deng, T.; Wang, X.; Li, Q.; Wu, F.; Wang, S.; Hou, S.

    2017-12-01

    The uplift of the Tibetan Plateau remains unclear, and its paleo-elevation reconstructions are crucial to interpret the geodynamic evolution and to understand the climatic changes in Asia. Uplift histories of the Tibetan Plateau based on different proxies differ considerably, and two opposing viewpoints exist on the paleo-elevation estimations of the Tibetan Plateau. One viewpoint is that the Tibetan Plateau did not strongly uplift to reach its modern elevation until the Late Miocene, while the other, mainly based on stable isotopes, argues that the Tibetan Plateau formed early during the Indo-Asian collision and reached its modern elevation in the Paleogene or by the Middle Miocene. In 1839, Hugh Falconer first reported some rhinocerotid fossils collected from the Zanda Basin in Tibet, China and indicated that the Himalayas have uplifted by more than 2,000 m over the last several million years. In recent years, the vertebrate fossils discovered from the Tibetan Plateau and its surrounding areas have implied a high plateau since the late Early Miocene. During the Oligocene, giant rhinos lived in northwestern China to the north of the Tibetan Plateau, while they were also distributed in the Indo-Pakistan subcontinent to the south of this plateau, which indicates that the elevation of the Tibetan Plateau was not too high to prevent exchanges of large mammals; giant rhinos, the rhinocerotid Aprotodon, and chalicotheres still dispersed north and south of the Tibetan Plateau. A tropical-subtropical lowland fish fauna was also present in the central part of this plateau during the Late Oligocene, in which Eoanabas thibetana was inferred to be closely related to extant climbing perches from South Asia and Sub-Saharan Africa. In contrast, during the Middle Miocene, the shovel-tusked elephant Platybelodon was found at many localities north of the Tibetan Plateau, while its trace was absent in the Siwaliks of the subcontinent, which implies that the Tibetan Plateau had

  10. Time-to-contact estimation modulated by implied friction.

    Science.gov (United States)

    Yamada, Yuki; Sasaki, Kyoshiro; Miura, Kayo

    2014-01-01

    The present study demonstrated that friction cues for target motion affect time-to-contact (TTC) estimation. A circular target moved in a linear path with a constant velocity and was gradually occluded by a static rectangle. The target moved with forward and backward spins or without spin. Observers were asked to respond at the time when the moving target appeared to pass the occluder. The results showed that TTC was significantly longer in the backward spin condition than in the forward and without-spin conditions. Moreover, similar results were obtained when a sound was used to imply friction. Our findings indicate that the observer's experiential knowledge of motion coupled with friction intuitively modulated their TTC estimation.

  11. Short-Term Market Risks Implied by Weekly Options

    DEFF Research Database (Denmark)

    Andersen, Torben Gustav; Fusari, Nicola; Todorov, Viktor

    We study short-term market risks implied by weekly S&P 500 index options. The introduction of weekly options has dramatically shifted the maturity profile of traded options over the last five years, with a substantial proportion now having expiry within one week. Such short-dated options provide a direct way to study volatility and jump risks. Unlike longer-dated options, they are largely insensitive to the risk of intertemporal shifts in the economic environment. Adopting a novel semi-nonparametric approach, we uncover variation in the negative jump tail risk which is not spanned by market volatility. These risks are not signaled by the level of market volatility and elude standard asset pricing models.

  12. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  13. Limits on rare B decays B → μ+μ-K± and B → μ+μ-K*

    International Nuclear Information System (INIS)

    Anway-Wiese, C.

    1995-01-01

    We report on a search for flavor-changing neutral current decays of B mesons into μ+μ-K* and μ+μ-K± using data obtained in the Collider Detector at Fermilab (CDF) 1992-1993 data-taking run. To reduce the amount of background in our data we use precise tracking information from the CDF silicon vertex detector to pinpoint the location of the decay vertex of the B candidate, and accept only events which have a large decay time. We compare these data to a B meson signal obtained in a similar fashion, but where the muon pairs originate from ψ decays, and calculate the relative branching ratios. In the absence of any indication of flavor-changing neutral current decays we set upper limits of BR(B → μμK±) < 3.5×10^-5 and BR(B → μμK*) < 5.1×10^-5 at 90% confidence level, which are consistent with Standard Model expectations but leave little room for non-standard physics. Copyright © 1995 American Institute of Physics.

  14. Privacy-Preserving and Scalable Service Recommendation Based on SimHash in a Distributed Cloud Environment

    Directory of Open Access Journals (Sweden)

    Yanwei Xu

    2017-01-01

    Full Text Available With the increasing volume of web services in the cloud environment, Collaborative Filtering (CF) based service recommendation has become one of the most effective techniques to alleviate the heavy burden on the service selection decisions of a target user. However, the service recommendation bases, that is, historical service usage data, are often distributed in different cloud platforms. Two challenges are present in such a cross-cloud service recommendation scenario. First, a cloud platform is often not willing to share its data with other cloud platforms due to privacy concerns, which decreases the feasibility of cross-cloud service recommendation severely. Second, the historical service usage data recorded in each cloud platform may update over time, which reduces the recommendation scalability significantly. In view of these two challenges, a novel privacy-preserving and scalable service recommendation approach based on SimHash, named SerRecSimHash, is proposed in this paper. Finally, through a set of experiments deployed on a real distributed service quality dataset, WS-DREAM, we validate the feasibility of our proposal in terms of recommendation accuracy and efficiency while guaranteeing privacy-preservation.
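
    For readers unfamiliar with the underlying primitive, a minimal SimHash sketch (plain SimHash, not the paper's SerRecSimHash variant) shows why platforms can compare hashed usage profiles without exchanging raw records: similar feature sets produce fingerprints with small Hamming distance.

```python
import hashlib

def simhash(features, bits=64):
    """Plain SimHash: each feature votes +1/-1 on every bit of its own
    hash; the sign of the per-bit tally becomes the fingerprint bit.
    Feature strings here are assumed, e.g. 'service:quality' tokens."""
    v = [0] * bits
    for feat in features:
        h = int.from_bytes(hashlib.md5(feat.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

    Two platforms can each publish only the 64-bit fingerprints of their usage profiles; a small `hamming` distance then serves as a similarity signal without revealing the underlying records.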

  15. Secure Hashing of Dynamic Hand Signatures Using Wavelet-Fourier Compression with BioPhasor Mixing and Discretization

    Directory of Open Access Journals (Sweden)

    Wai Kuan Yip

    2007-01-01

    Full Text Available We introduce a novel method for secure computation of a biometric hash on dynamic hand signatures using BioPhasor mixing and discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific discretization acts both as an error-correction step and as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and the discrete Fourier transform (DFT). Without the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand-signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method produces stable and distinguishable bit strings, with low equal error rates (EERs) for random and skilled forgeries in the stolen-token (worst-case) scenario, and for both forgeries in the genuine-token (optimal) scenario.

  16. Scalable Content Authentication in H.264/SVC Videos Using Perceptual Hashing based on Dempster-Shafer theory

    Directory of Open Access Journals (Sweden)

    Ye Dengpan

    2012-09-01

    Full Text Available The content authenticity of multimedia delivery is an important issue with the rapid development and wide use of multimedia technology. Many authentication solutions have been proposed, such as cryptography- and watermarking-based methods. However, in the latest heterogeneous networks video streams are transmitted in scalable codings such as H.264/SVC, for which there is still no good authentication solution. In this paper, we first summarize related works and then propose a scalable content authentication scheme using a ratio of different energy (RDE) based perceptual hashing in the Q/S dimension, which uses Dempster-Shafer theory and is combined with the latest scalable video coding (H.264/SVC) construction. The idea of “sign once and verify in a scalable way” can be realized. Compared with previous methods, the proposed scheme based on perceptual hashing outperforms previous works in uncertainty (robustness) and efficiency for H.264/SVC video streams. Finally, the experimental results verify the performance of our scheme.

  17. Refined repetitive sequence searches utilizing a fast hash function and cross species information retrievals

    Directory of Open Access Journals (Sweden)

    Reneker Jeff

    2005-05-01

    Full Text Available Abstract Background Searching for small tandem/disperse repetitive DNA sequences streamlines many biomedical research processes. For instance, whole genomic array analysis in yeast has revealed 22 PHO-regulated genes. The promoter regions of all but one of them contain at least one of the two core Pho4p binding sites, CACGTG and CACGTT. In humans, microsatellites play a role in a number of rare neurodegenerative diseases such as spinocerebellar ataxia type 1 (SCA1. SCA1 is a hereditary neurodegenerative disease caused by an expanded CAG repeat in the coding sequence of the gene. In bacterial pathogens, microsatellites are proposed to regulate expression of some virulence factors. For example, bacteria commonly generate intra-strain diversity through phase variation which is strongly associated with virulence determinants. A recent analysis of the complete sequences of the Helicobacter pylori strains 26695 and J99 has identified 46 putative phase-variable genes among the two genomes through their association with homopolymeric tracts and dinucleotide repeats. Life scientists are increasingly interested in studying the function of small sequences of DNA. However, current search algorithms often generate thousands of matches – most of which are irrelevant to the researcher. Results We present our hash function as well as our search algorithm to locate small sequences of DNA within multiple genomes. Our system applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. We discuss our incorporation of the Gene Ontology (GO database into these algorithms. We conduct an exhaustive time analysis of our system for various repetitive sequence lengths. For instance, a search for eight bases of sequence within 3.224 GBases on 49 different chromosomes takes 1.147 seconds on average. To illustrate the relevance of the search results, we conduct a search with and without added annotation terms for the
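
    One common way to build such fast hash keys for short DNA words (illustrative only; not necessarily the authors' exact function) is to pack each base into 2 bits and rehash incrementally while sliding along the genome, so every k-mer lookup costs constant time:

```python
_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def dna_hash(seq):
    """Pack a short DNA string into an integer, 2 bits per base.
    For a fixed length k this mapping is collision-free."""
    h = 0
    for base in seq.upper():
        h = (h << 2) | _CODE[base]
    return h

def find_repeat(genome, motif):
    """Slide over `genome`, updating the k-mer hash incrementally,
    and report 0-based positions whose hash equals the motif's hash."""
    k = len(motif)
    target, mask = dna_hash(motif), (1 << (2 * k)) - 1
    hits, h = [], 0
    for i, base in enumerate(genome.upper()):
        h = ((h << 2) | _CODE[base]) & mask   # drop the oldest base, add the new
        if i >= k - 1 and h == target:
            hits.append(i - k + 1)
    return hits
```

    For example, searching for the Pho4p core binding site CACGTG mentioned above reduces to `find_repeat(promoter_sequence, "CACGTG")`, and tandem repeats such as CAG tracts fall out of the same scan.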

  18. Stringent Mitigation Policy Implied By Temperature Impacts on Economic Growth

    Science.gov (United States)

    Moore, F.; Turner, D.

    2014-12-01

    Integrated assessment models (IAMs) compare the costs of greenhouse gas mitigation with damages from climate change in order to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained GDP growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth-rates in the Dynamic Integrated Climate and Economy (DICE) model via two pathways, total factor productivity (TFP) growth and capital depreciation. Even under optimistic adaptation assumptions, this damage specification implies that optimal climate policy involves the elimination of emissions in the near future, the stabilization of global temperature change below 2°C, and a social cost of carbon (SCC) an order of magnitude larger than previous estimates. A sensitivity analysis shows that the magnitude of growth effects, the rate of adaptation, and the dynamic interaction between damages from warming and GDP are three critical uncertainties and an important focus for future research.
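
    The key distinction here, damages applied to the growth rate versus damages applied to the level of output, can be illustrated with a toy compounding calculation (this is not the DICE model, and the 2% baseline growth and 1% damage figures are arbitrary):

```python
def gdp_path(years, base_growth=0.02, damage=0.01, on_growth=True):
    """Toy comparison of damage channels. The same 1% damage compounds
    when it hits the growth rate (as with TFP-growth damages) but not
    when it merely scales the level of output each period."""
    gdp, path = 1.0, []
    for _ in range(years):
        if on_growth:
            gdp *= 1 + base_growth - damage   # damage reduces growth itself
            path.append(gdp)
        else:
            gdp *= 1 + base_growth            # economy grows undamaged...
            path.append(gdp * (1 - damage))   # ...damage only shaves the level
    return path
```

    Over a century the growth-rate channel leaves output far below the level-damage channel, which is why this damage specification pushes the model toward much more stringent mitigation and a far higher social cost of carbon.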

  19. Collective memory in primate conflict implied by temporal scaling collapse.

    Science.gov (United States)

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  20. Analysis of Federal Subsidies: Implied Price of Carbon

    Energy Technology Data Exchange (ETDEWEB)

    D. Craig Cooper; Thomas Foulke

    2010-10-01

    For informed climate change policy, it is important for decision makers to be able to assess how the costs and benefits of federal energy subsidies are distributed and to be able to have some measure to compare them. One way to do this is to evaluate the implied price of carbon (IPC) for a federal subsidy, or set of subsidies; where the IPC is the cost of the subsidy to the U.S. Treasury divided by the emissions reductions it generated. Subsidies with lower IPC are more cost effective at reducing greenhouse gas emissions, while subsidies with a negative IPC act to increase emissions. While simple in concept, the IPC is difficult to calculate in practice. Calculation of the IPC requires knowledge of (i) the amount of energy associated with the subsidy, (ii) the amount and type of energy that would have been produced in the absence of the subsidy, and (iii) the greenhouse gas emissions associated with both the subsidized energy and the potential replacement energy. These pieces of information are not consistently available for federal subsidies, and there is considerable uncertainty in cases where the information is available. Thus, exact values for the IPC based upon fully consistent standards cannot be calculated with available data. However, it is possible to estimate a range of potential values sufficient for initial comparisons. This study has employed a range of methods to generate “first order” estimates for the IPC of a range of federal subsidies using static methods that do not account for the dynamics of supply and demand. The study demonstrates that, while the IPC value depends upon how the inquiry is framed and the IPC cannot be calculated in a “one size fits all” manner, IPC calculations can provide a valuable perspective for climate policy analysis. IPC values are most useful when calculated within the perspective of a case study, with the method and parameters of the calculation determined by the case. The IPC of different policy measures can
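
    The IPC definition reduces to a one-line calculation once the three inputs the study lists are fixed: subsidized energy, the energy that would have been produced otherwise, and the emissions intensity of each. The wind-subsidy numbers below are hypothetical, and as the study stresses, the replacement mix and its emissions are the most uncertain inputs:

```python
def avoided_emissions(mwh, replaced_tco2_per_mwh, subsidized_tco2_per_mwh):
    """Emissions avoided = energy produced * (intensity of the displaced
    generation - intensity of the subsidized generation), in tCO2."""
    return mwh * (replaced_tco2_per_mwh - subsidized_tco2_per_mwh)

def implied_price_of_carbon(subsidy_cost_usd, mwh, replaced, subsidized):
    """IPC ($/tCO2) = Treasury cost of the subsidy / emissions avoided.
    A negative result means the subsidy increased emissions."""
    return subsidy_cost_usd / avoided_emissions(mwh, replaced, subsidized)

# Hypothetical example: $23/MWh credit on 1,000,000 MWh of wind that
# displaces gas generation at 0.5 tCO2/MWh.
ipc = implied_price_of_carbon(23_000_000, 1_000_000, 0.5, 0.0)
```

    With these made-up inputs the IPC works out to $46/tCO2; a static calculation like this ignores the supply-and-demand dynamics the study also sets aside.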

  1. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    Directory of Open Access Journals (Sweden)

    Graham Cormode

    Full Text Available Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large-scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space.
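
    A minimal random-hyperplane LSH sketch, one standard family for cosine similarity, shows the core idea: nearby vectors agree on most signature bits, so candidate pairs can be found by bucketing signatures instead of comparing all pairs. This is only the basic primitive; the four distributed variants and multi-probe optimizations evaluated in the paper are not reproduced here.

```python
import random

def hyperplane_signature(vec, planes):
    """LSH signature: the sign of `vec` against each random hyperplane.
    Vectors at a small angle agree on most bits."""
    return tuple(1 if sum(v * p for v, p in zip(vec, plane)) >= 0 else 0
                 for plane in planes)

# 16 random hyperplanes in 4 dimensions (fixed seed for reproducibility).
random.seed(7)
planes = [[random.gauss(0, 1) for _ in range(4)] for _ in range(16)]
```

    Because the signature depends only on signs, positively scaling a vector never changes it, while negating a vector flips every bit, which is exactly the angular sensitivity the scheme relies on.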

  2. RANCANG BANGUN APLIKASI ANTIVIRUS KOMPUTER DENGAN MENGGUNAKAN METODE SECURE HASH ALGORITHM 1 (SHA1 DAN HEURISTIC STRING

    Directory of Open Access Journals (Sweden)

    I Gusti Made Panji Indrawinatha

    2016-12-01

    Full Text Available A computer virus is malicious software that can damage data and replicate itself on a computer system. Antivirus applications are built to detect and remove viruses from computer systems. An antivirus application typically uses several methods to detect different kinds of viruses. This study discusses the design of an antivirus application that uses Secure Hash Algorithm 1 (SHA1) and heuristic string matching as its virus detection methods. Testing showed that without heuristics the antivirus detected only 12 of 34 virus sample files, a detection accuracy of 35%, whereas with heuristics it detected 31 of 34 sample files, a detection accuracy of 91%.

  3. Meet-in-the-Middle Preimage Attacks on Hash Modes of Generalized Feistel and Misty Schemes with SP Round Function

    Science.gov (United States)

    Moon, Dukjae; Hong, Deukjo; Kwon, Daesung; Hong, Seokhie

    We assume that the domain extender is the Merkle-Damgård (MD) scheme and that the message is padded with a ‘1’ and the minimum number of ‘0’s, followed by fixed-size length information, so that the length of the padded message is a multiple of the block length. Under this assumption, we analyze the security of the hash modes when the compression function follows the Davies-Meyer (DM) scheme and the underlying block cipher is one of the plain Feistel or Misty schemes, or the generalized Feistel or Misty schemes with a Substitution-Permutation (SP) round function. We base this work on Meet-in-the-Middle (MitM) preimage attack techniques, and develop several useful initial structures.

  4. NHash: Randomized N-Gram Hashing for Distributed Generation of Validatable Unique Study Identifiers in Multicenter Research.

    Science.gov (United States)

    Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D; Cui, Licong

    2015-11-10

    A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (validatability): inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. This paper introduces a novel method called "Randomized N-gram Hashing (NHash)" for generating unique study identifiers in a distributed and validatable fashion in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, combined with additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant), in the sense that inadvertent edit errors will mostly result in invalid identifiers. NHash consists of two phases. First, an intermediate string is generated using randomized N-gram hashing. This string consists of a collection of N-gram hashes f1, f2, ..., fk. The input for each function fi has three components: a random number r, an integer n, and input data m. The result, fi(r, n, m), is an n-gram of m with a starting position s, computed as (r mod |m|), where |m| is the length of m. The output of Phase 1 is the concatenation of the sequence f1(r1, n1, m1), f2(r2, n2, m2), ..., fk(rk, nk, mk). In the second phase, the intermediate string generated in Phase 1 is encrypted using techniques such as a shift cipher. The result
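
    The two phases described above can be sketched as follows; the wrap-around for n-grams that run past the end of m, the alphabet, and the shift value are assumptions, since the abstract does not specify them.

    ```python
    def ngram_hash(r, n, m):
        # f(r, n, m): the n-gram of m starting at s = r mod |m|,
        # wrapping past the end (an assumption; tail handling isn't shown in the abstract).
        s = r % len(m)
        return (m + m)[s:s + n]

    # Assumed alphabet for the shift cipher (the paper does not specify one).
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

    def shift_cipher(text, shift):
        # Phase 2: a simple shift (Caesar) cipher over the assumed alphabet.
        return "".join(ALPHABET[(ALPHABET.index(c) + shift) % len(ALPHABET)]
                       for c in text)

    def nhash(components, shift=7):
        # Phase 1: concatenate f1(r1, n1, m1), ..., fk(rk, nk, mk);
        # Phase 2: encrypt the intermediate string.
        intermediate = "".join(ngram_hash(r, n, m) for r, n, m in components)
        return shift_cipher(intermediate, shift)
    ```

    For example, `nhash([(5, 3, "PATIENT01"), (2, 2, "SITEBOSTON")])` first builds the intermediate string "NT0" + "TE" and then shifts it, with the shift cipher supplying the validatability: a single edited character almost always falls outside the small set of valid identifiers.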

  5. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming

    2012-06-28

    With rapid advances in the development of DNA sequencing technologies, a plethora of high-throughput genome and proteome data from a diverse spectrum of organisms have been generated. The functional annotation and evolutionary history of proteins are usually inferred from domains predicted from the genome sequences. Traditional database-based domain prediction methods cannot identify novel domains, however, and alignment-based methods, which look for recurring segments in the proteome, are computationally demanding. Here, we propose a novel genome-wide domain prediction method, SECOM. Instead of conducting all-against-all sequence alignment, SECOM first indexes all the proteins in the genome by using a hash seed function. Local similarity can thus be detected and encoded into a graph structure, in which each node represents a protein sequence and each edge weight represents the shared hash seeds between the two nodes. SECOM then formulates the domain prediction problem as an overlapping community-finding problem in this graph. A backward graph percolation algorithm that efficiently identifies the domains is proposed. We tested SECOM on five recently sequenced genomes of aquatic animals. Our tests demonstrated that SECOM was able to identify most of the known domains identified by InterProScan. When compared with the alignment-based method, SECOM showed higher sensitivity in detecting putative novel domains, while it was also three orders of magnitude faster. For example, SECOM was able to predict a novel sponge-specific domain in nucleoside-triphosphatases (NTPases). Furthermore, SECOM discovered two novel domains, likely of bacterial origin, that are taxonomically restricted to sea anemone and hydra. SECOM is an open-source program and available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2012 Fan et al.
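
    The indexing step can be illustrated with a toy sketch: hash every k-mer "seed" of each sequence and weight graph edges by the number of shared seeds. Python's built-in hash stands in for SECOM's seed function, and the community-finding step is omitted; all names are illustrative.

    ```python
    from itertools import combinations

    def seed_set(seq, k=3):
        # Hash each k-mer "seed" of the sequence.
        return {hash(seq[i:i + k]) for i in range(len(seq) - k + 1)}

    def build_similarity_graph(proteins, k=3):
        # Nodes are sequence ids; an edge's weight is the number of shared seeds,
        # so locally similar regions show up without any pairwise alignment.
        seeds = {name: seed_set(seq, k) for name, seq in proteins.items()}
        graph = {}
        for a, b in combinations(proteins, 2):
            shared = len(seeds[a] & seeds[b])
            if shared:
                graph[(a, b)] = shared
        return graph
    ```

    Domains would then correspond to overlapping communities of nodes whose edges share many seeds, which SECOM extracts with its backward graph percolation algorithm.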

  6. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming; Wong, Ka-Chun; Ryu, Tae Woo; Ravasi, Timothy; Gao, Xin

    2012-01-01


  7. Login-Free Microblog Collection Technology with a Web Crawler Based on Regex Web-Page Denoising and Hash Comparison

    Institute of Scientific and Technical Information of China (English)

    陈宇; 孟凡龙; 刘培玉; 朱振方

    2015-01-01

    To address the lack of accurate denoising methods in current microblog collection and the inability to collect microblog data without logging in, we propose a web-crawler collection scheme based on Regex web-page denoising and Hash comparison, and implement login-free collection by means of a browser plug-in. The method builds DFA and NFA models from regular expressions to remove web-page noise, uses Hash comparison to determine which pages to collect, and achieves login-free access through plug-in privilege elevation. This avoids the divergence between changes in a page's Hash value and changes in its content, and solves the identity-authentication problem caused by repeated URL collection during virtual login. Experiments show that the method can retrieve microblog information quickly and in real time, providing accurate bulk data for public-opinion analysis.

  8. Development of a cellulose-based insulating composite material for green buildings: Case of treated organic waste (paper, cardboard, hash)

    Science.gov (United States)

    Ouargui, Ahmed; Belouaggadia, Naoual; Elbouari, Abdeslam; Ezzine, Mohammed

    2018-05-01

    Buildings are responsible for 36% of final energy consumption in Morocco [1-2], and reducing the energy consumption of buildings is a priority for the kingdom in order to reach its energy saving goals. One of the most effective actions to reduce energy consumption is the selection and development of innovative and efficient building materials [3]. In this work, we present an experimental study of the effect of adding treated organic waste (paper, cardboard, hash) on the mechanical and thermal properties of cement and clay bricks. Thermal conductivity, specific heat and mechanical resistance were investigated as functions of additive content and size. Soaking time and drying temperature were also taken into account. The results reveal that thermal conductivity decreases for both the paper-cement and the paper-clay mixtures and seems to stabilize around 40%. For the paper-cement composite, we find that, for an additive quantity exceeding 15%, the compressive strength exceeds the standard for hollow non-load-bearing masonry. The paper-clay mixture, however, seems to give more interesting results for compressive strength at a mass composition of 15% paper. Given the positive results achieved, it seems possible to use these composites for the construction of walls, ceilings and roofs of housing while minimizing the energy consumption of the building.

  9. GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments

    Science.gov (United States)

    Chen, Zhanlong; Wu, Xin-cai; Wu, Liang

    2008-12-01

    Computation grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperative use of spatial data and grid services are the key problems to resolve in the development of grid GIS. The spatial index mechanism is a key technology of grid GIS and spatial databases, and its performance affects the holistic performance of GIS in grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid-slot hash parallel spatial index structure established within the parallel spatial indexing mechanism. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index, making full use of the good qualities of both the R-tree and the hash data structure. We have thus constructed a new parallel spatial index that can meet the needs of parallel grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots and maps these slots to sites in the distributed parallel system. Each site builds an R-tree over the spatial objects in its slots. On the basis of this tree structure, the index data is distributed among multiple nodes in the grid network using the large-node R-tree method. Load imbalance during processing can be quickly corrected by means of a dynamic adjustment algorithm. This tree structure has considered the
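
    The slot-to-site mapping can be sketched as below; the cell size, site count, and the use of Python's built-in hash are illustrative assumptions, and the per-site R-tree construction is omitted.

    ```python
    def slot_of(x, y, cell=100.0):
        # Dynamic spatial slot: a regular grid cell id for a point.
        return (int(x // cell), int(y // cell))

    def site_of(slot, n_sites):
        # Hash-table step: map a slot to one of the parallel sites.
        return hash(slot) % n_sites

    def distribute(points, n_sites=4, cell=100.0):
        # Group points by the site owning their spatial slot; each site
        # would then build a local R-tree over the points it receives.
        sites = {}
        for p in points:
            sites.setdefault(site_of(slot_of(*p, cell), n_sites), []).append(p)
        return sites
    ```

    Because nearby points fall in the same slot, they land on the same site, which keeps each local R-tree spatially coherent while the hash spreads the slots across the grid.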

  10. Implied and realized volatility in the cross-section of equity options

    DEFF Research Database (Denmark)

    Ammann, Manuel; Skovmand, David; Verhofen, Michael

    2009-01-01

    Using a complete sample of US equity options, we analyze patterns of implied volatility in the cross-section of equity options with respect to stock characteristics. We find that high-beta stocks, small stocks, stocks with a low-market-to-book ratio, and non-momentum stocks trade at higher implied...

  11. 76 FR 55904 - Michael J. Donahue; Notice of Termination of Exemption By Implied Surrender and Soliciting...

    Science.gov (United States)

    2011-09-09

    .... Donahue; Notice of Termination of Exemption By Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 6649-008. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  12. 76 FR 58264 - Michael J. Donahue; Notice of Termination of Exemption by Implied Surrender and Soliciting...

    Science.gov (United States)

    2011-09-20

    .... Donahue; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 6649-008. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  13. 77 FR 2057 - Aquamac Corporation; Notice of Termination of License by Implied Surrender and Soliciting...

    Science.gov (United States)

    2012-01-13

    ... Corporation; Notice of Termination of License by Implied Surrender and Soliciting Comments and Motions To.... Type of Proceeding: Termination of License by Implied Surrender. b. Project No.: 2927-006. c. Date... authorized, the licensee is in violation of the terms and conditions of the license. l. This notice is...

  14. 77 FR 73653 - Milburnie Hydro Inc.; Notice of Termination of Exemption by Implied Surrender and Soliciting...

    Science.gov (United States)

    2012-12-11

    ... Inc.; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 7910-006. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  15. 76 FR 55903 - Missisquoi River Technologies; Notice of Termination of Exemption By Implied Surrender and...

    Science.gov (United States)

    2011-09-09

    ... Technologies; Notice of Termination of Exemption By Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 10172-038..., among other things, that the Commission reserves the right to revoke an exemption if any term or...

  16. 76 FR 7840 - American Hydro Power Company; Notice of Termination of Exemption by Implied Surrender and...

    Science.gov (United States)

    2011-02-11

    ... Power Company; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests... the Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No... if any term or condition of the exemption is violated. The project has not operated since 2004, and...

  17. Advising Students or Practicing Law: The Formation of Implied Attorney-Client Relationships with Students

    Science.gov (United States)

    Sheridan, Patricia M.

    2014-01-01

    An attorney-client relationship is traditionally created when both parties formally enter into an express agreement regarding the terms of representation and the payment of fees. There are certain circumstances, however, where the attorney-client relationship can be implied from the parties' conduct. An implied attorney-client relationship may…

  18. Fractional Black–Scholes option pricing, volatility calibration and implied Hurst exponents in South African context

    Directory of Open Access Journals (Sweden)

    Emlyn Flint

    2017-03-01

    Full Text Available Background: Contingent claims on underlying assets are typically priced under a framework that assumes, inter alia, that the log returns of the underlying asset are normally distributed. However, many researchers have shown that this assumption is violated in practice. Such violations include the statistical properties of heavy tails, volatility clustering, leptokurtosis and long memory. This paper considers the pricing of contingent claims when the underlying is assumed to display long memory, an issue that has heretofore not received much attention. Aim: We address several theoretical and practical issues in option pricing and implied volatility calibration in a fractional Black–Scholes market. We introduce a novel eight-parameter fractional Black–Scholes-inspired (FBSI) model for the implied volatility surface, and consider in depth the issue of calibration. One of the main benefits of such a model is that it allows one to decompose implied volatility into an independent long-memory component – captured by an implied Hurst exponent – and a conditional implied volatility component. Such a decomposition has useful applications in the areas of derivatives trading, risk management, delta hedging and dynamic asset allocation. Setting: The proposed FBSI volatility model is calibrated to South African equity index options data as well as South African Rand/American Dollar currency options data. However, given the focus on the theoretical development of the model, the results in this paper are applicable across all financial markets. Methods: The FBSI model essentially combines a deterministic functional form of the 1-year implied volatility skew with a separate deterministic function for the implied Hurst exponent, thus allowing one both to model observed implied volatility surfaces and to decompose them into independent volatility and long-memory components. Calibration of the model makes use of a quasi-explicit weighted
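
    For reference, implied volatility in the ordinary (non-fractional) Black–Scholes model is recovered by inverting the pricing formula for the observed option price. The bisection sketch below illustrates that inversion only; it is not the paper's FBSI calibration, and all names are illustrative.

    ```python
    from math import erf, exp, log, sqrt

    def norm_cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        # Black-Scholes price of a European call.
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
        # The call price is increasing in sigma, so bisection converges:
        # halve the bracket [lo, hi] until it is narrower than tol.
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if bs_call(S, K, T, r, mid) < price:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)
    ```

    Repeating this inversion across strikes and maturities produces the implied volatility surface that models such as FBSI then parameterize and decompose.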

  19. The implied volatility of U.S. interest rates: evidence from callable U. S. Treasuries

    OpenAIRE

    Robert R. Bliss; Ehud I. Ronn

    1995-01-01

    The prices for callable U.S. Treasury securities provide the sole source of evidence concerning the implied volatility of interest rates over the extended 1926-1994 period. This paper uses the prices of callable as well as non-callable Treasury instruments to estimate implied interest rate volatilities for the past sixty years, and, for the more recent 1989-1994 period, the cross-sectional term structures of implied interest rate volatility. We utilize these estimates to perform cross-section...

  20. Analysing Discursive Practices in Legal Research : How a Single Remark Implies a Paradigm

    NARCIS (Netherlands)

    van den Hoven, P.J.

    2017-01-01

    Different linguistic theories of meaning (semantic theories) imply different methods to discuss meaning. Discussing meaning is what legal practitioners frequently do to decide legal issues and, subsequently, legal scholars analyse in their studies these discursive practices of parties, judges and

  1. Market-implied risk-neutral probabilities, actual probabilities, credit risk and news

    Directory of Open Access Journals (Sweden)

    Shashidhar Murthy

    2011-09-01

    Full Text Available Motivated by the credit crisis, this paper investigates links between risk-neutral probabilities of default implied by markets (e.g., from yield spreads) and their actual counterparts (e.g., from ratings). It discusses differences between the two and clarifies the underlying economic intuition using simple representations of credit risk pricing. Observed large differences across bonds in the ratio of the two probabilities are shown to imply that apparently safer securities can be more sensitive to news.

  2. Can we replace CAPM and the Three-Factor model with Implied Cost of Capital?

    OpenAIRE

    Löthman, Robert; Pettersson, Eric

    2014-01-01

    Researchers criticize predominant expected return models for being imprecise and based on fundamentally flawed assumptions. This dissertation evaluates the abilities of the Implied Cost of Capital, the CAPM, and the Three-Factor model to estimate returns. We study each model's expected return association with realized returns and test for abnormal returns. Our sample covers the period 2000 to 2012 and includes 2916 US firms. We find that Implied Cost of Capital has a stronger association with realized returns th...

  3. LEFT-WING ASYMPTOTICS OF THE IMPLIED VOLATILITY IN THE PRESENCE OF ATOMS

    OpenAIRE

    ARCHIL GULISASHVILI

    2015-01-01

    The paper considers the asymptotic behavior of the implied volatility in stochastic asset price models with atoms. In such models, the asset price distribution has a singular component at zero. Examples of models with atoms include the constant elasticity of variance (CEV) model, jump-to-default models, and stochastic models described by processes stopped at the first hitting time of zero. For models with atoms, the behavior of the implied volatility at large strikes is similar to that in mod...

  4. Relationship of the change in implied volatility with the underlying equity index return in Thailand

    OpenAIRE

    Thakolsri, Supachock; Sethapramote, Yuthana; Jiranyakul, Komain

    2016-01-01

    In this study, we examine the relationship between the change in the implied volatility index and the underlying stock index return in the Thai stock market. The data are daily observations from November 2010 to December 2013. The regression analysis is performed on stationary series. The empirical results reveal a significantly negative and asymmetric relationship between the underlying stock index return and the change in implied volatility. The finding in this study gi...

  5. Modeling and Forecasting the Implied Volatility of the WIG20 Index

    OpenAIRE

    Buszkowska-Khemissi, Eliza; Płuciennik, Piotr

    2007-01-01

    Implied volatility is one of the most important notions in financial markets: it reflects the volatility forecast by market participants. In this paper we calculate the daily implied volatility from options on the WIG20 index. We first test the long-memory property of the resulting time series, and then model and forecast it as an ARFIMA process.
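
    A common first check for long memory of the kind this paper performs is a rescaled-range (R/S) estimate of the Hurst exponent. The sketch below is an illustrative estimator under assumed window sizes, not the authors' procedure, and the ARFIMA fitting step is omitted.

    ```python
    import math
    from statistics import mean, pstdev

    def rescaled_range(chunk):
        # R/S statistic of one window: range of the cumulative deviations
        # from the mean, rescaled by the window's standard deviation.
        m = mean(chunk)
        z, cum = 0.0, []
        for x in chunk:
            z += x - m
            cum.append(z)
        s = pstdev(chunk)
        return (max(cum) - min(cum)) / s if s else 0.0

    def hurst_rs(series, windows=(8, 16, 32, 64)):
        # Slope of log(mean R/S) against log(window size) estimates H;
        # H near 0.5 suggests no memory, H near 1 strong persistence.
        xs, ys = [], []
        for n in windows:
            chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
            xs.append(math.log(n))
            ys.append(math.log(mean(rescaled_range(c) for c in chunks)))
        # Least-squares slope of ys on xs.
        mx, my = mean(xs), mean(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    ```

    An estimate of H significantly above 0.5 for the implied volatility series is what motivates modeling it with a long-memory ARFIMA process rather than a short-memory ARMA.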

  6. Bayesian Forecasting of Options Prices: A Natural Framework for Pooling Historical and Implied Volatiltiy Information

    OpenAIRE

    Darsinos, T.; Satchell, S.E.

    2001-01-01

    Bayesian statistical methods are naturally oriented towards pooling in a rigorous way information from separate sources. It has been suggested that both historical and implied volatilities convey information about future volatility. However, typically in the literature implied and return volatility series are fed separately into models to provide rival forecasts of volatility or options prices. We develop a formal Bayesian framework where we can merge the backward looking information as r...

  7. Motor mapping of implied actions during perception of emotional body language.

    Science.gov (United States)

    Borgomaneri, Sara; Gazzola, Valeria; Avenanti, Alessio

    2012-04-01

    Perceiving and understanding emotional cues is critical for survival. Using the International Affective Picture System (IAPS) previous TMS studies have found that watching humans in emotional pictures increases motor excitability relative to seeing landscapes or household objects, suggesting that emotional cues may prime the body for action. Here we tested whether motor facilitation to emotional pictures may reflect the simulation of the human motor behavior implied in the pictures occurring independently of its emotional valence. Motor-evoked potentials (MEPs) to single-pulse TMS of the left motor cortex were recorded from hand muscles during observation and categorization of emotional and neutral pictures. In experiment 1 participants watched neutral, positive and negative IAPS stimuli, while in experiment 2, they watched pictures depicting human emotional (joyful, fearful), neutral body movements and neutral static postures. Experiment 1 confirms the increase in excitability for emotional IAPS stimuli found in previous research and shows, however, that more implied motion is perceived in emotional relative to neutral scenes. Experiment 2 shows that motor excitability and implied motion scores for emotional and neutral body actions were comparable and greater than for static body postures. In keeping with embodied simulation theories, motor response to emotional pictures may reflect the simulation of the action implied in the emotional scenes. Action simulation may occur independently of whether the observed implied action carries emotional or neutral meanings. Our study suggests the need of controlling implied motion when exploring motor response to emotional pictures of humans. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. A chimeric fusion of the hASH1 and EZH2 promoters mediates high and specific reporter and suicide gene expression and cytotoxicity in small cell lung cancer cells

    DEFF Research Database (Denmark)

    Poulsen, T.T.; Pedersen, N.; Juel, H.

    2008-01-01

    Transcriptionally targeted gene therapy is a promising experimental modality for treatment of systemic malignancies such as small cell lung cancer (SCLC). We have identified the human achaete-scute homolog 1 (hASH1) and enhancer of zeste homolog 2 (EZH2) genes as highly upregulated in SCLC compar...

  9. Implied adjusted volatility functions: Empirical evidence from Australian index option market

    Science.gov (United States)

    Harun, Hanani Farhah; Hafizah, Mimi

    2015-02-01

    This study aims to investigate implied adjusted volatility functions under different Leland option pricing models and to assess whether use of a specified implied adjusted volatility function can improve option valuation accuracy. The implied adjusted volatility is investigated in the context of Standard and Poor's/Australian Stock Exchange (S&P/ASX) 200 index options over the course of 2001-2010, a period covering the global financial crisis from mid-2007 until the end of 2008. Both in- and out-of-sample tests resulted in approximately similar pricing errors across the different Leland models. Results indicate that symmetric and asymmetric models of both the moneyness ratio and the logarithmic transformation of moneyness provide the overall best results in both the during- and post-crisis periods. We find that each interval (pre-, during and post-crisis) is subject to a different implied adjusted volatility function that best explains the index options. Hence, it is tremendously important to identify the intervals beforehand when investigating the implied adjusted volatility function.

  10. Analysing Discursive Practices in Legal Research: How a Single Remark Implies a Paradigm

    Directory of Open Access Journals (Sweden)

    Paul van den Hoven

    2017-12-01

    Full Text Available Different linguistic theories of meaning (semantic theories imply different methods to discuss meaning. Discussing meaning is what legal practitioners frequently do to decide legal issues and, subsequently, legal scholars analyse in their studies these discursive practices of parties, judges and legal experts. Such scholarly analysis reveals a methodical choice on how to discuss meaning and therefore implies positioning oneself towards a semantic theory of meaning, whether the scholar is aware of this or not. Legal practitioners may not be bound to be consistent in their commitment to semantic theories, as their task is to decide legal issues. Legal scholars, however, should be consistent because commitment to a semantic theory implies a distinct position towards important legal theoretical doctrines. In this paper three examples are discussed that require an articulated position of the legal scholar because the discursive practices of legal practitioners show inconsistencies. For each of these examples it can be shown that a scholar’s methodic choice implies commitment to a specific semantic theory, and that adopting such a theory implies a distinct position towards the meaning of the Rule of Law, the separation of powers doctrine and the institutional position of the judge.

  11. Hashing, Randomness and Dictionaries

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    to the similarity to a bookshelf dictionary, which contains a set of words and has an explanation associated with each word. In the static version of the problem the set is fixed, whereas in the dynamic version, insertions and deletions of elements are possible. The approach taken is that of the theoretical...
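
    The dynamic dictionary problem sketched above (lookups plus insertions and deletions) is illustrated by this minimal chained hash table; the class and method names are illustrative, not the thesis's constructions.

    ```python
    class ChainedDict:
        # Dynamic dictionary via hashing with separate chaining:
        # each bucket holds the (key, value) pairs that hash to it.
        def __init__(self, n_buckets=64):
            self.buckets = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def insert(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:               # overwrite an existing key
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))

        def lookup(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            return None                    # absent key

        def delete(self, key):
            bucket = self._bucket(key)
            bucket[:] = [(k, v) for k, v in bucket if k != key]
    ```

    In the static version of the problem the key set is fixed in advance, so insert and delete disappear and the structure can be tuned (e.g., via perfect hashing) for worst-case constant-time lookups.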

  12. Derandomization, Hashing and Expanders

    DEFF Research Database (Denmark)

    Ruzic, Milan

    . The central question in this area of computational complexity is "P = BPP?". Instead of derandomizing whole complexity classes, one may work on derandomizing concrete problems. This approach trades generality for the possibility of much better performance bounds. There are a few common techniques

  13. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes

  14. Moment generating functions and Normalized implied volatilities: unification and extension via Fukasawa's pricing formula

    OpenAIRE

    De Marco, Stefano; Martini, Claude

    2017-01-01

    We extend the model-free formula of [Fukasawa 2012] for $\mathbb{E}[\Psi(X_T)]$, where $X_T=\log S_T/F$ is the log-price of an asset, to functions $\Psi$ of exponential growth. The resulting integral representation is written in terms of normalized implied volatilities. Just as Fukasawa's work provides rigorous ground for Chriss and Morokoff's (1999) model-free formula for the log-contract (related to the variance swap implied variance), we prove an expression for the moment generating functi...

  15. An Hilbert space approach for a class of arbitrage free implied volatilities models

    OpenAIRE

    Brace, A.; Fabbri, G.; Goldys, B.

    2007-01-01

    We present a Hilbert space formulation for the set of implied volatility models introduced in \cite{BraceGoldys01}, in which the authors studied conditions for a family of European call options, varying in maturity time $T$ and strike price $K$, to be arbitrage free. The arbitrage-free conditions give a system of stochastic PDEs for the evolution of the implied volatility surface $\hat\sigma_t(T,K)$. We will focus on the family obtained by fixing a strike $K$ and varying $T$. In order to...

  16. Size Does Matter: Implied Object Size is Mentally Simulated During Language Comprehension

    NARCIS (Netherlands)

    de Koning, Bjorn B.; Wassenburg, Stephanie I.; Bos, Lisanne T.; Van der Schoot, Menno

    2017-01-01

    Embodied theories of language comprehension propose that readers construct a mental simulation of described objects that contains perceptual characteristics of their real-world referents. The present study is the first to investigate directly whether implied object size is mentally simulated during

  17. The Effect of Implied Performer Age and Group Membership on Evaluations of Music Performances

    Science.gov (United States)

    Harrington, Ann M.

    2018-01-01

    This study examined the effects of implied performer age and group membership on listeners' evaluations of music performances. Undergraduate music majors (n = 23), nonmusic majors (n = 17), and members of a New Horizons ensemble (n = 16) were presented with six 30-second excerpts of concert band performances. Excerpts were presented to all…

  18. Covariance of time-ordered products implies local commutativity of fields

    International Nuclear Information System (INIS)

    Greenberg, O.W.

    2006-01-01

    We formulate Lorentz covariance of a quantum field theory in terms of covariance of time-ordered products (or other Green's functions). This formulation of Lorentz covariance implies spacelike local commutativity or anticommutativity of fields, sometimes called microscopic causality or microcausality. With this formulation microcausality does not have to be taken as a separate assumption

  19. Latent Integrated Stochastic Volatility, Realized Volatility, and Implied Volatility: A State Space Approach

    DEFF Research Database (Denmark)

    Bach, Christian; Christensen, Bent Jesper

    process is downward biased. Implied volatility performs better than any of the alternative realized measures when forecasting future integrated volatility. The results are largely similar across the stock market (S&P 500), bond market (30-year U.S. T-bond), and foreign currency exchange market ($/£ )....

  20. No, Virginia, It's Not True What They Say About Publicity's "Implied Third-Party Endorsement" Effect.

    Science.gov (United States)

    Hallahan, Kirk

    1999-01-01

    Re-examines "implied third-party endorsement" as an explanation of publicity's effectiveness. Argues that any effect involves inferences by audience members who use biased processing that favors news and disfavors advertising. Suggests that the presentation of information as news is not necessarily perceived by audiences as an…

  1. Implied Volatility of Interest Rate Options: An Empirical Investigation of the Market Model

    DEFF Research Database (Denmark)

    Christiansen, Charlotte; Hansen, Charlotte Strunk

    2002-01-01

    We analyze the empirical properties of the volatility implied in options on the 13-week US Treasury bill rate. These options have not been studied previously. It is shown that a European style put option on the interest rate is equivalent to a call option on a zero-coupon bond. We apply the LIBOR...

  2. Level Shifts in Volatility and the Implied-Realized Volatility Relation

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; de Magistris, Paolo Santucci

    We propose a simple model in which realized stock market return volatility and implied volatility backed out of option prices are subject to common level shifts corresponding to movements between bull and bear markets. The model is estimated using the Kalman filter in a generalization to the multivariate case of the univariate level shift technique by Lu and Perron (2008). An application to the S&P500 index and a simulation experiment show that the recently documented empirical properties of strong persistence in volatility and forecastability of future realized volatility from current implied volatility, which have been interpreted as long memory (or fractional integration) in volatility and fractional cointegration between implied and realized volatility, are accounted for by occasional common level shifts.

  3. Asymptotic Expansions of the Lognormal Implied Volatility: A Model Free Approach

    OpenAIRE

    Cyril Grunspan

    2011-01-01

    We invert the Black-Scholes formula. We consider the cases of low strike, large strike, short maturity and large maturity. We give explicitly the first 5 terms of the expansions. A method to compute all the terms by induction is also given. At the money, we have a closed-form formula for the implied lognormal volatility in terms of a power series in the call price.
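    The at-the-money inversion the abstract refers to can be checked numerically. The sketch below is our own illustration (function names and parameter values are not from the paper): it uses the model-free ATM identity C/F = 2Φ(σ√T/2) − 1, whose exact inversion is σ√T = 2Φ⁻¹((1 + C/F)/2), and compares it with the leading term of the power series in the call price, which reduces to the well-known Brenner-Subrahmanyam approximation σ√T ≈ √(2π)·C/F.

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def bs_atm_call(sigma, T, F=1.0):
    """Undiscounted Black-Scholes call struck at the forward (K = F)."""
    v = sigma * math.sqrt(T)
    return F * (N.cdf(v / 2) - N.cdf(-v / 2))

def atm_implied_vol(price, T, F=1.0):
    """Exact ATM inversion: sigma * sqrt(T) = 2 * Phi^{-1}((1 + C/F) / 2)."""
    return 2 * N.inv_cdf((1 + price / F) / 2) / math.sqrt(T)

sigma, T = 0.25, 1.5
c = bs_atm_call(sigma, T)
assert abs(atm_implied_vol(c, T) - sigma) < 1e-9  # round trip is exact

# Leading term of the power series in the call price
# (the Brenner-Subrahmanyam approximation):
approx = math.sqrt(2 * math.pi / T) * c
```

    For small total variance the one-term approximation is already within a fraction of a volatility point of the exact value; the paper's higher-order terms refine it further.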

  4. Application of the Bisection and Newton-Raphson Algorithms in Estimating Implied Volatility

    Directory of Open Access Journals (Sweden)

    Komang Dharmawan

    2012-11-01

    Full Text Available Volatility is a quantity measuring how far a stock price moves over a given period; it can also be interpreted as the percentage standard deviation of a stock's daily price changes. According to the theory developed by Black and Scholes in 1973, all option prices with the same underlying asset and time to maturity but different exercise values will have the same implied volatility. The Black-Scholes model can be used to estimate the implied volatility of a stock by finding a numerical solution of the inverse of the Black-Scholes equation. This paper demonstrates how to compute the implied volatility of a stock by assuming that the Black-Scholes model is correct and that option contracts with the same lifetime have the same price. Using option price data for Sony Corporation (SNE), Cisco Systems, Inc. (CSCO), and Canon, Inc. (CNJ), it is found that implied volatility gives a cheaper price than the option price computed from historical volatility. Moreover, in the iterations obtained, the Newton-Raphson method converges faster than the Bisection method.
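    The procedure the abstract describes, numerically inverting the Black-Scholes formula, can be sketched as follows. This is our own illustration (pricer, root-finders, and parameter values are not taken from the paper): Newton-Raphson divides the pricing error by vega at each step, while bisection halves a bracketing interval.

```python
import math
from statistics import NormalDist

N = NormalDist()

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N.cdf(d1) - K * math.exp(-r * T) * N.cdf(d2)

def vega(S, K, r, T, sigma):
    """Sensitivity of the call price to sigma (Newton's derivative)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * N.pdf(d1)

def implied_vol_newton(price, S, K, r, T, sigma=0.2, tol=1e-10):
    for _ in range(100):
        diff = bs_call(S, K, r, T, sigma) - price
        if abs(diff) < tol:
            break
        sigma -= diff / vega(S, K, r, T, sigma)  # Newton-Raphson step
    return sigma

def implied_vol_bisection(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:  # price is increasing in sigma
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic "market" quote generated at sigma = 0.25; both methods recover it,
# Newton-Raphson typically in far fewer iterations than bisection.
quote = bs_call(100, 110, 0.03, 0.5, 0.25)
```

    Both root-finders return the same implied volatility; the difference, consistent with the paper's finding, is only in convergence speed.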

  5. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility: A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of implied normal volatility in terms of the time-value of a European call option, and formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.

  6. Revisiting the long memory dynamics of implied-realized volatility relation: A new evidence from wavelet band spectrum regression

    OpenAIRE

    Barunik, Jozef; Barunikova, Michaela

    2015-01-01

    This paper revisits the fractional co-integrating relationship between ex-ante implied volatility and ex-post realized volatility. Previous studies on stock index options have found biases and inefficiencies in implied volatility as a forecast of future volatility. It is argued that the concept of corridor implied volatility (CIV) should be used instead of the popular model-free option-implied volatility (MFIV) when assessing the relation as the latter may introduce bias to the estimation. In...

  7. Does “quorum sensing” imply a new type of biological information?

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio

    2002-01-01

    When dealing with biological communication and information, unifying concepts are necessary in order to couple the different “codes” that are being inductively “cracked” and defined at different emergent and “de-emergent” levels of the biological hierarchy. In this paper I compare the type of biological information implied by genetic information with that implied in the concept of “quorum sensing” (which refers to a prokaryotic cell-to-cell communication system) in order to explore if such integration is being achieved. I use the Lux operon paradigm and the Vibrio fischeri – Euprymna scolopes symbiotic partnership to exemplify the emergence of informational contexts along the biological hierarchy (from molecules to ecologies). I suggest that the biosemiotic epistemological framework can play an integrative role to overcome the limits of dyadic mechanistic descriptions when relating...

  8. Individual chaos implies collective chaos for weakly mixing discrete dynamical systems

    International Nuclear Information System (INIS)

    Liao Gongfu; Ma Xianfeng; Wang Lidong

    2007-01-01

    Let (X,d) be a metric space and (X,f) a discrete dynamical system, where f:X->X is a continuous function. Let f-bar denote the natural extension of f to the space of all non-empty compact subsets of X endowed with the Hausdorff metric induced by d. In this paper we investigate some dynamical properties of f and f-bar. It is proved that f is weakly mixing (mixing) if and only if f-bar is weakly mixing (mixing, respectively). From this, we deduce that weak mixing of f implies transitivity of f-bar; further, if f is mixing or weakly mixing, then chaoticity of f (individual chaos) implies chaoticity of f-bar (collective chaos), and if X is a closed interval then f-bar is chaotic (in the sense of Devaney) if and only if f is weakly mixing.

  9. A COMPARISON OF THE EFFICIENCY OF THE NEWTON-RAPHSON, SECANT, AND BISECTION METHODS IN ESTIMATING STOCK IMPLIED VOLATILITIES

    Directory of Open Access Journals (Sweden)

    IDA AYU EGA RAHAYUNI

    2016-01-01

    Full Text Available The Black-Scholes model assumes that volatility is constant, and known with certainty, during the lifetime of the option. However, this does not fit what happens in the real market. Therefore, the volatility has to be estimated. Implied volatility is the volatility estimated from a market mechanism that is considered a reasonable way to assess the volatility's value. This study aimed to compare the Newton-Raphson, Secant, and Bisection methods in estimating the implied volatility of PT Telkom Indonesia Tbk (TLK) stock. It found that the three methods yield the same implied volatilities, with the Newton-Raphson method converging to the root more rapidly than the other two and attaining the smallest relative error compared with the Secant and Bisection methods.
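    The secant method compared in this study sits between the other two: like Newton-Raphson it uses a local slope, but it estimates that slope from the two previous iterates instead of computing vega analytically. A minimal sketch, with our own pricer and parameters (not the paper's data):

```python
import math
from statistics import NormalDist

N = NormalDist()

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * N.cdf(d1) - K * math.exp(-r * T) * N.cdf(d1 - sigma * math.sqrt(T))

def implied_vol_secant(price, S, K, r, T, s0=0.1, s1=0.5, tol=1e-10):
    """Secant iteration on f(sigma) = BS(sigma) - price.

    Replaces Newton's analytic derivative (vega) with a finite-difference
    slope through the last two iterates; no derivative is required.
    """
    f0 = bs_call(S, K, r, T, s0) - price
    f1 = bs_call(S, K, r, T, s1) - price
    for _ in range(100):
        if abs(f1) < tol or f1 == f0:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = bs_call(S, K, r, T, s1) - price
    return s1

# Recover the volatility behind a synthetic quote generated at sigma = 0.35.
target = bs_call(100, 90, 0.02, 0.75, 0.35)
```

    On a smooth, monotone objective like the Black-Scholes call price, the secant method converges superlinearly, slower than Newton-Raphson but much faster than bisection, matching the ranking reported in the abstract.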

  10. Selecting the Best Forecasting-Implied Volatility Model Using Genetic Programming

    Directory of Open Access Journals (Sweden)

    Wafa Abdelmalek

    2009-01-01

    Full Text Available Volatility is a crucial variable in option pricing and hedging strategies. The aim of this paper is to provide some initial evidence of the empirical relevance of genetic programming to volatility forecasting. Using real data on S&P500 index options, the ability of genetic programming to forecast Black-Scholes implied volatility is compared between time series samples and moneyness-time to maturity classes. Total and out-of-sample mean squared errors are used as forecasting performance measures. Comparisons reveal that the time series model seems to be more accurate in forecasting implied volatility than moneyness-time to maturity models. Overall, results are strongly encouraging and suggest that the genetic programming approach works well in solving financial problems.

  11. The Short-Time Behaviour of VIX Implied Volatilities in a Multifactor Stochastic Volatility Framework

    DEFF Research Database (Denmark)

    Barletta, Andrea; Nicolato, Elisa; Pagliarani, Stefano

    error bounds for VIX futures, options and implied volatilities. In particular, we derive exact asymptotic results for VIX implied volatilities, and their sensitivities, in the joint limit of short time-to-maturity and small log-moneyness. The obtained expansions are explicit, based on elementary functions, and they neatly uncover how the VIX skew depends on the specific choice of the volatility and the vol-of-vol processes. Our results are based on perturbation techniques applied to the infinitesimal generator of the underlying process. This methodology has been previously adopted to derive approximations of equity (SPX) options. However, the generalizations needed to cover the case of VIX options are by no means straightforward as the dynamics of the underlying VIX futures are not explicitly known. To illustrate the accuracy of our technique, we provide numerical implementations for a selection...

  12. Women’s stories implying aspects of anti-Judaism with Christological depiction in Matthew

    Directory of Open Access Journals (Sweden)

    In-Cheol Shin

    2014-10-01

    Full Text Available This study focuses on the women’s stories that imply aspects of anti-Judaism within Matthew’s depiction of Christology, which is called Matthew’s theology. In fact, Matthew’s community opposed the Jewish system and Jewish leaders and parted from its parent body. Even though Matthew’s community was still similar to the Jewish system, it had significant differences as well. The study discusses the aspects of anti-Judaism that appear in the women’s stories, which include the genealogy of Jesus, the haemorrhaging woman, the Canaanite woman, and the women at the cross and Jesus’ tomb. This study shows proof and examples of anti-Judaism within the stories and thoroughly analyses them. It can therefore be confirmed that the women’s stories imply aspects of anti-Judaism within the Christological depictions shaped by Matthew’s theological tendency.

  13. No-Arbitrage Condition of Option Implied Volatility and Bandwidth Selection

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš; Tichý, T.

    2014-01-01

    Roč. 17, č. 3 (2014), s. 751-755 ISSN 0972-0073 R&D Projects: GA ČR(CZ) GA13-25911S Institutional support: RVO:67985556 Keywords : Option Pricing * Implied Volatility * DAX Index * Local polynomial smoothing Subject RIV: AH - Economics Impact factor: 0.222, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kopa-0429805.pdf

  14. HISTORICAL AND IMPLIED VOLATILITY: AN INVESTIGATION INTO NSE NIFTY FUTURES AND OPTIONS

    OpenAIRE

    N R Parasuraman; P.Janaki Ramudu

    2011-01-01

    The broad objective of the paper is to develop an understanding of the movement of volatility of the market portfolio over a fair period, and of how divergent implied volatility has been from this estimate. It uses the volatility cone, the volatility smile, and the volatility surface as parameters. The study takes percentiles of volatility over different rolling periods. The Hoadley Options Calculator is used for calculation and analysis. The study empirically...

  15. Portfolio Optimization under Local-Stochastic Volatility: Coefficient Taylor Series Approximations & Implied Sharpe Ratio

    OpenAIRE

    Lorig, Matthew; Sircar, Ronnie

    2015-01-01

    We study the finite horizon Merton portfolio optimization problem in a general local-stochastic volatility setting. Using model coefficient expansion techniques, we derive approximations for both the value function and the optimal investment strategy. We also analyze the `implied Sharpe ratio' and derive a series approximation for this quantity. The zeroth-order approximations of the value function and optimal investment strategy correspond to those obtained by Merton (1969) when the risky...

  16. Predictable dynamics in implied volatility smirk slope : evidence from the S&P 500 options

    OpenAIRE

    Onan, Mustafa

    2012-01-01

    Ankara : The Department of Management, İhsan Doğramacı Bilkent University, 2012. Thesis (Master's) -- Bilkent University, 2012. Includes bibliographical references. This study aims to investigate whether there are predictable patterns in the dynamics of implied volatility smirk slopes extracted from the intraday market prices of S&P 500 index options. I compare forecasts obtained from a short memory ARMA model and a long memory ARFIMA model within an out-of-sample context ov...

  17. How "ought" exceeds but implies "can": Description and encouragement in moral judgment.

    Science.gov (United States)

    Turri, John

    2017-11-01

    This paper tests a theory about the relationship between two important topics in moral philosophy and psychology. One topic is the function of normative language, specifically claims that one "ought" to do something. Do these claims function to describe moral responsibilities, encourage specific behavior, or both? The other topic is the relationship between saying that one "ought" to do something and one's ability to do it. In what respect, if any, does what one "ought" to do exceed what one "can" do? The theory tested here has two parts: (1) "ought" claims function to both describe responsibilities and encourage people to fulfill them (the dual-function hypothesis); (2) the two functions relate differently to ability, because the encouragement function is limited by the person's ability, but the descriptive function is not (the interaction hypothesis). If this theory is correct, then in one respect "ought implies can" is false because people have responsibilities that exceed their abilities. But in another respect "ought implies can" is legitimate because it is not worthwhile to encourage people to do things that exceed their ability. Results from two behavioral experiments support the theory that "ought" exceeds but implies "can." Results from a third experiment provide further evidence regarding an "ought" claim's primary function and how contextual features can affect the interpretation of its functions. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Implied and Local Volatility Surfaces for South African Index and Foreign Exchange Options

    Directory of Open Access Journals (Sweden)

    Antonie Kotzé

    2015-01-01

    Full Text Available Certain exotic options cannot be valued using closed-form solutions or even by numerical methods assuming constant volatility. Many exotics are priced in a local volatility framework. Pricing under local volatility has become a field of extensive research in finance, and various models have been proposed to overcome the shortcomings of the Black-Scholes model, which assumes a constant volatility. The Johannesburg Stock Exchange (JSE) lists exotic options on its Can-Do platform. Most exotic options listed on the JSE’s derivative exchanges are valued by local volatility models. These models need a local volatility surface. Dupire derived a mapping from implied volatilities to local volatilities. The JSE uses this mapping to generate the relevant local volatility surfaces and further uses Monte Carlo and finite difference methods when pricing exotic options. In this document we discuss various practical issues that influence the successful construction of implied and local volatility surfaces such that pricing engines can be implemented successfully. We focus on arbitrage-free conditions and the choice of calibrating functionals. We illustrate our methodologies by studying the implied and local volatility surfaces of South African equity index and foreign exchange options.

  19. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  20. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  1. Allocentrically implied target locations are updated in an eye-centred reference frame.

    Science.gov (United States)

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.
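    The network described above, tonotopically tuned units responding by resonance and sharpened by lateral inhibition, can be caricatured very loosely as follows. This toy is our own illustration only: the paper's gradient frequency network uses nonlinear oscillators, whereas this sketch uses linear damped resonators, keeping just the two qualitative ingredients named in the abstract.

```python
import math

def resonance_response(natural_freqs, drive_freq, damping=0.1):
    """Steady-state amplitude of each damped resonator under a sinusoidal drive;
    the amplitude peaks for units tuned closest to the driving frequency."""
    return [1.0 / math.sqrt((f**2 - drive_freq**2) ** 2 + (damping * drive_freq) ** 2)
            for f in natural_freqs]

def lateral_inhibition(amps, strength=0.3):
    """Suppress each unit by its immediate neighbours, sharpening the peak."""
    out = []
    for i, a in enumerate(amps):
        neighbours = [amps[j] for j in (i - 1, i + 1) if 0 <= j < len(amps)]
        out.append(max(0.0, a - strength * sum(neighbours)))
    return out

# A chromatic octave of "tonotopically tuned" units starting at A3 (220 Hz),
# driven by a tone near C4; the most salient unit is the one tuned closest to it.
freqs = [220.0 * 2 ** (k / 12) for k in range(13)]
salience = lateral_inhibition(resonance_response(freqs, drive_freq=261.6))
winner = freqs[max(range(len(salience)), key=salience.__getitem__)]
```

    The winning unit is the C4-tuned resonator; in the paper, the analogous pattern of oscillatory traces, rather than a single winner, is read out as pitch salience for chord estimation.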

  3. A reporting system for endometrial cytology: Cytomorphologic criteria-Implied risk of malignancy.

    Science.gov (United States)

    Margari, Niki; Pouliakis, Abraham; Anoinos, Dionysios; Terzakis, Emmanouil; Koureas, Nikolaos; Chrelias, Charalampos; Marios Makris, George; Pappas, Assimakis; Bilirakis, Evripidis; Goudeli, Christina; Damaskou, Vasileia; Papantoniou, Nicolaos; Panayiotides, Ioannis; Karakitsos, Petros

    2016-11-01

    There have been various attempts to assess endometrial lesions on cytological material obtained via direct endometrial sampling. The majority of efforts focus on the description of cytological criteria that lead to classification systems resembling histological reporting formats. These systems have low reproducibility, especially in cases of atypical hyperplasia and well differentiated carcinomas. Moreover, they are not linked to the implied risk of malignancy. The material was collected from women examined at the outpatient departments of four participating hospitals. We analyzed 866 consecutive, histologically confirmed cases. Sample collection was performed using the EndoGyn device, and samples were processed via liquid-based cytology, namely the ThinPrep technique. The diagnostic categories and criteria were established by two cytopathologists experienced in endometrial cytology; the performance of the proposed reporting format was assessed on the basis of histological outcome, and the implied risk of malignancy was calculated. The proposed six diagnostic categories are as follows: (i) nondiagnostic or unsatisfactory; (ii) without evidence of hyperplasia or malignancy; (iii) atypical cells of endometrium of undetermined significance; (iv) atypical cells of endometrium of low probability for malignancy; (v) atypical cells of endometrium of high probability for malignancy; and (vi) malignant. The risk of malignancy was 1.42% ± 0.98%, 44.44% ± 32.46% (nine cases), 4.30% ± 4.12%, 89.80% ± 8.47%, and 97.81% ± 2.45%, respectively. We propose a clinically oriented classification scheme consisting of diagnostic categories with well determined criteria. Each diagnostic category is linked with an implied risk of malignancy; thus, clinicians may decide on patient management and eventually reduce unnecessary interventional diagnostic procedures. Diagn. Cytopathol. 2016;44:888-901. © 2016 Wiley Periodicals, Inc.

  4. Negative masses, even if isolated, imply self-acceleration, hence a catastrophic world

    International Nuclear Information System (INIS)

    Cavalleri, G.; Tonni, E.

    1997-01-01

    The conjecture of the existence of negative masses together with ordinary positive masses leads to runaway motions even if no self-reaction is considered. Pollard and Dunning-Davies have established further constraints: a modification of the principle of least action, the restriction that negative masses can only exist at negative temperature, and the requirement that they be adiabatically separated from positive masses. Here it is shown that the self-reaction on a single isolated negative mass implies a runaway motion. Consequently, the consideration of self-fields and the relevant self-reaction excludes negative masses even if isolated.

  5. Large deviations and stochastic volatility with jumps: asymptotic implied volatility for affine models

    OpenAIRE

    Antoine Jacquier; Martin Keller-Ressel; Aleksandar Mijatovic

    2011-01-01

    Let $\sigma_t(x)$ denote the implied volatility at maturity $t$ for a strike $K=S_0 e^{xt}$, where $x\in\mathbb{R}$ and $S_0$ is the current value of the underlying. We show that $\sigma_t(x)$ has a uniform (in $x$) limit as maturity $t$ tends to infinity, given by the formula $\sigma_\infty(x)=\sqrt{2}\big(h^*(x)^{1/2}+(h^*(x)-x)^{1/2}\big)$, for $x$ in some compact neighbourhood of zero, in the class of affine stochastic volatility models. The function $h^*$ is the convex dual of the limiting cumulant gen...

  6. VaR and CVaR Implied in Option Prices

    Directory of Open Access Journals (Sweden)

    Giovanni Barone Adesi

    2016-02-01

    Full Text Available VaR (Value at Risk) and CVaR (Conditional Value at Risk) are implied by option prices. Their relationships to option prices are derived initially under the pricing measure; the derivation does not require assumptions about the distribution of portfolio returns. The effects of changes of measure are modest at the short horizons typically used in applications. The computation of CVaR from option prices is very convenient, because this measure is not elicitable, which makes direct comparisons of statistical inferences from market data problematic.
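    For orientation, the two risk measures themselves are simple to state: VaR is a quantile of the loss distribution, and CVaR is the mean loss beyond that quantile. The Monte Carlo sketch below is our own illustration of the definitions (the paper instead backs both measures out of option prices), checked against the known values for standard-normal losses.

```python
import math
import random

def var_cvar(losses, alpha=0.95):
    """Empirical VaR (the alpha-quantile of losses) and
    CVaR (the mean loss beyond the VaR)."""
    s = sorted(losses)
    idx = math.ceil(alpha * len(s)) - 1
    tail = s[idx:]
    return s[idx], sum(tail) / len(tail)

# Check against standard-normal losses, where theory gives approximately
# VaR_95 = 1.645 and CVaR_95 = phi(1.645) / 0.05 = 2.063.
random.seed(0)
losses = [random.gauss(0.0, 1.0) for _ in range(100_000)]
var95, cvar95 = var_cvar(losses)
```

    Note that CVaR is always at least as large as VaR at the same level; the paper's point is that the option-implied route makes CVaR available without estimating the return distribution at all.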

  7. Large fault fabric of the Ninetyeast Ridge implies near-spreading ridge formation

    Digital Repository Service at National Institute of Oceanography (India)

    Sager, W.W.; Paul, C.F.; Krishna, K.S.; Pringle, M.S.; Eisin, A.E.; Frey, F.A.; Rao, D.G.; Levchenko, O.V.

    of the high ridge. At 26°S, prominent NE-SW oriented lineations extend southwest from the ridge. Some appear to connect with N-S fracture zone troughs east of NER, implying that the NE-SW features are fracture zone scars formed after the change... to the ridge (Fig. 3). This is especially true for NER south of ~4°S. Where KNOX06RR crossed a gravity lineation, negative gradient features correspond to troughs whereas positive gradient features result from igneous basement highs (Fig. 3...

  8. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    International Nuclear Information System (INIS)

    Gulisashvili, Archil; Stein, Elias M.

    2010-01-01

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  9. Scalar utility theory and proportional processing: What does it actually imply?

    Science.gov (United States)

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2016-09-07

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
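    The core SUT mechanism, scalar (Weber-like) noise on remembered reward amounts, can be illustrated with a closed-form choice probability. A small sketch under assumed parameters of my own (the reward values and noise coefficient are illustrative, not from the article), showing risk aversion for amounts when means are equal:

```python
import math

def p_greater(mean_a, sd_a, mean_b, sd_b):
    # P(A > B) for independent normal random variables A and B.
    z = (mean_a - mean_b) / math.hypot(sd_a, sd_b)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_choose_variable(fixed, low, high, gamma):
    # Scalar noise: a remembered amount m is represented as Normal(m, gamma*m);
    # the variable option pays `low` or `high` with probability 1/2 each.
    # The animal chooses whichever sampled value is larger.
    return 0.5 * (p_greater(low, gamma * low, fixed, gamma * fixed)
                  + p_greater(high, gamma * high, fixed, gamma * fixed))

# Equal means (fixed 10 vs a 50/50 gamble on 5 or 15): preference below 0.5,
# i.e. risk aversion for amounts under these illustrative parameters.
p = p_choose_variable(10.0, 5.0, 15.0, gamma=0.3)
print(p < 0.5)  # → True
```

    Because the standard deviation grows with the mean, the high outcome is represented more noisily than the low one, which tilts sample-based comparisons against the variable option here; other parameter settings can reverse the preference, consistent with point (1) of the abstract.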

  10. Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A

    2017-07-01

    Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.

  11. Transformations of visual memory induced by implied motions of pattern elements.

    Science.gov (United States)

    Finke, R A; Freyd, J J

    1985-10-01

    Four experiments measured distortions in short-term visual memory induced by displays depicting independent translations of the elements of a pattern. In each experiment, observers saw a sequence of 4 dot patterns and were instructed to remember the third pattern and to compare it with the fourth. The first three patterns depicted translations of the dots in consistent, but separate directions. Error rates and reaction times for rejecting the fourth pattern as different from the third were substantially higher when the dots in that pattern were displaced slightly forward, in the same directions as the implied motions, compared with when the dots were displaced in the opposite, backward directions. These effects showed little variation across interstimulus intervals ranging from 250 to 2,000 ms, and did not depend on whether the displays gave rise to visual apparent motion. However, they were eliminated when the dots in the fourth pattern were displaced by larger amounts in each direction, corresponding to the dot positions in the next and previous patterns in the same inducing sequence. These findings extend our initial report of the phenomenon of "representational momentum" (Freyd & Finke, 1984a), and help to rule out alternatives to the proposal that visual memories tend to undergo, at least to some extent, the transformations implied by a prior sequence of observed events.

  12. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    Science.gov (United States)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
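    The ordering step (3) can be sketched with a plain Kruskal minimum spanning tree over pairwise image relatedness, taking edge cost as 1 − relatedness. The toy relatedness matrix below is invented for illustration; a real pipeline would fill it from the k-d forest / pHash comparison described above:

```python
# Kruskal's MST over an image-relatedness graph; edge cost = 1 - relatedness,
# so the MST links each image to its most related neighbours first.

def mst_edges(n, relatedness):
    # relatedness: dict {(i, j): score in (0, 1]} for i < j  (toy input)
    edges = sorted(((1.0 - r, i, j) for (i, j), r in relatedness.items()))
    parent = list(range(n))

    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    chosen = []
    for cost, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # joining two components: keep the edge
            parent[ri] = rj
            chosen.append((i, j))
    return chosen                     # n - 1 edges giving the matching order

# Four images; pair (0, 1) most related, image 3 loosely tied to image 2.
rel = {(0, 1): 0.9, (0, 2): 0.4, (1, 2): 0.7, (1, 3): 0.2, (2, 3): 0.5}
print(mst_edges(4, rel))  # → [(0, 1), (1, 2), (2, 3)]
```

    Matching only along (a thinned version of) these edges, rather than all n(n−1)/2 pairs, is what yields the reduction in matching count reported in the abstract.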

  13. Collision-free motion coordination of heterogeneous robots

    International Nuclear Information System (INIS)

    Ko, Nak Yong; Seo, Dong Jin; Simmons, Reid G.

    2008-01-01

    This paper proposes a method to coordinate the motion of multiple heterogeneous robots on a network. The proposed method uses prioritization and avoidance. Priority is assigned to each robot; a robot with lower priority avoids the robots of higher priority. To avoid collision with other robots, elastic force and potential field force are used. Also, the method can be applied separately to the motion planning of a part of a robot from that of the other parts of the robot. This is useful for application to the robots of the type mobile manipulator or highly redundant robots. The method is tested by simulation, and it results in smooth and adaptive coordination in an environment with multiple heterogeneous robots
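    The priority-based avoidance rule can be illustrated with a toy potential-field step: a lower-priority robot feels a repulsive force from each higher-priority robot inside an influence radius and steers accordingly. All names, gains, and the inverse-square form below are illustrative assumptions, not the paper's formulation:

```python
import math

def repulsive_force(pos, higher_priority, radius=2.0, gain=1.0):
    """Sum of repulsive forces on a lower-priority robot (toy potential field).

    pos: (x, y) of this robot; higher_priority: positions it must avoid.
    Only robots closer than `radius` contribute; force points away from them.
    """
    fx = fy = 0.0
    for ox, oy in higher_priority:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < radius:
            mag = gain * (1.0 / d - 1.0 / radius) / (d * d)
            fx += mag * dx
            fy += mag * dy
    return fx, fy

# A higher-priority robot at the origin pushes this robot further along +x.
fx, fy = repulsive_force((1.0, 0.0), [(0.0, 0.0)])
print(fx > 0.0, fy == 0.0)  # → True True
```

    In the scheme described above, only the lower-priority robot evaluates such a force, so the higher-priority robot's path is undisturbed.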

  14. Collision-free motion coordination of heterogeneous robots

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Nak Yong [Chosun University, Gwangju (Korea, Republic of); Seo, Dong Jin [RedOne Technologies, Gwangju (Korea, Republic of); Simmons, Reid G. [Carnegie Mellon University, Pennsylvania (United States)

    2008-11-15

    This paper proposes a method to coordinate the motion of multiple heterogeneous robots on a network. The proposed method uses prioritization and avoidance. Priority is assigned to each robot; a robot with lower priority avoids the robots of higher priority. To avoid collision with other robots, elastic force and potential field force are used. Also, the method can be applied separately to the motion planning of a part of a robot from that of the other parts of the robot. This is useful for application to the robots of the type mobile manipulator or highly redundant robots. The method is tested by simulation, and it results in smooth and adaptive coordination in an environment with multiple heterogeneous robots

  15. Collision free pick up and movement of large objects

    International Nuclear Information System (INIS)

    Drotning, W.D.; McKee, G.R.

    1998-01-01

    An automated system is described for the sensor-based precision docking and manipulation of large objects. Past work in the remote handling of large nuclear waste containers is extensible to the problems associated with the handling of large objects such as coils of flat steel in industry. Computer vision and ultrasonic proximity sensing as described here are used to control the precision docking of large objects, and swing damped motion control of overhead cranes is used to control the position of the pick up device and suspended payload during movement. Real-time sensor processing and model-based control are used to accurately position payloads

  16. Relative sensory sparing in the diabetic foot implied through vibration testing

    Directory of Open Access Journals (Sweden)

    Todd O'Brien

    2013-09-01

    Full Text Available Background: The dorsal aspect of the hallux is often cited as the anatomic location of choice for vibration testing in the feet of diabetic patients. To validate this preference, vibration tests were performed and compared at the hallux and 5th metatarsal head in diabetic patients with established neuropathy. Methods: Twenty-eight neuropathic, diabetic patients and 17 non-neuropathic, non-diabetic patients underwent timed vibration testing (TVT) with a novel 128 Hz electronic tuning fork (ETF) at the hallux and 5th metatarsal head. Results: TVT values in the feet of diabetic patients were found to be reduced at both locations compared to controls. Unexpectedly, these values were significantly lower at the hallux (P < 0.001) compared to the 5th metatarsal head. Conclusion: This study confirms the hallux as the most appropriate location for vibration testing and implies relative sensory sparing at the 5th metatarsal head, a finding not previously reported in diabetic patients.

  17. [Discussion on ideological concept implied in traditional reinforcing and reducing method of acupuncture].

    Science.gov (United States)

    Li, Suyun; Zhao, Jingsheng

    2017-11-12

    The formation and development of the traditional reinforcing and reducing method of acupuncture was rooted in the traditional culture of China and based on the ancients' particular understanding of nature, life and disease; its principles and methods were therefore inevitably influenced by the philosophical culture and medical concepts of the time. Through close study of the Inner Canon of Huangdi and of representative reinforcing and reducing methods of acupuncture, the implied ideological concepts, including the contradiction view and profit-loss view of ancient dialectics, yin-yang balance theory, the concept of life flow, the monophyletic theory of qi, the theory of the existence of disease-evil, yin-yang astrology theory, and the theory of inter-promotion of the five elements, are summarized and analyzed. A clarified and systematic understanding of the guiding ideology behind the reinforcing and reducing method of acupuncture could significantly promote understanding of its principles, methods, content and manipulation.

  18. The chain rule implies Tsirelson's bound: an approach from generalized mutual information

    International Nuclear Information System (INIS)

    Wakakuwa, Eyuri; Murao, Mio

    2012-01-01

    In order to analyze an information theoretical derivation of Tsirelson's bound based on information causality, we introduce a generalized mutual information (GMI), defined as the optimal coding rate of a channel with classical inputs and general probabilistic outputs. In the case where the outputs are quantum, the GMI coincides with the quantum mutual information. In general, the GMI does not necessarily satisfy the chain rule. We prove that Tsirelson's bound can be derived by imposing the chain rule on the GMI. We formulate a principle, which we call the no-supersignaling condition, which states that the assistance of nonlocal correlations does not increase the capability of classical communication. We prove that this condition is equivalent to the no-signaling condition. As a result, we show that Tsirelson's bound is implied by the nonpositivity of the quantitative difference between information causality and no-supersignaling. (paper)

  19. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  20. The dynamic conditional relationship between stock market returns and implied volatility

    Science.gov (United States)

    Park, Sung Y.; Ryu, Doojin; Song, Jeongseok

    2017-09-01

    Using the dynamic conditional correlation multivariate generalized autoregressive conditional heteroskedasticity (DCC-MGARCH) model, we empirically examine the dynamic relationship between stock market returns (KOSPI200 returns) and implied volatility (VKOSPI), as well as their statistical mechanics, in the Korean market, a representative and leading emerging market. We consider four macroeconomic variables (exchange rates, risk-free rates, term spreads, and credit spreads) as potential determinants of the dynamic conditional correlation between returns and volatility. Of these macroeconomic variables, the change in exchange rates has a significant impact on the dynamic correlation between KOSPI200 returns and the VKOSPI, especially during the recent financial crisis. We also find that the risk-free rate has a marginal effect on this dynamic conditional relationship.

  1. Does a massive neutrino imply to go beyond the standard model?

    International Nuclear Information System (INIS)

    Le Diberder, F.; Cohen-Tannoudji, G.; Davier, M.

    2002-01-01

    This article gathers the 15 contributions to this seminar. The purpose of the seminar was to determine to what extent the standard model is challenged by massive neutrinos. A non-zero mass for neutrinos, even of a few eV, would solve the problem of the missing mass of the universe, and it would mean no more need for supersymmetry and its neutralinos. A massless neutrino theoretically implies a symmetry and an interaction that are not described by the standard model. In some respects, a non-zero mass appears natural within the framework of the standard model, and for some scientists the smallness of this value could be a hint of the need for new physics

  2. What implies the good work for registered nurses in municipal elderly care in Sweden?

    Science.gov (United States)

    Josefsson, Karin; Aling, Jenny; Östin, Britt-Louise

    2011-08-01

    The aim was to describe registered nurses' perceptions of what the good work implies to them in municipal elderly care. A descriptive design and a structured questionnaire specifically designed for this study were used. Sixty housing units for older people and 213 nurses participated, with a response rate of 62%. The good work included the following aspects: intellectually stimulating without guilt feelings; freedom and independence with the possibility to influence; having appreciative and pleasant fellow workers and a fair and understanding manager; a good physical and risk-free environment; work security and a steady income with the possibility of improving salary through work effort; work effort should be beneficial to others; innovative thinking and initiative should be highly valued; and pride in work without compromising personal values. Employers must take this into consideration to retain those nurses already employed and recruit nurses to municipal elderly care.

  3. Optimal Plant Carbon Allocation Implies a Biological Control on Nitrogen Availability

    Science.gov (United States)

    Prentice, I. C.; Stocker, B. D.

    2015-12-01

    The degree to which nitrogen availability limits the terrestrial C sink under rising CO2 is a key uncertainty in carbon cycle and climate change projections. Results from ecosystem manipulation studies and meta-analyses suggest that plant C allocation to roots adjusts dynamically under varying degrees of nitrogen availability and other soil fertility parameters. In addition, the ratio of biomass production to GPP appears to decline under nutrient scarcity. This reflects increasing plant C exudation into the soil (Cex) with decreasing nutrient availability. Cex is consumed by an array of soil organisms and may imply an improvement of nutrient availability to the plant. Thus, N availability is under biological control, but incurs a C cost. In spite of clear observational support, this concept is left unaccounted for in Earth system models. We develop a model for the coupled cycles of C and N in terrestrial ecosystems to explore optimal plant C allocation under rising CO2 and its implications for the ecosystem C balance. The model follows a balanced growth approach, accounting for the trade-offs between leaf versus root growth and Cex in balancing C fixation and N uptake. We assume that Cex is proportional to root mass, and that the ratio of N uptake (Nup) to Cex is proportional to inorganic N concentration in the soil solution. We further assume that Cex is consumed by N2-fixing processes if the ratio of Nup:Cex falls below the inverse of the C cost of N2-fixation. Our analysis thereby accounts for the feedbacks between ecosystem C and N cycling and stoichiometry. We address the question of how the plant C economy will adjust under rising atmospheric CO2 and what this implies for the ecosystem C balance and the degree of N limitation.

  4. Enacted and implied stigma for dementia in a community in south-west Nigeria.

    Science.gov (United States)

    Adebiyi, Akindele O; Fagbola, Motunrayo A; Olakehinde, Olaide; Ogunniyi, Adesola

    2016-07-01

    Dementia is a chronic progressive disease that mostly affects the elderly. There is often a stigma surrounding dementia patients because of poor awareness about the disease. In Nigeria, this stigma and related attitudes have not been fully explored. In this study, we assessed the attitude of people towards demented individuals in a transitional community in Nigeria. The study used a mixed methods approach. Focused group discussions exploring the concept of dementia were conducted among six community groups, and quantitative data was obtained from an interviewer-administered questionnaire. A total of 313 respondents were selected with a cluster sampling technique. Only 212 respondents (67.7%) were aware of dementia. 'Memory loss disease', 'ageing disease', 'disease of insanity', 'brain disorder', 'disease of forgetfulness', and 'dull brain' are the common names used to describe dementia in the community. Enacted stigma was evident as 36% of respondents felt dementia was associated with shame and embarrassment in the community. Implied stigma was evident in another third that opined that demented individuals would prefer not to know or let others know that they have the disease. Also, 28% were of the opinion that people do not take those with dementia seriously. Of the 22 (10.4%) that reported having received structured information about dementia, 16 (72.7%) got the information from health facilities. Qualitative data revealed the presence of enacted stigma in the community as some referred to affected individuals by derogatory names such as 'madman'. Some statements from the focus group discussion participants also gave useful insights into the scorn with which demented individuals are sometimes treated. The presence of enacted and implied stigma related to dementia within the community calls for concern. More research efforts are needed to unravel the burden of stigma within communities and best practice for stigma-reducing interventions. © 2015 The Authors

  5. Explaining the level of credit spreads: Option-implied jump risk premia in a firm value model

    NARCIS (Netherlands)

    Cremers, K.J.M.; Driessen, J.; Maenhout, P.

    2008-01-01

    We study whether option-implied jump risk premia can explain the high observed level of credit spreads. We use a structural jump-diffusion firm value model to assess the level of credit spreads generated by option-implied jump risk premia. Prices and returns of equity index and individual options

  6. 32 CFR 701.120 - Processing requests that cite or imply PA, Freedom of Information (FOIA), or PA/FOIA.

    Science.gov (United States)

    2010-07-01

    ... Privacy Program § 701.120 Processing requests that cite or imply PA, Freedom of Information (FOIA), or PA... maximum release of information allowed under the Acts. (d) Processing time limits. DON activities shall... 32 National Defense 5 2010-07-01 2010-07-01 false Processing requests that cite or imply PA...

  7. 78 FR 13665 - L.E. Bell Construction Company, Inc.; Notice of Termination of Exemption by Implied Surrender and...

    Science.gov (United States)

    2013-02-28

    ... Construction Company, Inc.; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments... initiated by the Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b... reserves the right to revoke an exemption if any term or condition of the exemption is violated. The...

  8. 76 FR 52657 - Herschel L. Webster; Revonda Amthor; Notice of Termination of Exemption by Implied Surrender and...

    Science.gov (United States)

    2011-08-23

    .... Webster; Revonda Amthor; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments... initiated by the Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b... exemption if any term or condition of the exemption is violated. The project has not operated since 1996 and...

  9. The role of implied motion in engaging audiences for health promotion: encouraging naps on a college campus.

    Science.gov (United States)

    Mackert, Michael; Lazard, Allison; Guadagno, Marie; Hughes Wagner, Jessica

    2014-01-01

    Lack of sleep among college students negatively impacts health and academic outcomes. Building on research showing that implied motion imagery increases brain activity, this project tested visual design strategies to increase viewers' engagement with a health communication campaign promoting napping to improve sleep habits. Participants (N = 194) were recruited from a large southwestern university in October 2012. Utilizing an experimental design, participants were assigned to 1 of 3 conditions: an implied-motion superhero spokes-character, a static superhero spokes-character, and a control group. The use of implied motion did not achieve the hypothesized effect on message elaboration, but superheroes are a promising persuasive tool for health promotion campaigns aimed at college audiences. Implications for sleep health promotion campaigns and the role of implied motion in message design strategies are discussed, as well as future directions for research on the depiction of implied motion as it relates to theoretical development.

  10. Exploring the assumed invariance of implied emission factors for forest biomass in greenhouse gas inventories

    International Nuclear Information System (INIS)

    Smith, James E.; Heath, Linda S.

    2010-01-01

    Reviews of each nation's annual greenhouse gas inventory submissions including forestland are part of the ongoing reporting process of the United Nations Framework Convention on Climate Change. Goals of these reviews include improving quality and consistency within and among reports. One method of facilitating comparisons is the use of a standard index such as an implied emission factor (IEF), which for forest biomass indicates net rate of carbon emission or sequestration per area. Guidance on the use of IEFs in reviews is limited, but there is an expectation that values should be relatively constant both over time and across spatial scales. To address this hypothesis, we examine IEFs over time, derived from U.S. forests at plot-, state-, and national-levels. Results show that at increasingly aggregated levels, relative heterogeneity decreases but can still be substantial. A net increase in U.S. whole-forest IEFs over time is consistent with results from temperate forests of nations in the European Community. IEFs are better viewed as a distribution of values rather than one constant value principally because of sensitivities to productivity, disturbance, and land use change, which can all vary considerably across a nation's forest land.

  11. Universal Property of Quantum Gravity implied by Bekenstein-Hawking Entropy and Boltzmann formula

    International Nuclear Information System (INIS)

    Saida, Hiromi

    2013-01-01

    We search for a universal property of quantum gravity. By 'universal', we mean independence from any existing model of quantum gravity (such as superstring theory, loop quantum gravity, causal dynamical triangulation, and so on). To do so, we try to base our discussion on theories established by experiment. Thus, we focus our attention on the thermodynamical and statistical-mechanical basis of black hole thermodynamics: let us assume that the Bekenstein-Hawking entropy is given by the Boltzmann formula applied to the underlying theory of quantum gravity. Under this assumption, the conditions justifying the Boltzmann formula, together with the uniqueness of the Bekenstein-Hawking entropy, imply a reasonable universal property of quantum gravity. This universal property indicates a repulsive gravity at the Planck length scale; otherwise, stationary black holes could not be regarded as thermal equilibrium states of gravity. Further, at the semi-classical level, we discuss a possible correction to the Einstein equation that generates repulsive gravity at the Planck length scale.

  12. Large-scale subduction of continental crust implied by India-Asia mass-balance calculation

    Science.gov (United States)

    Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.

    2016-11-01

    Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today--represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.

  13. Market-implied spread for earthquake CAT bonds: financial implications of engineering decisions.

    Science.gov (United States)

    Damnjanovic, Ivan; Aslan, Zafer; Mander, John

    2010-12-01

    In the event of natural and man-made disasters, owners of large-scale infrastructure facilities (assets) need contingency plans to effectively restore the operations within the acceptable timescales. Traditionally, the insurance sector provides the coverage against potential losses. However, there are many problems associated with this traditional approach to risk transfer including counterparty risk and litigation. Recently, a number of innovative risk mitigation methods, termed alternative risk transfer (ART) methods, have been introduced to address these problems. One of the most important ART methods is catastrophe (CAT) bonds. The objective of this article is to develop an integrative model that links engineering design parameters with financial indicators including spread and bond rating. The developed framework is based on a four-step structural loss model and transformed survival model to determine expected excess returns. We illustrate the framework for a seismically designed bridge using two unique CAT bond contracts. The results show a nonlinear relationship between engineering design parameters and market-implied spread. © 2010 Society for Risk Analysis.

  14. Analysis of model implied volatility for jump diffusion models: Empirical evidence from the Nordpool market

    International Nuclear Information System (INIS)

    Nomikos, Nikos K.; Soldatos, Orestes A.

    2010-01-01

    In this paper we examine the importance of mean reversion and spikes in the stochastic behaviour of the underlying asset when pricing options on power. We propose a model that is flexible in its formulation and captures the stylized features of power prices in a parsimonious way. The main feature of the model is that it incorporates two different speeds of mean reversion to capture the differences in price behaviour between normal and spiky periods. We derive semi-closed form solutions for European option prices using transform analysis and then examine the properties of the implied volatilities that the model generates. We find that the presence of jumps generates prominent volatility skews which depend on the sign of the mean jump size. We also show that mean reversion reduces the volatility smile as time to maturity increases. In addition, mean reversion induces volatility skews particularly for ITM options, even in the absence of jumps. Finally, jump size volatility and jump intensity mainly affect the kurtosis and thus the curvature of the smile with the former having a more important role in making the volatility smile more pronounced and thus increasing the kurtosis of the underlying price distribution.

  15. Low Seismic Attenuation in Southern New England Lithosphere Implies Little Heating by the Upwelling Asthenosphere

    Science.gov (United States)

    Lamoureux, J. M.; Menke, W. H.

    2017-12-01

    The Northern Appalachian Anomaly (NAA) is a patch of the asthenosphere in southern New England that is unusually hot given its passive margin setting. Previous research has detected large seismic wave delays that imply a temperature 770 °C higher than that of the mantle below the adjacent craton at the same depth. A key outstanding issue is whether the NAA interacts with the lithosphere above it (e.g. by heating it up). We study this issue using Po and So waves from two magnitude >5.5 earthquakes near the Puerto Rico Trench. These waves, propagating in the cold oceanic lithosphere at near-Moho speeds, deliver high-frequency energy to the shallow continental lithosphere. We hypothesized that: (1) once within the continental lithosphere, Po and So experience attenuation with distance that can be quantified by a quality factor Q, and that (2) any heating of the lithosphere above the NAA would lead to a lower Q than in regions further north or south along the continental margin. Corresponding Po and So velocities would also be lower. The decay rates of Po and So are estimated using least-squares applied to RMS coda amplitudes measured from digital seismograms from stations in northeastern North America, corrected for instrument response. A roughly log-linear decrease in amplitude is observed, corresponding to P- and S-wave quality factors in the ranges 394-1500 and 727-6847, respectively. Measurements are made for four margin-perpendicular geographical bands, with one band overlapping the NAA. We detect no effect of the NAA on these amplitudes; 95% confidence bounds overlap in every case. Furthermore, all quality factors are much higher than the value of 100 predicted by lab experiments for near-solidus mantle rocks. These results suggest that the NAA is not causing significant heating of the lithosphere above it. The shear velocities, however, are about 10% slower above the NAA - an effect that may be fossil, reflecting processes that occurred millions of years ago.
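    The log-linear amplitude decay fit described above can be sketched numerically. Assuming the standard coda decay model A(t) = A0·exp(−πft/Q), ln A is linear in travel time and Q follows from the fitted slope; the frequency, times and amplitudes below are invented for illustration, not measurements from the study.

    ```python
    import numpy as np

    def estimate_q(times, amplitudes, freq_hz):
        """Least-squares fit of ln(amplitude) vs. travel time; Q from the slope."""
        slope, _ = np.polyfit(times, np.log(amplitudes), 1)
        return -np.pi * freq_hz / slope

    # Synthetic coda amplitudes: true Q = 800 at 5 Hz, noise-free for clarity.
    t = np.linspace(10.0, 120.0, 50)            # travel times (s)
    true_q, f = 800.0, 5.0
    amp = 2.5 * np.exp(-np.pi * f * t / true_q)  # A(t) = A0 * exp(-pi f t / Q)

    q_hat = estimate_q(t, amp, f)                # recovers Q = 800
    ```

    With noisy field data the same fit applies; the residual scatter then sets the width of the confidence bounds on Q, which is how the overlap test above is carried out.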

  16. Microstructures imply cataclasis and authigenic mineral formation control geomechanical properties of New Zealand's Alpine Fault

    Science.gov (United States)

    Schuck, B.; Janssen, C.; Schleicher, A. M.; Toy, V. G.; Dresen, G.

    2018-05-01

    The Alpine Fault is capable of generating large (MW > 8) earthquakes, is the main geohazard on South Island, NZ, and is late in its 250-291-year seismic cycle. To minimize its hazard potential, it is indispensable to identify and understand the processes influencing the geomechanical behavior and strength evolution of the fault. High-resolution microstructural, mineralogical and geochemical analyses of the Alpine Fault's core demonstrate wall-rock fragmentation, assisted by mineral dissolution, and cementation resulting in the formation of a fine-grained principal slip zone (PSZ). A complex network of anastomosing and mutually cross-cutting calcite veins implies that faulting occurred during episodes of dilation, slip and sealing. Fluid-assisted dilatancy leads to a significant volume increase accommodated by vein formation in the fault core. Undeformed euhedral chlorite crystals and calcite veins that have cut footwall gravels demonstrate that these processes occurred very close to the Earth's surface. Microstructural evidence indicates that cataclastic processes dominate the deformation, and we suggest that powder lubrication and grain rolling, particularly influenced by abundant nanoparticles, play a key role in the fault core's velocity-weakening behavior rather than frictional sliding. This is further supported by the absence of smectite, which is reasonable given recently measured geothermal gradients of more than 120 °C km-1 and the impermeable nature of the PSZ, which both limit the growth of this phase and restrict its stability to shallow depths. Our observations demonstrate that high-temperature fluids can influence authigenic mineral formation and thus control the fault's geomechanical behavior and the cyclic evolution of its strength.

  17. Quantifying differences in land use emission estimates implied by definition discrepancies

    Science.gov (United States)

    Stocker, B. D.; Joos, F.

    2015-11-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.

  18. Large differences in land use emission quantifications implied by definition discrepancies

    Science.gov (United States)

    Stocker, B. D.; Joos, F.

    2015-03-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review the conceptual differences of eLUC quantification methods and apply an Earth System Model to demonstrate that what is claimed to represent total eLUC differs by up to ~20% when quantified from ESM vs. offline vegetation models. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies and global carbon budget accountings should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.

  19. Object Localization Does Not Imply Awareness of Object Category at the Break of Continuous Flash Suppression

    Directory of Open Access Journals (Sweden)

    Florian Kobylka

    2017-06-01

    Full Text Available In continuous flash suppression (CFS), a dynamic noise masker, presented to one eye, suppresses conscious perception of a test stimulus, presented to the other eye, until the suppressed stimulus comes to awareness after a few seconds. But what do we see breaking the dominance of the masker in the transition period? We addressed this question with a dual task in which observers indicated (i) whether the test object was left or right of the fixation mark (localization) and (ii) whether it was a face or a house (categorization). As done recently by Stein et al. (2011a), we used two experimental varieties to rule out confounds with decisional strategy. In the terminated mode, stimulus and masker were presented for distinct durations, and the observers were asked to give both judgments at the end of the trial. In the self-paced mode, presentation lasted until the observers responded. In the self-paced mode, b-CFS durations for object categorization were about half a second longer than for object localization. In the terminated mode, correct categorization rates were consistently lower than correct detection rates, measured at five duration intervals ranging up to 2 s. In both experiments we observed an upright-face advantage compared to inverted faces and houses, as concurrently reported in b-CFS studies. Our findings reveal that more time is necessary for observers to judge the nature of the object than to judge that there is “something other” than the noise, which can be localized but not recognized. This suggests gradual transitions in the first break of CFS. Further, the results imply that suppression is such that no cues to object identity are conveyed in potential “leaks” of CFS (Gelbard-Sagiv et al., 2016).

  20. 16 CFR 303.40 - Use of terms in written advertisements that imply presence of a fiber.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Use of terms in written advertisements that imply presence of a fiber. 303.40 Section 303.40 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS UNDER SPECIFIC ACTS OF CONGRESS RULES AND REGULATIONS UNDER THE TEXTILE FIBER PRODUCTS IDENTIFICATION ACT § 303.40 Use of terms in written...

  1. The Role of Implied Volatility in Forecasting Future Realized Volatility and Jumps in Foreign Exchange, Stock and Bond Markets

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper

    We study the forecasting of future realized volatility in the stock, bond, and foreign exchange markets, as well as the continuous sample path and jump components of this, from variables in the information set, including implied volatility backed out from option prices. Recent nonparametric...

  2. The Role of Implied Motion in Engaging Audiences for Health Promotion: Encouraging Naps on a College Campus

    Science.gov (United States)

    Mackert, Michael; Lazard, Allison; Guadagno, Marie; Hughes Wagner, Jessica

    2014-01-01

    Objective: Lack of sleep among college students negatively impacts health and academic outcomes. Building on research that implied motion imagery increases brain activity, this project tested visual design strategies to increase viewers' engagement with a health communication campaign promoting napping to improve sleep habits. Participants:…

  3. Dynamic Estimation of Volatility Risk Premia and Investor Risk Aversion from Option-Implied and Realized Volatilities

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Gibson, Michael; Zhou, Hao

    experiment confirms that the procedure works well in practice. Implementing the procedure with actual S&P500 option-implied volatilities and high-frequency five-minute-based realized volatilities indicates significant temporal dependencies in the estimated stochastic volatility risk premium, which we in turn...

  4. 75 FR 16099 - Mr. Jerry McMillan and Ms. Christine Smith; Notice of Termination of License by Implied Surrender...

    Science.gov (United States)

    2010-03-31

    ... McMillan and Ms. Christine Smith; Notice of Termination of License by Implied Surrender and Soliciting... surrender b. Project No.: P-9907-018 c. Licensees: Mr. Jerry McMillan and Ms. Christine Smith d. Name of... ] 62,282). The project was transferred to Mr. Jerry McMillan and Ms. Christine Smith by order on...

  5. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    Science.gov (United States)

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.

  6. Revisiting the long memory dynamics of the implied-realized volatility relationship: New evidence from the wavelet regression

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Hlínková, M.

    2016-01-01

    Roč. 54, č. 1 (2016), s. 503-514 ISSN 0264-9993 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : wavelet band spectrum regression * corridor implied volatility * realized volatility * fractional cointegration Subject RIV: AH - Economics Impact factor: 1.481, year: 2016 http://library.utia.cas.cz/separaty/2016/E/barunik-0456186.pdf

  7. The information content of implied volatilities of options on eurodeposit futures traded on the LIFFE: is there long memory?

    OpenAIRE

    Cifarelli, giulio

    2002-01-01

    Under rather general conditions Black - Scholes implied volatilities from at-the-money options appropriately quantify, in each period, the market expectations of the average volatility of the return of the underlying asset until contract expiration. The efficiency of these expectation estimates is investigated here, for options on two major short term interest rate futures contracts traded at the LIFFE, using a long memory framework. Over the 1993 – 1997 time interval the performance of im...
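    Backing a Black-Scholes implied volatility out of an observed option price, as in the study above, amounts to inverting the pricing formula numerically. A minimal sketch by bisection follows; the contract parameters are invented for illustration, not taken from the LIFFE eurodeposit data.

    ```python
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
        """Bisection: the call price is increasing in sigma, so the root is unique."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if bs_call(S, K, T, r, mid) < price:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Round trip: price an at-the-money call at sigma = 0.20, then recover it.
    p = bs_call(100.0, 100.0, 0.5, 0.03, 0.20)
    iv = implied_vol(p, 100.0, 100.0, 0.5, 0.03)   # ≈ 0.20
    ```

    Newton's method using the vega converges faster, but bisection is robust here precisely because the call price is monotone in sigma.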

  8. Forecasting Daily Variability of the S and P 100 Stock Index using Historical, Realised and Implied Volatility Measurements

    OpenAIRE

    Koopman, Siem Jan; Jungbacker, Borus; Hol, Eugenie

    2004-01-01

    The increasing availability of financial market data at intraday frequencies has not only led to the development of improved volatility measurements but has also inspired research into their potential value as an information source for volatility forecasting. In this paper we explore the forecasting value of historical volatility (extracted from daily return series), of implied volatility (extracted from option pricing data) and of realised volatility (computed as the sum of squared high freq...

  9. Forecasting the density of oil futures returns using model-free implied volatility and high-frequency data

    International Nuclear Information System (INIS)

    Ielpo, Florian; Sevi, Benoit

    2013-09-01

    Forecasting the density of returns is useful for many purposes in finance, such as risk management activities, portfolio choice or derivative security pricing. Existing methods to forecast the density of returns either use prices of the asset of interest or option prices on this same asset. The latter method needs to convert the risk-neutral estimate of the density into a physical measure, which is computationally cumbersome. In this paper, we take the view of a practitioner who observes the implied volatility in the form of an index, namely the recent OVX, to forecast the density of oil futures returns for horizons going from 1 to 60 days. Using the recent methodology in Maheu and McCurdy (2011) to compute density predictions, we compare the performance of time series models using implied volatility and either daily or intra-daily futures prices. Our results indicate that models based on implied volatility deliver significantly better density forecasts at all horizons, which is in line with numerous studies delivering the same evidence for volatility point forecasts. (authors)

  10. Drought-breaking love: An analysis of the moral values implied in ‘Drought’ by Jan Rabie

    OpenAIRE

    C. N. van der Merwe

    1996-01-01

    In this article the tension in 20th century literary theory between absolutism and relativism is discussed. It is argued that, in spite of a movement from absolutism towards relativism, the age-old “absolute” values of truth, beauty and goodness have never been totally forsaken in the creation and the contemplation of literature. In an analysis of “Drought” by Jan Rabie, it is indicated how these values are implied and invoked in Rabie's short story. In conclusion, the fundamental value of lo...

  11. The implied volatility of index options: the evidence of options on the Australian SPI 200 index futures

    OpenAIRE

    Tanha, Hassan

    2017-01-01

    This thesis is a study of the implied volatility component of the Black and Scholes option-pricing model. A recurring finding in the thesis is that in-the-money and out-of ¬the-money options should not be regarded as being on a continuum, but rather as being inherently "different." Additionally, differences across these options are accounted for in relation to behavioural and consumption based models, which, in turn, provide an explanation for the volatility smile. These findings provide a re...

  12. A numerical method to estimate the parameters of the CEV model implied by American option prices: Evidence from NYSE

    International Nuclear Information System (INIS)

    Ballestra, Luca Vincenzo; Cecere, Liliana

    2016-01-01

    Highlights: • We develop a method to compute the parameters of the CEV model implied by American options. • This is the first procedure for calibrating the CEV model to American option prices. • The proposed approach is extensively tested on the NYSE market. • The novel method turns out to be very efficient in computing the CEV model parameters. • The CEV model provides only a marginal improvement over the lognormal model. - Abstract: We develop a highly efficient procedure to forecast the parameters of the constant elasticity of variance (CEV) model implied by American options. In particular, first of all, the American option prices predicted by the CEV model are calculated using an accurate and fast finite difference scheme. Then, the parameters of the CEV model are obtained by minimizing the distance between theoretical and empirical option prices, which yields an optimization problem that is solved using an ad-hoc numerical procedure. The proposed approach, which turns out to be very efficient from the computational standpoint, is used to test the goodness-of-fit of the CEV model in predicting the prices of American options traded on the NYSE. The results obtained reveal that the CEV model does not provide a very good agreement with real market data and yields only a marginal improvement over the more popular Black–Scholes model.
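    The calibration step described above, choosing model parameters that minimize the distance between theoretical and empirical option prices, can be sketched generically. Since the paper's CEV American-option pricer requires a finite-difference scheme, this illustration substitutes a one-parameter lognormal (Black-Scholes) stand-in pricer and a simple grid search; all quotes are synthetic, and the whole block is an assumption-laden sketch rather than the authors' procedure.

    ```python
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        # Stand-in pricer; the paper itself prices American options under CEV
        # with an accurate finite-difference scheme.
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def calibrate(quotes, S, T, r, grid):
        """Return the grid point minimizing the summed squared pricing error."""
        def sse(sigma):
            return sum((bs_call(S, K, T, r, sigma) - p) ** 2 for K, p in quotes)
        return min(grid, key=sse)

    # Synthetic market quotes generated from the stand-in pricer at sigma = 0.25.
    S, T, r = 100.0, 1.0, 0.02
    quotes = [(K, bs_call(S, K, T, r, 0.25))
              for K in (80.0, 90.0, 100.0, 110.0, 120.0)]
    grid = [0.05 + 0.01 * i for i in range(60)]   # sigma candidates 0.05 .. 0.64
    sigma_hat = calibrate(quotes, S, T, r, grid)  # recovers sigma = 0.25
    ```

    In practice the grid search would be replaced by the paper's ad-hoc optimizer, and the two CEV parameters (volatility scale and elasticity) would be fitted jointly; the objective, summed squared pricing error, is the same.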

  13. Drought-breaking love: An analysis of the moral values implied in ‘Drought’ by Jan Rabie

    Directory of Open Access Journals (Sweden)

    C. N. van der Merwe

    1996-04-01

    Full Text Available In this article the tension in 20th century literary theory between absolutism and relativism is discussed. It is argued that, in spite of a movement from absolutism towards relativism, the age-old “absolute” values of truth, beauty and goodness have never been totally forsaken in the creation and the contemplation of literature. In an analysis of “Drought” by Jan Rabie, it is indicated how these values are implied and invoked in Rabie's short story. In conclusion, the fundamental value of love or charity is discussed, a value which contains and supersedes the values of truth, beauty and goodness, and reconciles the tension between absolutism and relativism.

  14. Packaging signals in two single-stranded RNA viruses imply a conserved assembly mechanism and geometry of the packaged genome.

    Science.gov (United States)

    Dykeman, Eric C; Stockley, Peter G; Twarock, Reidun

    2013-09-09

    The current paradigm for assembly of single-stranded RNA viruses is based on a mechanism involving non-sequence-specific packaging of genomic RNA driven by electrostatic interactions. Recent experiments, however, provide compelling evidence for sequence specificity in this process both in vitro and in vivo. The existence of multiple RNA packaging signals (PSs) within viral genomes has been proposed, which facilitates assembly by binding coat proteins in such a way that they promote the protein-protein contacts needed to build the capsid. The binding energy from these interactions enables the confinement or compaction of the genomic RNAs. Identifying the nature of such PSs is crucial for a full understanding of assembly, which is an as yet untapped potential drug target for this important class of pathogens. Here, for two related bacterial viruses, we determine the sequences and locations of their PSs using Hamiltonian paths, a concept from graph theory, in combination with bioinformatics and structural studies. Their PSs have a common secondary structure motif but distinct consensus sequences and positions within the respective genomes. Despite these differences, the distributions of PSs in both viruses imply defined conformations for the packaged RNA genomes in contact with the protein shell in the capsid, consistent with a recent asymmetric structure determination of the MS2 virion. The PS distributions identified moreover imply a preferred, evolutionarily conserved assembly pathway with respect to the RNA sequence with potentially profound implications for other single-stranded RNA viruses known to have RNA PSs, including many animal and human pathogens. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Estimating the implied cost of carbon in future scenarios using a CGE model: The Case of Colorado

    International Nuclear Information System (INIS)

    Hannum, Christopher; Cutler, Harvey; Iverson, Terrence; Keyser, David

    2017-01-01

    Using Colorado as a case study, we develop a state-level computable general equilibrium (CGE) model that reflects the roles of coal, natural gas, wind, solar, and hydroelectricity in supplying electricity. We focus on the economic impact of implementing Colorado's existing Renewable Portfolio Standard, updated in 2013. This requires that 25% of state generation come from qualifying renewable sources by 2020. We evaluate the policy under a variety of assumptions regarding wind integration costs and assumptions on the persistence of federal subsidies for wind. Specifically, we estimate the implied price of carbon as the carbon price at which a state-level policy would pass a state-level cost-benefit analysis, taking account of estimated greenhouse gas emission reductions and ancillary benefits from corresponding reductions in criteria pollutants. Our findings suggest that without the Production Tax Credit (federal aid), the state policy of mandating renewable power generation (RPS) is costly to state actors, with an implied cost of carbon of about $17 per ton of CO2 with a 3% discount rate. Federal aid makes the decision between natural gas and wind nearly cost neutral for Colorado. - Highlights: • If variability cost is low and renewables are federally subsidized, RPS is beneficial. • With no PTC or high variability cost, drop in real consumption less than 0.1%. • With PTC phaseout and high variability cost ICP is low, between $7.16 and $11.45. • With no PTC and high variability cost ICP is between $12.66 and $17.34 per ton.

  16. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    Science.gov (United States)

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyo-e artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in a challenging tonic posture, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex would be a critical region for the perception of implied motion in instability.

  17. Health Complaints Associated with Poor Rental Housing Conditions in Arkansas: The Only State without a Landlord's Implied Warranty of Habitability.

    Science.gov (United States)

    Bachelder, Ashley E; Stewart, M Kate; Felix, Holly C; Sealy, Neil

    2016-01-01

    Arkansas is the only U.S. state that does not have a landlord's implied warranty of habitability, meaning tenants have a requirement for maintaining their rental properties at certain habitability standards, but landlords are not legally required to contribute to those minimum health and safety standards. This project assessed the possibility that this lack of landlord responsibility affects tenants' perceived health. Using surveys and interviews, we collected self-reported data on the prevalence and description of problems faced by renters who needed household repairs from their landlords. Of almost 1,000 renters, one-third of them had experienced a problem with their landlord making needed repairs; and one-quarter of those had a health issue they attributed to their housing conditions. Common issues included problems with plumbing, heating, or cooling systems, and pest or rodent control. Reported health problems included elevated stress levels, breathing problems, headaches, high blood pressure, and bites or infections. Hispanic respondents and those with less than a high school education were both significantly more likely to report problems with their landlords not making repairs as requested. These data suggest that the lack of landlord requirements may negatively impact the condition of rental properties and, therefore, may negatively impact the health of Arkansas renters.

  18. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle.

    Science.gov (United States)

    Sullivan, Benjamin W; Smith, W Kolby; Townsend, Alan R; Nasto, Megan K; Reed, Sasha C; Chazdon, Robin L; Cleveland, Cory C

    2014-06-03

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future.

  19. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle

    Science.gov (United States)

    Sullivan, Benjamin W.; Smith, William K.; Townsend, Alan R.; Nasto, Megan K.; Reed, Sasha C.; Chazdon, Robin L.; Cleveland, Cory C.

    2014-01-01

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future.

  20. Linkages among U.S. Treasury Bond Yields, Commodity Futures and Stock Market Implied Volatility: New Nonparametric Evidence

    Directory of Open Access Journals (Sweden)

    Vychytilova Jana

    2015-09-01

    Full Text Available This paper aims to explore specific cross-asset market correlations over the past fifteen-year period, from January 04, 1999 till April 01, 2015, and within four sub-phases covering both the crisis and the non-crisis periods. On the basis of multivariate statistical methods, we focus on investigating relations between selected well-known market indices: U.S. treasury bond yields (the 30-year treasury yield index, TYX, and the 10-year treasury yield, TNX); commodity futures (the TR/J CRB); and the implied volatility of the S&P 500 index (the VIX). We estimate relative logarithmic returns by using monthly close prices adjusted for dividends and splits and run normality and correlation analyses. This paper indicates that the TR/J CRB can be adequately modeled by a normal distribution, whereas the rest of the benchmarks do not come from a normal distribution. This paper, inter alia, points out some evidence of a statistically significant negative relationship between bond yields and the VIX in the past fifteen years and a statistically significant negative linkage between the TR/J CRB and the VIX since 2009. In rather general terms, this paper thereafter supports the a priori idea that financial markets are interconnected. Such knowledge can be beneficial for building and testing accurate financial market models, and particularly for understanding and recognizing market cycles.
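    The return construction and correlation step described above can be sketched as follows. The two monthly close series are invented stand-ins (built to move in opposite directions), not the TYX/VIX data analyzed in the paper.

    ```python
    import numpy as np

    def log_returns(prices):
        """Relative logarithmic returns from a series of (adjusted) close prices."""
        prices = np.asarray(prices, dtype=float)
        return np.diff(np.log(prices))

    # Two invented monthly close series moving in opposite directions.
    yield_like = [30.0, 31.5, 30.8, 32.0, 33.1, 32.5]
    vix_like   = [20.0, 18.5, 19.4, 17.8, 16.9, 17.6]

    r1, r2 = log_returns(yield_like), log_returns(vix_like)
    rho = np.corrcoef(r1, r2)[0, 1]   # Pearson correlation; negative here by construction
    ```

    On real data the same two lines of return construction would be followed by a normality test (e.g. on the return distribution of each series) before interpreting the Pearson correlations, mirroring the paper's workflow.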

  1. Corporate Moral Duties: Consequentialism, Collective Moral Agency and the “Ought” Implies “Can” Maxim

    Directory of Open Access Journals (Sweden)

    Leandro Martins Zanitelli

    2013-12-01

    Full Text Available The claim according to which corporations are morally responsible is a controversial one. At the same time, it is nowadays common to assign moral duties to companies, especially in work confronting the business and human rights issue. Can companies bear moral duties without being morally responsible? This article presents three different accounts of the duty to follow the course of action with the best consequences (consequentialist duty). The ascription of that duty to business is compatible with the claim that, by not being volitional agents, companies are not morally responsible for anything they do. The paper also addresses two possible objections against the claim that companies bear the duty of taking the course of action with the best consequences. These objections state that corporations are incapable of acting, be it in a general way (i.e. corporations do not possess the moral status of agents), be it regarding particular acts (the objection grounded on the “ought” implies “can” maxim).

  2. Motor facilitation during observation of implied motion: Evidence for a role of the left dorsolateral prefrontal cortex.

    Science.gov (United States)

    Mineo, Ludovico; Fetterman, Alexander; Concerto, Carmen; Warren, Michael; Infortuna, Carmenrita; Freedberg, David; Chusid, Eileen; Aguglia, Eugenio; Battaglia, Fortunato

    2018-06-01

    The phenomenon of motor resonance (the increase in motor cortex excitability during observation of actions) has been previously described. Transcranial magnetic stimulation (TMS) studies have demonstrated a similar effect during perception of implied motion (IM). The left dorsolateral prefrontal cortex (DLPFC) seems to be activated during action observation. However, the role of this brain area in motor resonance to IM is yet to be investigated. Fourteen healthy volunteers were enrolled in the study. We used transcranial direct current stimulation (tDCS) to stimulate the DLPFC, aiming to investigate whether stimulation with different polarities would affect the amplitude of motor evoked potentials collected during observation of images with and without IM. The results of our experiment indicated that cathodal tDCS over the left DLPFC prevented motor resonance during observation of IM. On the contrary, anodal and sham tDCS did not significantly modulate motor resonance to IM. The current study expands the understanding of the neural circuits engaged during observation of IM. Our results are consistent with the hypothesis that action understanding requires the interaction of large networks and that the left DLPFC plays a crucial role in generating motor resonance to IM. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Cone photoreceptor sensitivities and unique hue chromatic responses: correlation and causation imply the physiological basis of unique hues.

    Directory of Open Access Journals (Sweden)

    Ralph W Pridmore

    Full Text Available This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95-1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision.

  4. Phylogeography of a Tertiary relict plant, Meconopsis cambrica (Papaveraceae), implies the existence of northern refugia for a temperate herb.

    Science.gov (United States)

    Valtueña, Francisco J; Preston, Chris D; Kadereit, Joachim W

    2012-03-01

    The perennial herb Meconopsis cambrica, a western European endemic, is the only European species of the otherwise Himalayan genus Meconopsis and has been interpreted as a Tertiary relict species. Using rbcL and ITS sequence variation, we date the split between M. cambrica and its sister clade Papaver s.str. to the Middle to Upper Miocene (12.8 Myr, 6.4-19.2 Myr HPD). Within M. cambrica, cpDNA sequence variation reveals the existence of two groups of populations with a comparable level of genetic variation: a northern group from Great Britain, the Massif Central, the western Pyrenees and the Iberian System, and a southern group from the central and eastern Pyrenees. Populations from the Cantabrian Mountains were placed in both groups. Based on ITS sequence variation, the divergence between these two groups can be dated to 1.5 Myr (0.4-2.8 Myr HPD), and the age of the British populations is estimated as 0.37 Myr (0.0-0.9 Myr HPD). Amplified fragment length polymorphism results confirm the distinctive nature of the populations from Britain, the Massif Central and the central and eastern Pyrenees. These patterns of latitudinal variation of M. cambrica differ from patterns of longitudinal differentiation found in many other temperate species and imply glacial survival of the northern populations in northerly refugia. The primary differentiation into northern and southern cpDNA groups dates to near the onset of the Quaternary and suggests that an ancient phylogeographic pattern has survived through several glacial periods. Our data provide evidence that the species has persisted for a long period with a highly fragmented and probably very localized distribution. © 2012 Blackwell Publishing Ltd.

  5. Cone photoreceptor sensitivities and unique hue chromatic responses: correlation and causation imply the physiological basis of unique hues.

    Science.gov (United States)

    Pridmore, Ralph W

    2013-01-01

    This paper relates major functions at the start and end of the color vision process. The process starts with three cone photoreceptors transducing light into electrical responses. Cone sensitivities were once expected to be Red Green Blue color matching functions (to mix colors) but microspectrometry proved otherwise: they instead peak in yellowish, greenish, and blueish hues. These physiological functions are an enigma, unmatched with any set of psychophysical (behavioral) functions. The end-result of the visual process is color sensation, whose essential percepts are unique (or pure) hues red, yellow, green, blue. Unique hues cannot be described by other hues, but can describe all other hues, e.g., that hue is reddish-blue. They are carried by four opponent chromatic response curves but the literature does not specify whether each curve represents a range of hues or only one hue (a unique) over its wavelength range. Here the latter is demonstrated, confirming that opponent chromatic responses define, and may be termed, unique hue chromatic responses. These psychophysical functions also are an enigma, unmatched with any physiological functions or basis. Here both enigmas are solved by demonstrating the three cone sensitivity curves and the three spectral chromatic response curves are almost identical sets (Pearson correlation coefficients r from 0.95-1.0) in peak wavelengths, curve shapes, math functions, and curve crossover wavelengths, though previously unrecognized due to presentation of curves in different formats, e.g., log, linear. (Red chromatic response curve is largely nonspectral and thus derives from two cones.) Close correlation combined with deterministic causation implies cones are the physiological basis of unique hues. This match of three physiological and three psychophysical functions is unique in color vision.

  6. Differences in Radiation Dosimetry and Anorectal Function Testing Imply That Anorectal Symptoms May Arise From Different Anatomic Substrates

    International Nuclear Information System (INIS)

    Smeenk, Robert Jan; Hopman, Wim P.M.; Hoffmann, Aswin L.; Lin, Emile N.J.Th. van; Kaanders, Johannes H.A.M.

    2012-01-01

    … This implies that the anal wall and rectal wall should be considered separate organs in radiotherapy planning.

  7. Reduced 123I-BMIPP uptake implies decreased myocardial flow reserve in patients with chronic stable angina

    Energy Technology Data Exchange (ETDEWEB)

    Kageyama, Hiroyuki; Morita, Koichi; Katoh, Chietsugu; Mabuchi, Megumi; Tamaki, Nagara [Hokkaido University Graduate School of Medicine, Department of Nuclear Medicine, Sapporo (Japan); Tsukamoto, Takahiro; Noriyasu, Kazuyuki; Naya, Masanao [Hokkaido University, Department of Cardiovascular Medicine, Sapporo (Japan); Hokkaido University Graduate School of Medicine, Department of Nuclear Medicine, Sapporo (Japan); Kawai, Yuko [Hokko Memorial Hospital, Department of Cardiovascular Medicine, Sapporo (Japan)

    2006-01-01

    Long-chain fatty acid (LCFA) is the main energy source for normal myocardium at rest, but in ischemic myocardium, the main energy substrate shifts from LCFA to glucose. 123I-BMIPP is a radiolabeled LCFA analog. In chronic stable angina without previous infarction, we suppose that reduced 123I-BMIPP uptake is related to the substrate shift in myocardium with decreased myocardial flow reserve (MFR). The purpose of this study was to relate 123I-BMIPP uptake to rest myocardial blood flow (MBF), hyperemic MBF, and MFR assessed with 15O-water positron emission tomography (PET). We enrolled 21 patients with chronic stable angina without previous infarction, all of whom underwent 123I-BMIPP single-photon emission computed tomography (SPECT) and 15O-water PET. The left ventricle was divided into 13 segments. In each segment, rest MBF and hyperemic MBF were measured by PET. 123I-BMIPP uptake was evaluated as follows: score 0=normal, 1=slightly decreased uptake, 2=moderately decreased uptake, 3=severely decreased uptake, and 4=complete defect. 123I-BMIPP uptake was compared with rest MBF, hyperemic MBF, and MFR. The numbers of segments with 123I-BMIPP scores 0, 1, 2, 3, and 4 were 178, 40, 25, 24, and 0, respectively. The rest MBFs for scores 0, 1, 2, and 3 were 0.93±0.25, 0.86±0.21, 0.97±0.30, and 0.99±0.37 ml/min/g, respectively. The hyperemic MBFs for scores 0, 1, 2, and 3 were 2.76±1.29, 1.84±0.74, 1.37±0.39, and 1.08±0.40 ml/min/g, respectively. The MFRs for scores 0, 1, 2, and 3 were 3.01±1.38, 2.20±0.95, 1.44±0.22, and 1.10±0.26, respectively. As 123I-BMIPP uptake declined, hyperemic MBF and MFR decreased. In chronic stable angina without previous infarction, reduced 123I-BMIPP uptake implies decreased MFR. (orig.)

  8. Reduced 123I-BMIPP uptake implies decreased myocardial flow reserve in patients with chronic stable angina.

    Science.gov (United States)

    Kageyama, Hiroyuki; Morita, Koichi; Katoh, Chietsugu; Tsukamoto, Takahiro; Noriyasu, Kazuyuki; Mabuchi, Megumi; Naya, Masanao; Kawai, Yuko; Tamaki, Nagara

    2006-01-01

    Long-chain fatty acid (LCFA) is the main energy source for normal myocardium at rest, but in ischemic myocardium, the main energy substrate shifts from LCFA to glucose. 123I-BMIPP is a radiolabeled LCFA analog. In chronic stable angina without previous infarction, we suppose that reduced 123I-BMIPP uptake is related to the substrate shift in myocardium with decreased myocardial flow reserve (MFR). The purpose of this study was to relate 123I-BMIPP uptake to rest myocardial blood flow (MBF), hyperemic MBF, and MFR assessed with 15O-water positron emission tomography (PET). We enrolled 21 patients with chronic stable angina without previous infarction, all of whom underwent 123I-BMIPP single-photon emission computed tomography (SPECT) and 15O-water PET. The left ventricle was divided into 13 segments. In each segment, rest MBF and hyperemic MBF were measured by PET. 123I-BMIPP uptake was evaluated as follows: score 0=normal, 1=slightly decreased uptake, 2=moderately decreased uptake, 3=severely decreased uptake, and 4=complete defect. 123I-BMIPP uptake was compared with rest MBF, hyperemic MBF, and MFR. The numbers of segments with 123I-BMIPP scores 0, 1, 2, 3, and 4 were 178, 40, 25, 24, and 0, respectively. The rest MBFs for scores 0, 1, 2, and 3 were 0.93+/-0.25, 0.86+/-0.21, 0.97+/-0.30, and 0.99+/-0.37 ml/min/g, respectively. The hyperemic MBFs for scores 0, 1, 2, and 3 were 2.76+/-1.29, 1.84+/-0.74, 1.37+/-0.39, and 1.08+/-0.40 ml/min/g, respectively. The MFRs for scores 0, 1, 2, and 3 were 3.01+/-1.38, 2.20+/-0.95, 1.44+/-0.22, and 1.10+/-0.26, respectively. As 123I-BMIPP uptake declined, hyperemic MBF and MFR decreased. In chronic stable angina without previous infarction, reduced 123I-BMIPP uptake implies decreased MFR.

  9. Reduced 123I-BMIPP uptake implies decreased myocardial flow reserve in patients with chronic stable angina

    International Nuclear Information System (INIS)

    Kageyama, Hiroyuki; Morita, Koichi; Katoh, Chietsugu; Mabuchi, Megumi; Tamaki, Nagara; Tsukamoto, Takahiro; Noriyasu, Kazuyuki; Naya, Masanao; Kawai, Yuko

    2006-01-01

    Long-chain fatty acid (LCFA) is the main energy source for normal myocardium at rest, but in ischemic myocardium, the main energy substrate shifts from LCFA to glucose. 123I-BMIPP is a radiolabeled LCFA analog. In chronic stable angina without previous infarction, we suppose that reduced 123I-BMIPP uptake is related to the substrate shift in myocardium with decreased myocardial flow reserve (MFR). The purpose of this study was to relate 123I-BMIPP uptake to rest myocardial blood flow (MBF), hyperemic MBF, and MFR assessed with 15O-water positron emission tomography (PET). We enrolled 21 patients with chronic stable angina without previous infarction, all of whom underwent 123I-BMIPP single-photon emission computed tomography (SPECT) and 15O-water PET. The left ventricle was divided into 13 segments. In each segment, rest MBF and hyperemic MBF were measured by PET. 123I-BMIPP uptake was evaluated as follows: score 0=normal, 1=slightly decreased uptake, 2=moderately decreased uptake, 3=severely decreased uptake, and 4=complete defect. 123I-BMIPP uptake was compared with rest MBF, hyperemic MBF, and MFR. The numbers of segments with 123I-BMIPP scores 0, 1, 2, 3, and 4 were 178, 40, 25, 24, and 0, respectively. The rest MBFs for scores 0, 1, 2, and 3 were 0.93±0.25, 0.86±0.21, 0.97±0.30, and 0.99±0.37 ml/min/g, respectively. The hyperemic MBFs for scores 0, 1, 2, and 3 were 2.76±1.29, 1.84±0.74, 1.37±0.39, and 1.08±0.40 ml/min/g, respectively. The MFRs for scores 0, 1, 2, and 3 were 3.01±1.38, 2.20±0.95, 1.44±0.22, and 1.10±0.26, respectively. As 123I-BMIPP uptake declined, hyperemic MBF and MFR decreased. In chronic stable angina without previous infarction, reduced 123I-BMIPP uptake implies decreased MFR. (orig.)

  10. Biodistribution, pharmacokinetics and uptake ratio of 131I-4-Iodo-phenylacetic acid in normal and tumour implied animals

    International Nuclear Information System (INIS)

    Szuecs, Z.; Sello, T.; Sathekge, M.

    2012-01-01

    … per gram of tissue (%ID/g) is given in Table 1 for nine animals, as three were considered outliers, predominantly due to a high percentage of activity remaining in the tail after injection. The tumour to background ratio was calculated by comparing the muscle on the left flank as opposed to the right flank where the tumour was induced. The labeling via isotopic exchange yielded a low specific activity of the tracer, which meant that a substantial amount of 4-iodo-phenylacetic acid was injected. This would amount to 30 μg per rat, or 75 μg/kg, or 1.1 μg/ml blood (mass of rats 400 g and assuming 7% of body weight is blood). The LD50 value for phenylacetic acid is 1600 mg/kg for intraperitoneal injection in rats, which is orders of magnitude higher than the amount injected, and therefore one can assume the 0.075 mg/kg of injected 4-iodo-phenylacetic acid would not adversely interfere with the biological processes in the rat. The amount of phenylacetic acid in normal tissue is 16.8 μg/ml, which is also two orders of magnitude higher than the amount of 4-iodo-phenylacetic acid injected. This implies that the biodistribution of the tracer was not influenced by its metabolic product, due to equilibrium with the phenylacetic acid already present in the body. 131I-4-iodo-phenylacetic acid was successfully prepared and the biodistribution in rats recorded. As expected, no target organ was found after 5 h (although at the early stages a high cardiac blood pool uptake was recorded), with fast excretion from all organs via the kidneys into the urine. In the xenograft mouse study, a 4% tumour uptake and a tumour to background ratio of 2 were recorded after 5 h, although high activity levels remained in the blood at this time point.
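
The dose arithmetic stated above can be checked in a few lines (a sketch; the 400 g body mass and 7% blood fraction are the abstract's own assumptions, with blood density taken as ~1 g/ml):

```python
# Reproducing the dose arithmetic: 30 ug of carrier 4-iodo-phenylacetic
# acid injected into a 400 g rat, blood assumed to be 7% of body weight.
injected_ug = 30.0
rat_mass_g = 400.0

dose_ug_per_kg = injected_ug / (rat_mass_g / 1000.0)   # 75 ug/kg
blood_ml = 0.07 * rat_mass_g                           # ~28 ml of blood
conc_ug_per_ml = injected_ug / blood_ml                # ~1.1 ug/ml

# Safety margin against the intraperitoneal LD50 of phenylacetic acid
ld50_mg_per_kg = 1600.0
margin = ld50_mg_per_kg / (dose_ug_per_kg / 1000.0)

print(dose_ug_per_kg, round(conc_ug_per_ml, 2), round(margin))
```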

  11. Scaling and long-range dependence in option pricing V: Multiscaling hedging and implied volatility smiles under the fractional Black-Scholes model with transaction costs

    Science.gov (United States)

    Wang, Xiao-Tian

    2011-05-01

    This paper deals with the problem of discrete time option pricing using the fractional Black-Scholes model with transaction costs. Through the ‘anchoring and adjustment’ argument in a discrete time setting, a European call option pricing formula is obtained. The minimal price of an option under transaction costs is obtained. In addition, the relation between scaling and implied volatility smiles is discussed.

  12. Time-temperature-burial significance of Devonian anthracite implies former great (approx. 6.5 km) depth of burial of Catskill Mountains, New York

    International Nuclear Information System (INIS)

    Friedman, G.M.; Sanders, J.E.

    1982-01-01

    Specimens of coalified plant debris in Tully-correlative strata of the Gilboa Formation (uppermost Middle Devonian) within the eastern Catskill Mountains of New York State have been converted to anthracite having a vitrinite reflectance of 2.5%. This implies a level of organic metamorphism (LOM) of 16. The specimens are about 350 m.y. old; if 200 m.y. is taken as the duration of the time of exposure to the maximum geothermal temperature, then the LOM of 16 and other thermal indicators imply a maximum temperature of 190°C. Using a geothermal gradient of 26°C/km (17°F per 1,000 ft), a former depth of burial of 6.5 km is implied. Such former deep burial is not usually inferred for the Catskills, but it is consistent with the idea that the thick (about 6.4 km or 21,000 ft) Carboniferous strata of northeastern Pennsylvania formerly extended northeast far enough to bury the Catskills. The lack of metamorphism of the Paleozoic strata lying about 4.5 km beneath the Tully-correlative rocks and exposed in the adjacent Hudson Valley places low limits on the former geothermal gradient; this supports the concept of great depth of former burial of the Catskills. For example, 6.5 km of former burial and a geothermal gradient of 26°C/km imply a temperature of 307°C for the base of the Paleozoic. By contrast, only 1 km of former burial requires a geothermal gradient of 170°C/km, which would have subjected the base of the Paleozoic to a temperature of 955°C, far higher than the 600 to 650°C recently inferred for the Acadian-age metamorphism of the Taconic allochthon in southwestern Massachusetts and adjoining areas
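
The burial-depth arithmetic can be reproduced with a linear geotherm; the ~20°C mean surface temperature below is an assumption not stated in the abstract:

```python
# Back-of-the-envelope check of the burial-depth arithmetic,
# assuming a linear geotherm with a 20 C surface temperature.
surface_c = 20.0

def temp_at_depth(depth_km, gradient_c_per_km):
    """Temperature at a given burial depth for a linear geotherm."""
    return surface_c + gradient_c_per_km * depth_km

def depth_for_temp(temp_c, gradient_c_per_km):
    """Burial depth needed to reach a given maximum temperature."""
    return (temp_c - surface_c) / gradient_c_per_km

# 190 C maximum temperature at a 26 C/km gradient -> ~6.5 km of burial
print(round(depth_for_temp(190, 26), 1))      # 6.5

# Base of the Paleozoic lies 4.5 km beneath the Tully-correlative rocks:
print(round(temp_at_depth(6.5 + 4.5, 26)))    # 306 (abstract: 307 C)
print(round(temp_at_depth(1.0 + 4.5, 170)))   # 955
```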

  13. Are Cryptocurrencies the Future of Money? : Whether a Transition to Cryptocurrency, as National Currency of Sweden, Would be Possible and What it Would Imply for the Swedish Society

    OpenAIRE

    Gartz, Madeleine; Linderbrandt, Ida

    2017-01-01

    The underlying technology of cryptocurrencies is a broadly discussed subject. In Sweden, a growing interest for digital assets and payment methods can be observed. The fact that this coincides with an increasing acceptance for cryptocurrencies creates interesting possibilities. Some claim that cryptocurrency could be the future mean of payment. The objective of this report is therefore to examine whether a cryptocurrency could replace the Swedish krona, and what such a transition would imply ...

  14. Vehicle Reference Generator for Collision-Free Trajectories in Hazardous Maneuvers

    Directory of Open Access Journals (Sweden)

    Cuauhtémoc Acosta Lúa

    2018-01-01

    Full Text Available This paper presents a reference generator for ground vehicles, based on potential fields adapted to the case of vehicular dynamics. The reference generator produces signals to be tracked by the vehicle, corresponding to a trajectory that avoids collisions with obstacles. This generator integrates artificial forces from potential fields of the objects surrounding the vehicle. The reference generator is used with a controller to ensure tracking of the accident-free reference. This approach can be used for autonomous driving or for active control of manned vehicles. Simulation results, presented for autonomous driving, consider a scenario inspired by the so-called moose (or elk) test, with the presence of other collaborative vehicles.
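
As an illustration of the underlying potential-field idea (not the paper's vehicle-dynamics-adapted generator), a classical attractive/repulsive gradient step can be sketched as follows; all gains and geometry are illustrative:

```python
import numpy as np

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=2.0, rho0=5.0, step=0.1):
    """One gradient step on a classical attractive/repulsive potential."""
    force = k_att * (goal - pos)                 # attraction toward the goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < rho0:                      # repulsion only inside rho0
            force += k_rep * (1.0/d - 1.0/rho0) / d**2 * (pos - obs) / d
    return pos + step * force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]               # obstacle near the path
for _ in range(400):
    pos = potential_field_step(pos, goal, obstacles)
print(np.round(pos, 2))
```

The trajectory bends around the obstacle and settles at the goal; like all pure potential-field planners, this sketch can stall in local minima for unlucky obstacle layouts.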

  15. Collision-free inverse kinematics of a 7 link cucumber picking robot

    NARCIS (Netherlands)

    Henten, van E.J.; Schenk, E.J.J.; Willigenburg, van L.G.; Meuleman, J.; Barreiro, P.

    2008-01-01

    The paper presents results of research on inverse kinematics algorithms to be used in a functional model of a cucumber harvesting robot consisting of a redundant manipulator with one prismatic and six rotational joints (P6R). Within a first generic approach, the inverse kinematics problem was

  16. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasoor; Knudsen, Lars Ramkilde; Naderi, Majid

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic...

  17. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, there has been little research carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using a 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier Transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a symmetric logical operation that yields 0 if both binary pixels are equal and 1 otherwise; equivalently, XOR(pixel1, pixel2) = (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both security and performance aspects of the proposed method are analyzed, which prove that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is to force a continuous tone payload, a steganographic term, to map onto a balanced bit distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
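
The XOR-masking step can be illustrated with a minimal sketch. Here a SHA-256-seeded keystream stands in for the paper's Fourier-domain mask, and all names are illustrative; only the self-inverse property of XOR masking is being demonstrated:

```python
import hashlib
import numpy as np

def xor_mask_cipher(image, password):
    """XOR image bytes with a pseudo-random mask expanded from a hash.

    Illustrative only: the cited paper derives its mask from the Fourier
    transform of the image; here a SHA-256 counter-mode keystream stands in.
    """
    flat = image.ravel()
    seed = hashlib.sha256(password.encode()).digest()
    stream = bytearray()
    counter = 0
    while len(stream) < flat.size:               # expand digest to image size
        stream += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    mask = np.frombuffer(bytes(stream[:flat.size]), dtype=np.uint8)
    return (flat ^ mask).reshape(image.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc = xor_mask_cipher(img, "secret")
dec = xor_mask_cipher(enc, "secret")             # XOR is its own inverse
assert np.array_equal(dec, img)
```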

  18. Less than one implies zero

    NARCIS (Netherlands)

    Schwenninger, Felix L.; Zwart, Hans

    2015-01-01

    In this paper we show that from an estimate of the form sup_{t≥0} ||C(t) - cos(at)I|| < 1, we can conclude that C(t) equals cos(at)I for all t ≥ 0. Here (C(t))_{t≥0} is a strongly continuous cosine family on a Banach space.

  19. Does classical liberalism imply democracy?

    Directory of Open Access Journals (Sweden)

    David Ellerman

    2015-12-01

    Full Text Available There is a fault line running through classical liberalism as to whether or not democratic self-governance is a necessary part of a liberal social order. The democratic and non-democratic strains of classical liberalism are both present today, particularly in the United States. Many contemporary libertarians and neo-Austrian economists represent the non-democratic strain in their promotion of non-democratic sovereign city-states (start-up cities or charter cities). We will take the late James M. Buchanan as a representative of the democratic strain of classical liberalism. Since the fundamental norm of classical liberalism is consent, we must start with the intellectual history of the voluntary slavery contract, the coverture marriage contract, and the voluntary non-democratic constitution (or pactum subjectionis). Next we recover the theory of inalienable rights that descends from the Reformation doctrine of the inalienability of conscience through the Enlightenment (e.g. Spinoza and Hutcheson) to the abolitionist and democratic movements. Consent-based governments divide into those based on the subjects’ alienation of power to a sovereign and those based on the citizens’ delegation of power to representatives. Inalienable rights theory rules out that alienation in favor of delegation, so the citizens remain the ultimate principals and the form of government is democratic. Thus the argument concludes in agreement with Buchanan that the classical liberal endorsement of sovereign individuals acting in the marketplace generalizes to the joint action of individuals as the principals in their own organizations.

  20. A TMS study on the contribution of visual area V5 to the perception of implied motion in art and its appreciation.

    Science.gov (United States)

    Cattaneo, Zaira; Schiavi, Susanna; Silvanto, Juha; Nadal, Marcos

    2017-01-01

    Over the last decade, researchers have sought to understand the brain mechanisms involved in the appreciation of art. Previous studies reported increased activity in sensory processing regions for artworks that participants find more appealing. Here we investigated the intriguing possibility that activity in cortical area V5, a region in the occipital cortex mediating physical and implied motion detection, is related not only to the generation of a sense of motion from visual cues used in artworks, but also to the appreciation of those artworks. Art-naïve participants viewed a series of paintings and quickly judged whether or not the paintings conveyed a sense of motion, and whether or not they liked them. Triple-pulse TMS applied over V5 while viewing the paintings significantly decreased the perceived sense of motion, and also significantly reduced liking of abstract (but not representational) paintings. Our data demonstrate that V5 is involved in extracting motion information even when the objects whose motion is implied are pictorial representations (as opposed to photographs or film frames), and even in the absence of any figurative content. Moreover, our study suggests that, in the case of untrained people, V5 activity plays a causal role in the appreciation of abstract but not of representational art.

  1. On High-Frequency Topography-Implied Gravity Signals for a Height System Unification Using GOCE-Based Global Geopotential Models

    Science.gov (United States)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2017-03-01

    National height reference systems have conventionally been linked to the local mean sea level, observed at individual tide gauges. Due to variations in the sea surface topography, the reference levels of these systems are inconsistent, causing height datum offsets of up to ±1-2 m. For the unification of height systems, a satellite-based method is presented that utilizes global geopotential models (GGMs) derived from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE). In this context, height datum offsets are estimated within a least squares adjustment by comparing the GGM information with measured GNSS/leveling data. While the GNSS/leveling data comprises the full spectral information, GOCE GGMs are restricted to long wavelengths according to the maximum degree of their spherical harmonic representation. To provide accurate height datum offsets, it is indispensable to account for the remaining signal above this maximum degree, known as the omission error of the GGM. Therefore, a combination of the GOCE information with the high-resolution Earth Gravitational Model 2008 (EGM2008) is performed. The main contribution of this paper is to analyze the benefit, when high-frequency topography-implied gravity signals are additionally used to reduce the remaining omission error of EGM2008. In terms of a spectral extension, a new method is proposed that does not rely on an assumed spectral consistency of topographic heights and implied gravity as is the case for the residual terrain modeling (RTM) technique. In the first step of this new approach, gravity forward modeling based on tesseroid mass bodies is performed according to the Rock-Water-Ice (RWI) approach. In a second step, the resulting full spectral RWI-based topographic potential values are reduced by the effect of the topographic gravity field model RWI_TOPO_2015, thus, removing the long to medium wavelengths. By using the latest GOCE GGMs, the impact of topography-implied
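
The least-squares estimation of a height datum offset can be sketched on synthetic data. With a single datum zone the design matrix is a column of ones and the solution reduces to the mean misclosure; the offset value and noise level below are illustrative, not taken from the study:

```python
import numpy as np

# Toy least-squares estimate of a height datum offset. At each
# GNSS/leveling benchmark the misclosure
#     l_i = h_i (ellipsoidal) - H_i (leveled) - N_i (GGM geoid height)
# equals the constant datum offset plus noise.
rng = np.random.default_rng(1)
true_offset_m = 0.45                     # illustrative offset
misclosure = true_offset_m + rng.normal(0.0, 0.05, 50)

# Design matrix A = column of ones -> LS solution is the mean misclosure
A = np.ones((misclosure.size, 1))
offset_hat, *_ = np.linalg.lstsq(A, misclosure, rcond=None)
print(f"estimated offset: {float(offset_hat[0]):.3f} m")
```

In the multi-datum case the column of ones becomes an indicator matrix assigning each benchmark to its datum zone, and the omission-error reductions discussed above are applied to the N_i before forming the misclosures.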

  2. A new T2 lesion in a patient with the clinically isolated syndrome does not necessarily imply a conversion to multiple sclerosis.

    Science.gov (United States)

    Capone, Fioravante; Puthenparampil, Marco; Mallio, Carlo Augusto; Celia, Alessandra Ida; Florio, Lucia; Gallo, Paolo; Di Lazzaro, Vincenzo

    2018-01-01

    In the follow-up of patients with the clinically isolated syndrome, both clinical and MRI findings should be carefully evaluated by clinicians to avoid misinterpretation and inappropriate diagnosis of multiple sclerosis. We describe a case of a patient with a previous diagnosis of clinically isolated syndrome who developed a new asymptomatic brain lesion at the MRI follow-up. The careful evaluation of clinical history and radiological findings allowed the correct diagnosis of cocaine-associated ischemic stroke. Our case highlights that, in patients with the clinically isolated syndrome, the appearance of a new lesion on MRI does not necessarily imply a conversion to multiple sclerosis. Among "better explanations", ischemic lesions are of relevance and, in patients without typical risk factors for stroke, rarer causes such as cocaine use should be considered. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Systems with Many Degrees of Freedom: from Mean-Field Theories of Non-Fermi Liquid Behavior in Impurity Models to Implied Binomial Trees for Modeling Financial Markets

    Science.gov (United States)

    Barle, Stanko

    In this dissertation, two dynamical systems with many degrees of freedom are analyzed. One is the system of highly correlated electrons in the two-impurity Kondo problem. The other deals with building a realistic model of diffusion underlying financial markets. The simplest mean-field theory capable of mimicking the non-Fermi liquid behavior of the critical point in the two-impurity Kondo problem is presented. In this approach Landau's adiabaticity assumption--of a one-to-one correspondence between the low-energy excitations of the interacting and noninteracting systems--is violated through the presence of decoupled local degrees of freedom. These do not couple directly to external fields but appear indirectly in the physical properties leading, for example, to the log(T, omega) behavior of the staggered magnetic susceptibility. Also, as observed previously, the impurity spin correlation function ⟨S1 · S2⟩ = -1/4 is a consequence of the equal weights of the singlet and triplet impurity configurations at the critical point. In the second problem, a numerical model is developed to describe the diffusion of prices in the market. Implied binomial (or multinomial) trees are constructed to enable practical pricing of derivative securities in consistency with the existing market. The method developed here is capable of accounting for both the strike price and term structure of the implied volatility. It includes the correct treatment of interest rates and dividends, which proves robust even if these quantities are unusually large. The method is explained both as a set of individual innovations and, from a different perspective, as a consequence of a single plausible transformation from the tree of spot prices to the tree of futures prices.
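The implied-tree machinery builds on the standard binomial pricing lattice. As a hedged illustration, here is a plain Cox-Ross-Rubinstein tree with constant volatility and illustrative parameters (the basic lattice, not the implied-tree construction itself):

```python
import math

def crr_call(S0, K, r, sigma, T, steps):
    """Price a European call on a Cox-Ross-Rubinstein binomial tree (toy sketch)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor per step
    d = 1.0 / u                               # down factor
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs: j up-moves out of `steps`
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # backward induction through the tree
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500)
print(round(price, 2))  # converges toward the Black-Scholes value (~10.45)
```

An implied tree replaces the constant u, d, p with node-dependent values calibrated so the tree reproduces observed option prices across strikes and maturities.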

  4. The accretion of solar material onto white dwarfs: No mixing with core material implies that the mass of the white dwarf is increasing

    Directory of Open Access Journals (Sweden)

    Sumner Starrfield

    2014-02-01

    Full Text Available Cataclysmic Variables (CVs) are close binary star systems with one component a white dwarf (WD) and the other a larger cooler star that fills its Roche Lobe. The cooler star is losing mass through the inner Lagrangian point of the binary and some unknown fraction of this material is accreted by the WD. One consequence of the WDs accreting material is the possibility that they are growing in mass and will eventually reach the Chandrasekhar Limit. This evolution could result in a Supernova Ia (SN Ia) explosion and is designated the Single Degenerate Progenitor (SD) scenario. This paper is concerned with the SD scenario for SN Ia progenitors. One problem with the single degenerate scenario is that it is generally assumed that the accreting material mixes with WD core material at some time during the accretion phase of evolution and, since the typical WD has a carbon-oxygen (CO) core, the mixing results in large amounts of carbon and oxygen being brought up into the accreted layers. The presence of enriched carbon causes enhanced nuclear fusion and a Classical Nova explosion. Both observations and theoretical studies of these explosions imply that more mass is ejected than is accreted. Thus, the WD in a Classical Nova system is losing mass and cannot be a SN Ia progenitor. However, the composition in the nuclear burning region is important and, in new calculations reported here, the consequences to the WD of no mixing of accreted material with core material have been investigated so that the material involved in the explosion has only a Solar composition. WDs with a large range in initial masses and mass accretion rates have been evolved. I find that once sufficient material has been accreted, nuclear burning occurs in all evolutionary sequences and continues until a thermonuclear runaway (TNR) occurs and the WD either ejects a small amount of material or its radius grows to about 10^12 cm and the evolution is ended. In all cases where mass ejection occurs

  5. Dynamical critical scaling of electric field fluctuations in the greater cusp and magnetotail implied by HF radar observations of F-region Doppler velocity

    Directory of Open Access Journals (Sweden)

    M. L. Parkinson

    2006-03-01

    Full Text Available Akasofu's solar wind ε parameter describes the coupling of solar wind energy to the magnetosphere and ionosphere. Analysis of fluctuations in ε using model-independent scaling techniques, including the peaks of probability density functions (PDFs) and generalised structure function (GSF) analysis, showed that the fluctuations were self-affine (mono-fractal, single-exponent scaling) over 9 octaves of time scale from ~46 s to ~9.1 h. However, the peak scaling exponent α0 was a function of the fluctuation bin size, so caution is required when comparing the exponents for different data sets sampled in different ways. The same generic scaling techniques revealed the organisation and functional form of concurrent fluctuations in azimuthal magnetospheric electric fields implied by SuperDARN HF radar measurements of line-of-sight Doppler velocity, vLOS, made in the high-latitude austral ionosphere. The PDFs of vLOS fluctuation were calculated for time scales between 1 min and 256 min, and were sorted into noon sector results obtained with the Halley radar, and midnight sector results obtained with the TIGER radar. The PDFs were further sorted according to the orientation of the interplanetary magnetic field, as well as ionospheric regions of high and low Doppler spectral width. High spectral widths tend to occur at higher latitude, mostly on open field lines but also on closed field lines just equatorward of the open-closed boundary, whereas low spectral widths are concentrated on closed field lines deeper inside the magnetosphere. The vLOS fluctuations were most self-affine (i.e. like the solar wind ε parameter) on the high spectral width field lines in the noon sector ionosphere (i.e. the greater cusp), but suggested multi-fractal behaviour on closed field lines in the midnight sector (i.e. the central plasma sheet). Long tails in the PDFs imply that "microbursts" in ionospheric convection occur far more frequently, especially on open field lines, than can be
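The generalised structure function (GSF) analysis used above can be illustrated on a signal whose scaling is known. For Brownian motion (Hurst exponent H = 0.5) the exponents ζ(q) should come out near qH, i.e. linear in q, the signature of mono-fractal scaling. A toy check, not the radar analysis itself:

```python
import numpy as np

# GSF sketch: S_q(tau) = <|x(t+tau) - x(t)|^q> should scale as tau^zeta(q).
# For a self-affine (mono-fractal) signal, zeta(q) = q * H.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200_000))       # Brownian walk, H = 0.5

taus = np.array([2**k for k in range(1, 9)])  # time lags 2 .. 256 samples

def zeta(q):
    # fit log S_q(tau) = zeta(q) * log(tau) + const over the lag range
    Sq = [np.mean(np.abs(x[tau:] - x[:-tau])**q) for tau in taus]
    slope, _ = np.polyfit(np.log(taus), np.log(Sq), 1)
    return slope

print(round(zeta(1), 2), round(zeta(2), 2))   # ~0.5 and ~1.0 for H = 0.5
```

Multi-fractal behaviour, as reported for the midnight sector, shows up as ζ(q) bending away from a straight line in q.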

  7. The spatiotemporal dynamic analysis of the implied market information and characteristics of the correlation coefficient matrix of the international crude oil price returns

    International Nuclear Information System (INIS)

    Tian, Lixin; Ding, Zhenqi; Zhen, Zaili; Wang, Minggang

    2016-01-01

    The international crude oil market plays a crucial role in economies, and the studies of the correlation, risk and synchronization of the international crude oil market have important implications for the security and stability of the country, avoidance of business risk and people's daily lives. We investigate the information and characteristics of the international crude oil market (1999-2015) based on the random matrix theory (RMT). Firstly, we identify richer information in the largest eigenvalues deviating from RMT predictions for the international crude oil market; the international crude oil market can be roughly divided into ten different periods by the methods of eigenvectors and characteristic combination, and the implied market information of the correlation coefficient matrix is advanced. Secondly, we study the characteristics of the international crude oil market by the methods of system risk entropy, dynamic synchronous ratio, dynamic non-synchronous ratio and dynamic clustering algorithm. The results show that the international crude oil market is full of risk. The synchronization of the international crude oil market is very strong, and WTI and Brent occupy a very important position in the international crude oil market. (orig.)
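The random-matrix comparison described above can be sketched on synthetic return series: eigenvalues of a pure-noise correlation matrix stay inside the Marchenko-Pastur band, while a shared factor (a "market mode") pushes the largest eigenvalue well outside it. Sizes and factor strength below are illustrative, not the crude oil data:

```python
import numpy as np

# RMT sketch: compare correlation-matrix eigenvalues against the
# Marchenko-Pastur prediction for pure noise.
rng = np.random.default_rng(2)
N, T = 20, 2000                       # series count and length (illustrative)
noise = rng.normal(size=(T, N))       # independent "returns"
market = rng.normal(size=(T, 1))      # one factor shared by all series
returns = noise + 0.5 * market        # correlated returns

def max_eig(data):
    corr = np.corrcoef(data, rowvar=False)
    return np.linalg.eigvalsh(corr).max()

q = N / T
lam_plus = (1 + np.sqrt(q))**2        # Marchenko-Pastur upper edge
print(max_eig(noise) <= lam_plus + 0.1)   # pure noise: inside the band
print(max_eig(returns) > lam_plus)        # market mode: deviating eigenvalue
```

Eigenvalues above lam_plus carry genuine correlation structure; their eigenvectors are what the period-splitting analysis above inspects.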

  8. Health Complaints Associated with Poor Rental Housing Conditions in Arkansas: The Only State without a Landlord’s Implied Warranty of Habitability

    Science.gov (United States)

    Bachelder, Ashley E.; Stewart, M. Kate; Felix, Holly C.; Sealy, Neil

    2016-01-01

    Arkansas is the only U.S. state that does not have a landlord’s implied warranty of habitability, meaning tenants have a requirement for maintaining their rental properties at certain habitability standards, but landlords are not legally required to contribute to those minimum health and safety standards. This project assessed the possibility that this lack of landlord responsibility affects tenants’ perceived health. Using surveys and interviews, we collected self-reported data on the prevalence and description of problems faced by renters who needed household repairs from their landlords. Of almost 1,000 renters, one-third of them had experienced a problem with their landlord making needed repairs; and one-quarter of those had a health issue they attributed to their housing conditions. Common issues included problems with plumbing, heating, or cooling systems, and pest or rodent control. Reported health problems included elevated stress levels, breathing problems, headaches, high blood pressure, and bites or infections. Hispanic respondents and those with less than a high school education were both significantly more likely to report problems with their landlords not making repairs as requested. These data suggest that the lack of landlord requirements may negatively impact the condition of rental properties and, therefore, may negatively impact the health of Arkansas renters. PMID:27933288

  9. Health Complaints Associated with Poor Rental Housing Conditions in Arkansas: The Only State Without a Landlord’s Implied Warranty of Habitability

    Directory of Open Access Journals (Sweden)

    Ashley Bachelder

    2016-11-01

    Full Text Available Arkansas is the only U.S. state that does not have a landlord’s implied warranty of habitability, meaning tenants have a requirement for maintaining their rental properties at certain habitability standards, but landlords are not legally required to contribute to those minimum health and safety standards. This project assessed the possibility that this lack of landlord responsibility affects tenants’ perceived health. Using surveys and interviews, we collected self-reported data on the prevalence and description of problems faced by renters who needed household repairs from their landlords. Of almost 1000 renters, one third of them had experienced a problem with their landlord making needed repairs; and one-quarter of those had a health issue they attributed to their housing conditions. Common issues included problems with plumbing, heating or cooling systems, and pest or rodent control. Reported health problems included elevated stress levels, breathing problems, headaches, high blood pressure and bites or infections. Hispanic respondents and those with less than a high school education were both significantly more likely to report problems with their landlords not making repairs as requested. The data suggest that the lack of landlord requirements may negatively impact the condition of rental properties, and therefore may negatively impact the health of Arkansas renters.

  10. High temporal and spatial diversity in marine RNA viruses implies that they have an important role in mortality and structuring plankton communities

    Directory of Open Access Journals (Sweden)

    Julia Anne Gustavsen

    2014-12-01

    Full Text Available Viruses in the order Picornavirales infect eukaryotes, and are widely distributed in coastal waters. Amplicon deep-sequencing of the RNA-dependent RNA polymerase (RdRp) revealed diverse and highly uneven communities of picorna-like viruses in the coastal waters of British Columbia (B.C.), Canada. Almost 300 000 pyrosequence reads revealed 145 operational taxonomic units (OTUs) based on 95% sequence similarity at the amino-acid level. Each sample had between 24 and 71 OTUs and there was little overlap among samples. Phylogenetic analysis revealed that some clades of OTUs were only found at one site, whereas other groups included OTUs from all sites. Since most of these OTUs are likely from viruses that infect eukaryotic phytoplankton, and viral isolates infecting phytoplankton are strain-specific, each OTU probably arose from the lysis of a specific phytoplankton taxon. Moreover, the patchiness in OTU distribution, and the high turnover of viruses in the mixed layer, implies continuous infection and lysis by RNA viruses of a diverse array of eukaryotic phytoplankton taxa. Hence, these viruses are likely important elements structuring the phytoplankton community, and play a significant role in nutrient cycling and energy transfer.

  11. The spatiotemporal dynamic analysis of the implied market information and characteristics of the correlation coefficient matrix of the international crude oil price returns

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Lixin [Jiangsu University, Energy Development and Environmental Protection Strategy Research Center, Zhenjiang, Jiangsu (China); Nanjing Normal University, School of Mathematical Sciences, Nanjing, Jiangsu (China); Ding, Zhenqi; Zhen, Zaili [Jiangsu University, Energy Development and Environmental Protection Strategy Research Center, Zhenjiang, Jiangsu (China); Wang, Minggang [Nanjing Normal University, School of Mathematical Sciences, Nanjing, Jiangsu (China)

    2016-08-15

    The international crude oil market plays a crucial role in economies, and the studies of the correlation, risk and synchronization of the international crude oil market have important implications for the security and stability of the country, avoidance of business risk and people's daily lives. We investigate the information and characteristics of the international crude oil market (1999-2015) based on the random matrix theory (RMT). Firstly, we identify richer information in the largest eigenvalues deviating from RMT predictions for the international crude oil market; the international crude oil market can be roughly divided into ten different periods by the methods of eigenvectors and characteristic combination, and the implied market information of the correlation coefficient matrix is advanced. Secondly, we study the characteristics of the international crude oil market by the methods of system risk entropy, dynamic synchronous ratio, dynamic non-synchronous ratio and dynamic clustering algorithm. The results show that the international crude oil market is full of risk. The synchronization of the international crude oil market is very strong, and WTI and Brent occupy a very important position in the international crude oil market. (orig.)

  12. Contributions of China’s Wood-Based Panels to CO2 Emission and Removal Implied by the Energy Consumption Standards

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-07-01

    Full Text Available Life cycle analysis of wood-based panels in terms of CO2 flux can be used to quantitatively assess the climate change contributions of these materials. In this study, the annual CO2 flux between 1990 and 2015 was calculated through gate-to-gate life cycle analysis of wood-based panels. As implied by the energy consumption standards, China’s wood-based panels were carbon sources during the period 1990–2007, with an average contribution to CO2 emissions of 9.20 Mt/year. The implementation of new standards and the development of cleaner production technologies in China decreased the energy consumption per panel. China’s wood-based panels acted as a carbon sink between 2008 and 2015, with an average contribution to CO2 removal of 31.71 Mt/year. Plywood produced the largest contributions to the emission and removal of CO2, followed by fiberboard and particleboard. China’s wood-based panels, with good prospects and strong demands projected in the future, can potentially contribute to climate change mitigation.
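The gate-to-gate balance behind a source/sink classification like the one above can be sketched as back-of-envelope arithmetic: net flux is process-energy emissions minus the carbon stored in the panels. All numbers below are hypothetical placeholders, not the paper's data:

```python
# Toy gate-to-gate CO2 balance for an annual panel output.
production_m3 = 1.0e6          # panel output per year, m^3 (hypothetical)
energy_per_m3 = 1.2            # MWh consumed per m^3 of panel (hypothetical)
emission_factor = 0.8          # t CO2 per MWh of process energy (hypothetical)
carbon_stored_per_m3 = 0.9     # t CO2-equivalent locked in the wood (hypothetical)

emissions = production_m3 * energy_per_m3 * emission_factor
removal = production_m3 * carbon_stored_per_m3
net = emissions - removal      # positive => carbon source, negative => sink
print(f"net flux: {net/1e6:+.2f} Mt CO2/year")
```

Lowering energy_per_m3, as the new standards did, is what can flip the sign of the net flux from source to sink.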

  13. Analysis of Knickzones over a Coastal Mountain Range of the Korean Peninsula Implies Intensive Uplifts during the Opening of the East Sea

    Science.gov (United States)

    Byun, J.; Paik, K.

    2017-12-01

    coastal mountain range had not persisted over the Pliocene, and instead the coastal mountain range had developed mostly during the opening of the East Sea, implying that the formation of the coastal mountain range is mainly attributed to the drifting of the Japanese Islands from the Korean Peninsula and consequent opening of the East Sea.

  14. Biomimetically grown apatite spheres from aggregated bioglass nanoparticles with ultrahigh porosity and surface area imply potential drug delivery and cell engineering applications.

    Science.gov (United States)

    El-Fiqi, Ahmed; Buitrago, Jennifer O; Yang, Sung Hee; Kim, Hae-Won

    2017-09-15

    Here we communicate the generation of biomimetically grown apatite spheres from aggregated bioglass nanoparticles and the potential properties applicable for drug delivery and cell/tissue engineering. Ion-releasing nanoparticulates of bioglass (85% SiO2-15% CaO) in a mineralizing medium show an intriguing dynamic phenomenon - aggregation, mineralization to apatite, integration and growth into micron-sized (1.5-3 μm) spheres. During the progressive ionic dissolution/precipitation reactions, nano-to-micro morphology, glass-to-crystal composition, and the physico-chemical properties (porosity, surface area, and charge) change dynamically. With increasing reaction period, the apatite becomes more crystallized with increased crystallinity and crystal size, and gets a composition closer to the stoichiometry. The developed microspheres exhibit hierarchical surface nanostructure, negative charge (ζ-potential of -20 mV), and ultrahigh mesoporosity (mesopore size of 6.1 nm, and the resultant surface area of 63.7 m²/g and pore volume of 0.153 cm³/g) at 14 days of mineralization, which are even higher than those of the precursor bioglass nanoparticles. Thanks to these properties, the biomimetic mineral microspheres take up biological molecules effectively, i.e., the loading capacity of positively charged protein is over 10%. Of note, the release is highly sustainable at a constant rate, i.e., profiling almost 'zero-order' kinetics for 4 weeks, suggesting the potential usefulness as protein delivery systems. The biomimetic mineral microspheres hold some remnant Si in the core region, and release calcium, phosphate, and silicate ions over the test period, implying long-term ionic-related therapeutic functions. The mesenchymal stem cells favour the biomimetic spheres with an excellent viability. Due to the merit of sizes (a few micrometers), the spheres can be intercalated into cells, mediating cellular interactions in 3D cell-spheroid engineering, and also can stimulate osteogenic

  15. Collision-free inverse kinematics of the redundant seven link manipulator used in a cucumber harvesting robot

    NARCIS (Netherlands)

    Henten, van E.J.; Schenk, E.J.J.; Willigenburg, van L.G.; Meuleman, J.; Barreiro, P.

    2010-01-01

    The paper presents results of research on an inverse kinematics algorithm that has been used in a functional model of a cucumber-harvesting robot consisting of a redundant P6R manipulator. Within a first generic approach, the inverse kinematics problem was reformulated as a non-linear programming

  16. Study and treatment of situations implying radon

    International Nuclear Information System (INIS)

    Robe, M.Ch.

    2005-01-01

    Radon is a radioactive gas of natural origin. It is produced by the disintegration of uranium and radium present in soils, and comes mainly from granitic and volcanic subsoils. Radon can accumulate in buildings. It is the principal source of natural radiation exposure, and the second-largest source overall after medical exposures. It is the only source of radiation on which humans can act. Ventilation and airtightness measures are solutions to reduce radon concentration. (N.C.)

  17. Meteorite Dichotomy Implies that Jupiter Formed Early

    Science.gov (United States)

    Kruijer, T. S.; Burkhardt, C.; Budde, G.; Kleine, T.

    2018-05-01

    Meteorites derive from two distinct nebular reservoirs that co-existed and remained spatially separated between 1 and 3–4 Ma after CAIs. This can most easily be explained if Jupiter acted as a barrier and formed early, within less than 1 Ma.

  18. Index options : Pricing, implied densities and returns

    NARCIS (Netherlands)

    Boes, M.J.

    2006-01-01

    Chapter 2 gives an overview of the literature that is directly related to the topics studied in this thesis. In Chapter 3 the impact of overnight periods on option prices is examined by estimating an option pricing model that takes overnight closures of exchanges explicitly into account. Chapter 4

  19. Endogenous population growth may imply chaos.

    Science.gov (United States)

    Prskawetz, A; Feichtinger, G

    1995-01-01

    The authors consider a discrete-time neoclassical growth model with an endogenous rate of population growth. The resulting one-dimensional map for the capital intensity has a tilted z-shape. Using the theory of nonlinear dynamical systems, they obtain numerical results on the qualitative behavior of time paths for changing parameter values. Besides stable and periodic solutions, erratic time paths may result. In particular, myopic and far-sighted economies--assumed to be characterized by low and high savings rate respectively--are characterized by stable per capita capital stocks, while solutions with chaotic windows exist between these two extremes.
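The route from stable to erratic time paths in a one-dimensional map can be illustrated numerically. The logistic map stands in here for the paper's tilted z-shaped capital-intensity map (an illustrative substitute, not the growth model itself); a positive Lyapunov exponent signals chaos:

```python
import math

def logistic(x, r):
    return r * x * (1 - x)

def lyapunov(r, x0=0.3, n=100_000, burn=1000):
    """Average log-derivative along the orbit: positive => chaotic dynamics."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))  # log |f'(x)| at each step
        x = logistic(x, r)
    return total / n

print(round(lyapunov(3.2), 2))  # negative: orbit settles on a stable 2-cycle
print(round(lyapunov(4.0), 2))  # ~0.69 (= ln 2): chaotic regime
```

Sweeping r between these values reproduces the familiar period-doubling cascade, analogous to moving between the myopic and far-sighted savings regimes described above.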

  20. Climate change: believing and seeing implies adapting.

    Science.gov (United States)

    Blennow, Kristina; Persson, Johannes; Tomé, Margarida; Hanewinkel, Marc

    2012-01-01

    Knowledge of factors that trigger human response to climate change is crucial for effective climate change policy communication. Climate change has been claimed to have low salience as a risk issue because it cannot be directly experienced. Still, personal factors such as strength of belief in local effects of climate change have been shown to correlate strongly with responses to climate change and there is a growing literature on the hypothesis that personal experience of climate change (and/or its effects) explains responses to climate change. Here we provide, using survey data from 845 private forest owners operating in a wide range of bio-climatic as well as economic-social-political structures in a latitudinal gradient across Europe, the first evidence that the personal strength of belief and perception of local effects of climate change, highly significantly explain human responses to climate change. A logistic regression model was fitted to the two variables, estimating expected probabilities ranging from 0.07 (SD ± 0.01) to 0.81 (SD ± 0.03) for self-reported adaptive measures taken. Adding socio-demographic variables improved the fit, estimating expected probabilities ranging from 0.022 (SD ± 0.008) to 0.91 (SD ± 0.02). We conclude that to explain and predict adaptation to climate change, the combination of personal experience and belief must be considered.
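The two-predictor logistic regression described above can be sketched end to end on synthetic data (illustrative coefficients and simulated responses, not the actual survey of 845 forest owners):

```python
import numpy as np

# Sketch: fit "adaptive measure taken" on strength of belief and perceived
# local effects, by plain gradient ascent on the log-likelihood, then read
# off expected probabilities at the extremes of both predictors.
rng = np.random.default_rng(3)
n = 845
belief = rng.uniform(0, 1, n)
perception = rng.uniform(0, 1, n)
true_logit = -2.5 + 3.0 * belief + 2.0 * perception   # assumed true model
y = rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))

X = np.column_stack([np.ones(n), belief, perception])
w = np.zeros(3)
for _ in range(10_000):                    # gradient ascent, step size 1.0
    p = 1 / (1 + np.exp(-X @ w))
    w += 1.0 * X.T @ (y - p) / n

p_low = 1 / (1 + np.exp(-w @ [1, 0, 0]))   # no belief, no perceived effects
p_high = 1 / (1 + np.exp(-w @ [1, 1, 1]))  # strong belief and perception
print(round(p_low, 2), round(p_high, 2))
```

The spread between p_low and p_high mirrors the reported range of expected probabilities (0.07 to 0.81) for self-reported adaptive measures.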

  1. Climate change: believing and seeing implies adapting.

    Directory of Open Access Journals (Sweden)

    Kristina Blennow

    Full Text Available Knowledge of factors that trigger human response to climate change is crucial for effective climate change policy communication. Climate change has been claimed to have low salience as a risk issue because it cannot be directly experienced. Still, personal factors such as strength of belief in local effects of climate change have been shown to correlate strongly with responses to climate change and there is a growing literature on the hypothesis that personal experience of climate change (and/or its effects) explains responses to climate change. Here we provide, using survey data from 845 private forest owners operating in a wide range of bio-climatic as well as economic-social-political structures in a latitudinal gradient across Europe, the first evidence that the personal strength of belief and perception of local effects of climate change highly significantly explain human responses to climate change. A logistic regression model was fitted to the two variables, estimating expected probabilities ranging from 0.07 (SD ± 0.01) to 0.81 (SD ± 0.03) for self-reported adaptive measures taken. Adding socio-demographic variables improved the fit, estimating expected probabilities ranging from 0.022 (SD ± 0.008) to 0.91 (SD ± 0.02). We conclude that to explain and predict adaptation to climate change, the combination of personal experience and belief must be considered.

  2. Handicap principle implies emergence of dimorphic ornaments.

    Science.gov (United States)

    Clifton, Sara M; Braun, Rosemary I; Abrams, Daniel M

    2016-11-30

    Species spanning the animal kingdom have evolved extravagant and costly ornaments to attract mating partners. Zahavi's handicap principle offers an elegant explanation for this: ornaments signal individual quality, and must be costly to ensure honest signalling, making mate selection more efficient. Here, we incorporate the assumptions of the handicap principle into a mathematical model and show that they are sufficient to explain the heretofore puzzling observation of bimodally distributed ornament sizes in a variety of species. © 2016 The Author(s).

  3. Anesthesiological ethics: can informed consent be implied?

    Science.gov (United States)

    Spike, Jeffrey R

    2012-01-01

    Surgical ethics is a well-recognized field in clinical ethics, distinct from medical ethics. It includes at least a dozen important issues common to surgery that do not exist in internal medicine simply because of the differences in their practices. But until now there has been a tendency to include ethical issues of anesthesiology as a part of surgical ethics. This may mask the importance of ethical issues in anesthesiology, and even help perpetuate an unfortunate view that surgeons are "captain of the ship" in the operating theater (leaving anesthesiologists in a subservient role). We will have a better ethical understanding if we see surgery and anesthesia as two equal partners, ethically as well as in terms of patient care. Informed consent is one such issue, but the field is not limited to it. Even on the topic of what type of anesthesia to use, anesthesiologists have often felt subsumed to the surgeon's preferences. This commentary takes the case study and uses it as an exemplar for this very claim: it is time to give due recognition to a new field in clinical ethics, ethics in anesthesia.

  4. Applied and implied semantics in crystallographic publishing

    Directory of Open Access Journals (Sweden)

    McMahon Brian

    2012-08-01

    Full Text Available Background: Crystallography is a data-rich, software-intensive scientific discipline with a community that has undertaken direct responsibility for publishing its own scientific journals. That community has worked actively to develop information exchange standards allowing readers of structure reports to access directly, and interact with, the scientific content of the articles. Results: Structure reports submitted to some journals of the International Union of Crystallography (IUCr) can be automatically validated and published through an efficient and cost-effective workflow. Readers can view and interact with the structures in three-dimensional visualization applications, and can access the experimental data should they wish to perform their own independent structure solution and refinement. The journals also layer on top of this facility a number of automated annotations and interpretations to add further scientific value. Conclusions: The benefits of semantically rich information exchange standards have revolutionised the scholarly publishing process for crystallography, and establish a model relevant to many other physical science disciplines.

  5. Devaney chaos plus shadowing implies distributional chaos.

    Science.gov (United States)

    Li, Jian; Li, Jie; Tu, Siming

    2016-09-01

    We explore connections among the regional proximal relation, the asymptotic relation, and the distal relation for a topological dynamical system with the shadowing property and show that if a Devaney chaotic system has the shadowing property then it is distributionally chaotic.

  6. Does staff diversity imply openness to diversity?

    DEFF Research Database (Denmark)

    Lauring, Jakob; Selmer, Jan

    2013-01-01

    Purpose – Post-secondary educational organizations are currently some of the most diverse settings to be found. However, few educational studies have dealt with staff diversity and hardly any have looked outside the USA. The purpose of this paper is to present a study of members of international...... university departments in Denmark. The authors set out to investigate the relationship between different types of staff diversity and openness to diversity in terms of linguistic, visible, value, and informational heterogeneity. Design/methodology/approach – This study uses responses from 489 staff members......, was unrelated or negatively associated with positive diversity attitudes. Originality/value – Few studies deal with the role of staff diversity and no prior studies the authors know of have examined the link between diversity types and openness to diversity....

  7. A Hull and White Formula for a General Stochastic Volatility Jump-Diffusion Model with Applications to the Study of the Short-Time Behavior of the Implied Volatility

    Directory of Open Access Journals (Sweden)

    Elisa Alòs

    2008-01-01

    Full Text Available We obtain a Hull and White type formula for a general jump-diffusion stochastic volatility model, where the involved stochastic volatility process is correlated not only with the Brownian motion driving the asset price but also with the asset price jumps. Towards this end, we establish an anticipative Itô's formula, using Malliavin calculus techniques for Lévy processes on the canonical space. As an application, we show that the dependence of the volatility process on the asset price jumps has no effect on the short-time behavior of the at-the-money implied volatility skew.

  8. E Learner Face Identification with Local Sensitive Hashing

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... network transmission capacity to send the feature over the network, and to be able to index ... rate of diffusion and energy expertise, provide an innovative methodology for wireless multimedia communications that joins digital ...
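
    The truncated abstract does not show the hashing step itself. As a minimal illustration of the general technique the title names, the random-hyperplane locality-sensitive hashing sketch below maps feature vectors to short bit signatures so that similar vectors receive similar signatures; the feature vectors and dimensions are hypothetical, not taken from the paper.

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    """Draw `n_bits` random Gaussian hyperplane normals in `dim` dimensions."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec, planes):
    """One bit per hyperplane: which side of the plane `vec` falls on."""
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def hamming(a, b):
    """Number of differing signature bits."""
    return sum(x != y for x, y in zip(a, b))

planes = random_hyperplanes(dim=8, n_bits=16)
face_a = [0.9, 0.1, 0.4, 0.7, 0.2, 0.8, 0.3, 0.5]        # hypothetical feature vector
face_b = [0.88, 0.12, 0.42, 0.69, 0.21, 0.79, 0.3, 0.52]  # near-duplicate of face_a
face_c = [-0.5, 0.9, -0.2, 0.1, -0.7, 0.3, -0.9, 0.4]     # unrelated vector

sig_a = lsh_signature(face_a, planes)
sig_b = lsh_signature(face_b, planes)
sig_c = lsh_signature(face_c, planes)
# Similar inputs collide on most bits; dissimilar inputs do not.
close, far = hamming(sig_a, sig_b), hamming(sig_a, sig_c)
```

    Because nearby vectors agree on most signature bits, candidate matches can be indexed and compared via short signatures instead of full feature vectors, which is what makes LSH attractive under the bandwidth constraints the abstract mentions.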

  9. Efficient Constructions for One-way Hash Chains

    National Research Council Canada - National Science Library

    Hu, Yih-Chun; Jakobsson, Markus; Perrig, Adrian

    2003-01-01

    .... Our first construction, the Sandwich-chain, provides a smaller bandwidth overhead for one-way chain values, and enables efficient verification of one-way chain values if the trusted one-way chain value is far away...
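
    The abstract presupposes the standard one-way hash chain that such constructions refine: repeatedly hash a secret seed, publish the final value as a trusted anchor, and later reveal earlier chain values, which anyone can verify by re-hashing forward. A minimal sketch of that standard construction (not the paper's Sandwich-chain itself):

```python
import hashlib

def build_chain(seed: bytes, n: int) -> list:
    """Compute v_0..v_n with v_i = SHA-256(v_{i-1}); v_n is the trusted anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(value: bytes, anchor: bytes, max_steps: int) -> bool:
    """Accept `value` if hashing it at most `max_steps` times reaches `anchor`."""
    h = value
    for _ in range(max_steps + 1):
        if h == anchor:
            return True
        h = hashlib.sha256(h).digest()
    return False

chain = build_chain(b"secret-seed", 100)
anchor = chain[-1]        # published up front as the trusted commitment
revealed = chain[97]      # disclosed later, e.g. one value per time interval
assert verify(revealed, anchor, 100)            # three re-hashes reach the anchor
assert not verify(b"forged value", anchor, 100)
```

    Verifying a value k steps from the trusted value costs k hash evaluations; reducing that traversal and verification cost when the trusted value is "far away" is exactly the problem the record's constructions address.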

  10. A Tree Locality-Sensitive Hash for Secure Software Testing

    Science.gov (United States)

    2017-09-14

    state because it provides an interface to useful program information such as symbols information, system calls, memory allocations, and data flow. To...execution states together, this was augmented with TLSH, and a command-line option exposed to select whether to use TLSH. When merging two states, flow ...a topological search of the control-flow graph (CFG), attempting to merge states at control flow join points. Whether two states are merged or not

  11. On-line Ciphers and the Hash-CBC Constructions

    DEFF Research Database (Denmark)

    Bellare, M.; Boldyreva, A.; Knudsen, Lars Ramkilde

    2012-01-01

    We initiate a study of on-line ciphers. These are ciphers that can take input plaintexts of large and varying lengths and will output the i th block of the ciphertext after having processed only the first i blocks of the plaintext. Such ciphers permit length-preserving encryption of a data stream...... with only a single pass through the data. We provide security definitions for this primitive and study its basic properties. We then provide attacks on some possible candidates, including CBC with fixed IV. We then provide two constructions, HCBC1 and HCBC2, based on a given block cipher E and a family...... of computationally AXU functions. HCBC1 is proven secure against chosen-plaintext attacks assuming that E is a PRP secure against chosen-plaintext attacks, while HCBC2 is proven secure against chosen-ciphertext attacks assuming that E is a PRP secure against chosen-ciphertext attacks....

  12. Metacognition does not imply self-reflection, but it does imply function.

    Science.gov (United States)

    Hoffman, Megan L; Schwartz, Bennett L

    2014-05-01

    Is self-reflection necessary for metacognition to occur? Like Kornell (2014, pp. 143-149), we struggle with this question. If human metacognition is not always self-reflective, why should we expect animals to be so? We suggest that one way to pursue metacognition in animals is to examine its ecological and evolutionary relevance. ©2014 APA, all rights reserved.

  13. What SMART Technology implies for the industry of the future

    CSIR Research Space (South Africa)

    Annamalai, Leeandran

    2017-10-01

    Full Text Available This presentation discusses how SMART technology can influence the industry of the future. Topics touched on are software enabled machines able to respond relevantly to real world events, the industrial Internet of Things, and machines measurements...

  14. Banal and Implied Forms of Violence in Levinas' Phenomenological Ethics

    OpenAIRE

    Fleurdeliz R. Altez

    2007-01-01

    Despite his final call for peace and "the wisdom of love", Emmanuel Levinas inevitably spoke of violence, and perhaps spoke even more of it. His call for infinite responsibility is actually crystallized through discourse on violence and suffering. We may say that these themes served as catalysts to the standing theory and, ethically, to any responsible Self. Violence, at least as a concept, poses itself as a significant presence to Levinas' plantilla while it reaches unexplored dimensions tha...

  15. Does the evolutionary conservation of microsatellite loci imply function?

    Energy Technology Data Exchange (ETDEWEB)

    Shriver, M.D.; Deka, R.; Ferrell, R.E. [Univ. of Pittsburgh, PA (United States)] [and others]

    1994-09-01

    Microsatellites are highly polymorphic tandem arrays of short (1-6 bp) sequence motifs which have been found widely distributed in the genomes of all eukaryotes. We have analyzed allele frequency data on 16 microsatellite loci typed in the great apes (human, chimp, orangutan, and gorilla). The majority of these loci (13) were isolated from human genomic libraries; three were cloned from chimpanzee genomic DNA. Most of these loci are not only present in all ape species, but are polymorphic with comparable levels of heterozygosity and have alleles which overlap in size. The extent of divergence of allele frequencies among these four species was studied using the stepwise-weighted genetic distance (Dsw), which was previously shown to conform to linearity with evolutionary time since divergence for loci where mutations occur in a stepwise fashion. The phylogenetic tree of the great apes constructed from this distance matrix was consistent with the expected topology, with a high bootstrap confidence (82%) for the human/chimp clade. However, the allele frequency distributions of these species are 10 times more similar to each other than expected when they were calibrated with a conservative estimate of the time since separation of humans and the apes. These results are in agreement with sequence-based surveys of microsatellites which have demonstrated that they are highly (90%) conserved over short periods of evolutionary time (< 10 million years) and moderately (30%) conserved over long periods of evolutionary time (> 60-80 million years). This evolutionary conservation has prompted some authors to speculate that there are functional constraints on microsatellite loci. In contrast, the presence of directional bias of mutations with constraints and/or selection against aberrant-sized alleles can explain these results.

  16. Modeling Autoregressive Processes with Moving-Quantiles-Implied Nonlinearity

    Directory of Open Access Journals (Sweden)

    Isao Ishida

    2015-01-01

    Full Text Available We introduce and investigate some properties of a class of nonlinear time series models based on the moving sample quantiles in the autoregressive data generating process. We derive a test to detect this type of nonlinearity. Using the daily realized volatility data of Standard & Poor's 500 (S&P 500) and several other indices, we obtained good performance using these models in an out-of-sample forecasting exercise compared with forecasts based on the usual linear heterogeneous autoregressive and other models of realized volatility.
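
    The abstract does not give the model equations. As an illustrative (assumed) instance of an autoregressive process with moving-quantile-implied nonlinearity, one can let the next value depend on both the last observation and a moving sample quantile of the recent window; the coefficients, window length, and quantile level below are arbitrary choices, and the paper's specification may differ.

```python
import random
import statistics

def moving_quantile_ar(n, a=0.5, b=0.3, window=20, q=0.9, seed=1):
    """Simulate x_t = a*x_{t-1} + b*Q_q(last `window` values) + eps_t,
    where Q_q is the moving sample quantile (an assumed, illustrative form)."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(window)]   # burn-in values
    for _ in range(n):
        recent = x[-window:]
        # statistics.quantiles with n=100 returns the 99 percentile cut points.
        mq = statistics.quantiles(recent, n=100)[int(q * 100) - 1]
        x.append(a * x[-1] + b * mq + rng.gauss(0, 1))
    return x[window:]

series = moving_quantile_ar(500)
```

    Because the quantile term switches which past observation drives the dynamics, the process is nonlinear in its history even though each step is a linear combination, which is the kind of behavior a nonlinearity test would target.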

  17. Lorentz covariance ‘almost’ implies electromagnetism and more

    International Nuclear Information System (INIS)

    Sobouti, Y

    2015-01-01

    Beginning from two simple assumptions, (i) the speed of light is a universal constant, or its equivalent, the spacetime intervals are Lorentz invariant, and (ii) there are mutually interacting particles, with a covariant ‘source-field’ equation, one arrives at a class of field equations of which the standard electromagnetism (EM) and electrodynamics are special cases. The formalism, depending on how one formulates the source-field equation, allows one to speculate magnetic monopoles, massive photons, nonlinear EMs, and more. (paper)

  18. Quantum communication complexity advantage implies violation of a Bell inequality

    NARCIS (Netherlands)

    H. Buhrman (Harry); L. Czekaj (Lłukasz); A. Grudka (Andrzej); M. Horodecki (Michalł); P. Horodecki (Pawelł); M. Markiewicz (Marcin); F. Speelman (Florian); S. Strelchuk (Sergii)

    2016-01-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that

  19. Do superheavy elements imply the existence of black holes

    International Nuclear Information System (INIS)

    Pringle, J.E.; Dearborn, D.S.P.; Fabian, A.C.

    1976-01-01

    Some comments are offered on the question of where superheavy elements, such as elements 116, 124 and 126, are likely to have been formed. Most of these elements are thought to have been produced under conditions of explosive nucleosynthesis by what is known as the 'r-process', and particularly in conventional supernova explosions, but it is stated that the ability of the r-process to produce superheavy elements is very uncertain, and the conditions necessary for synthesis of these elements are difficult to realise in astrophysical situations. It is thought that superheavy elements exist in the outer layers of neutron stars, and ideal conditions for the production of superheavy nuclei, such as high neutron flux and rapid β decays, occur in the disruption of a neutron star. Such disruption is possible in two ways, both of which involve a black hole. It is likely that a neutron star is disrupted when it accretes sufficient material for its mass to exceed the maximum mass for stability, and it then has no alternative but to collapse to form a black hole and it seems possible that some of the outer layers are thrown off during the process. It is thus argued that the most likely site for the production of superheavy elements is in the surface layers of a neutron star, and the most plausible means by which these layers can be returned to the interstellar medium involves the intervention or formation of a black hole. (U.K.)

  20. Green product development : What does the country product space imply?

    NARCIS (Netherlands)

    Fraccascia, Luca; Giannoccaro, Ilaria; Albino, Vito

    This paper contributes to green product development by identifying the green products with the highest potential for growth in a country. To address our aim, we use the concept of product proximity and product space and, borrowing from the results of recent studies on complexity economics, we

  1. School Finance Reform: Do Equalized Expenditures Imply Equalized Teacher Salaries?

    Science.gov (United States)

    Streams, Meg; Butler, J. S.; Cowen, Joshua; Fowles, Jacob; Toma, Eugenia F.

    2011-01-01

    Kentucky is a poor, relatively rural state that contrasts greatly with the relatively urban and wealthy states typically the subject of education studies employing large-scale administrative data. For this reason, Kentucky's experience of major school finance and curricular reform is highly salient for understanding teacher labor market dynamics.…

  2. Modeling of CPDOs - Identifying Optimal and Implied Leverage

    DEFF Research Database (Denmark)

    Dorn, Jochen

    famous notably by Standard & Poor's rating model error which illustrated that closed-form analytical pricing is necessary in order to evaluate and understand complex derivatives. This article aims to shed a light on CPDOs specific structural enhancements and mechanisms. The author quantifies inherent...... risks and provides a dynamic closed-form pricing formula....

  3. Modeling of CPDOs - Identifying optimal and implied leverage

    DEFF Research Database (Denmark)

    Dorn, Jochen

    2010-01-01

    by Standard & Poor's rating model error which illustrated that closed-form analytical pricing is necessary in order to evaluate and understand complex derivatives. This article aims to shed a light on CPDOs' specific structural enhancements and mechanisms. We quantify inherent risks and provide a dynamic...

  4. Modeling of CPDOs - Identifying Optimal and Implied Leverage

    DEFF Research Database (Denmark)

    Dorn, Jochen

    When the subprime crisis started emerging, collateralized products based on Credit Default Swap (CDS) exposures combined with security features seemed to be a more rational alternative to classic asset backed securities. Constant Proportion Collateralized Debt Obligations (CPDOs) are a mixture...... risks and provides a dynamic closed-form pricing formula....

  5. I Fought the Law: Transgressive Play and The Implied Player

    DEFF Research Database (Denmark)

    Aarseth, Espen J.

    2007-01-01

    This paper is an attempt to understand Game Studies through the contested notion of the “player” both inside and outside “the game object” – that is the object that game users perceive and respond to when they play. Building on Hans-Georg Gadamer’s notion of games as a subject that “masters...... the players”, the paper will go beyond the traditional split between the social sciences’ real players and the aesthetics/humanities critical author-as-player, and present a theory of the player and player studies that incorporates the complex tensions between the real, historical player and the game’s human...... components. Since games are both aesthetic and social phenomena, a theory of the player must combine both social and aesthetic perspectives to be successful. The tension between the humanities and the social sciences over who controls the idea of the player can be found mirrored also in the struggle between...

  6. The Phenomena Implied by the New Economy from Statistics Perspective

    Directory of Open Access Journals (Sweden)

    Giani GRADINARU

    2006-01-01

    Full Text Available The rapidity with which the informational society is transforming into an information and knowledge society determines a perspective on the New Economy that considers the Internet market, the effect of Internet information on all economic agents, and the effect of knowledge as an economic factor. This imposes the recognition of intangible goods in the creation of economic value, as well as the requirements for achieving a lasting society, which is possible only within the knowledge society, and which imposes on the economy new technologies and, most importantly, changes of orientation with respect to classic economic thinking.

  7. Chaotic expression dynamics implies pluripotency: when theory and experiment meet

    Directory of Open Access Journals (Sweden)

    Furusawa Chikara

    2009-05-01

    Full Text Available Abstract Background: During normal development, cells undergo a unidirectional course of differentiation that progressively decreases the number of cell types they can potentially become. Pluripotent stem cells can differentiate into several types of cells, but terminally differentiated cells cannot differentiate any further. A fundamental problem in stem cell biology is the characterization of the difference in cellular states, e.g., gene expression profiles, between pluripotent stem cells and terminally differentiated cells. Presentation of the hypothesis: To address the problem, we developed a dynamical systems model of cells with intracellular protein expression dynamics and interactions with each other. According to extensive simulations, cells with irregular (chaotic) oscillations in gene expression dynamics have the potential to differentiate into other cell types. During development, such complex oscillations are lost successively, leading to a loss of pluripotency. These simulation results, together with recent single-cell-level measurements in stem cells, led us to the following hypothesis regarding pluripotency: Chaotic oscillation in the expression of some genes leads to cell pluripotency and affords cellular state heterogeneity, which is supported by itinerancy over quasi-stable states. Differentiation stabilizes these states, leading to a loss of pluripotency. Testing the hypothesis: To test the hypothesis, it is crucial to measure the time course of gene expression levels at the single-cell level by fluorescence microscopy and fluorescence-activated cell sorting (FACS) analysis. By analyzing the time series of single-cell-level expression data, one can distinguish whether the variation in protein expression level over time is due only to stochasticity in expression dynamics or originates from the chaotic dynamics inherent to cells, as our hypothesis predicts.
By further analyzing the expression in differentiated cell types, one can examine whether the loss of pluripotency is accompanied by a loss of oscillation. Implications of the hypothesis: Recovery of pluripotency from determined cells is a long-standing aspiration, from both scientific and clinical perspectives. Our hypothesis suggests a feasible route to recover the potential to differentiate, i.e., by increasing the variety of expressed genes to restore chaotic expression dynamics, as is consistent with the recent generation of induced pluripotent stem (iPS) cells. Reviewers: This article was reviewed by David Krakauer, Jeroen van Zon (nominated by Rob de Boer), and William S. Hlavacek.

  8. Quantum communication complexity advantage implies violation of a Bell inequality

    Science.gov (United States)

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-01-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600

  9. Does boundary quantum mechanics imply quantum mechanics in the bulk?

    Science.gov (United States)

    Kabat, Daniel; Lifschytz, Gilad

    2018-03-01

    Perturbative bulk reconstruction in AdS/CFT starts by representing a free bulk field ϕ^(0) as a smeared operator in the CFT. A series of 1/N corrections must be added to ϕ^(0) to represent an interacting bulk field ϕ. These corrections have been determined in the literature from several points of view. Here we develop a new perspective. We show that correlation functions involving ϕ^(0) suffer from ambiguities due to analytic continuation. As a result ϕ^(0) fails to be a well-defined linear operator in the CFT. This means bulk reconstruction can be understood as a procedure for building up well-defined operators in the CFT which thereby singles out the interacting field ϕ. We further propose that the difficulty with defining ϕ^(0) as a linear operator can be re-interpreted as a breakdown of associativity. Presumably ϕ^(0) can only be corrected to become an associative operator in perturbation theory. This suggests that quantum mechanics in the bulk is only valid in perturbation theory around a semiclassical bulk geometry.

  10. Implied Stopping Rules for American Basket Options from Markovian Projection

    KAUST Repository

    Bayer, Christian; Häppölä, Juho; Tempone, Raul

    2017-01-01

    This work addresses the problem of pricing American basket options in a multivariate setting, which includes, among others, the Bachelier and the Black-Scholes models. In high dimensions, nonlinear partial differential equation methods for solving the problem become prohibitively costly due to the curse of dimensionality. Instead, this work proposes to use a stopping rule that depends on the dynamics of a low-dimensional Markovian projection of the given basket of assets. It is shown that the ability to approximate the original value function by a lower-dimensional approximation is a feature of the dynamics of the system and is unaffected by the path-dependent nature of the American basket option. Assuming that we know the density of the forward process and using the Laplace approximation, we first efficiently evaluate the diffusion coefficient corresponding to the low-dimensional Markovian projection of the basket. Then, we approximate the optimal early-exercise boundary of the option by solving a Hamilton-Jacobi-Bellman partial differential equation in the projected, low-dimensional space. The resulting near-optimal early-exercise boundary is used to produce an exercise strategy for the high-dimensional option, thereby providing a lower bound for the price of the American basket option. A corresponding upper bound is also provided. These bounds allow one to assess the accuracy of the proposed pricing method. Indeed, our approximate early-exercise strategy provides a straightforward lower bound for the American basket option price. Following a duality argument due to Rogers, we derive a corresponding upper bound by solving only the low-dimensional optimal control problem. Numerically, we show the feasibility of the method using baskets with dimensions up to fifty. In these examples, the resulting option price relative errors are only of the order of a few percent.

  11. Banal and Implied Forms of Violence in Levinas' Phenomenological Ethics

    Directory of Open Access Journals (Sweden)

    Fleurdeliz R. Altez

    2007-06-01

    Full Text Available Despite his final call for peace and "the wisdom of love", Emmanuel Levinas inevitably spoke of violence, and perhaps spoke even more of it. His call for infinite responsibility is actually crystallized through discourse on violence and suffering. We may say that these themes served as catalysts to the standing theory and, ethically, to any responsible Self. Violence, at least as a concept, poses itself as a significant presence to Levinas' plantilla while it reaches unexplored dimensions that await phenomenology and vital thought. As a part of his ethical proposal, understanding violence becomes important so that the Self may go beyond it while reaching the Other.

  12. Pattern overlap implies runaway growth in hierarchical tile systems

    Directory of Open Access Journals (Sweden)

    David Doty

    2015-11-01

    Full Text Available We show that in the hierarchical tile assembly model, if there is a producible assembly that overlaps a nontrivial translation of itself consistently (i.e., the pattern of tile types in the overlap region is identical in both translations), then arbitrarily large assemblies are producible. The significance of this result is that tile systems intended to controllably produce finite structures must avoid pattern repetition in their producible assemblies that would lead to such overlap. This answers an open question of Chen and Doty (SODA 2012), who showed that so-called "partial-order" systems producing a unique finite assembly and avoiding such overlaps must require time linear in the assembly diameter. An application of our main result is that any system producing a unique finite assembly is automatically guaranteed to avoid such overlaps, simplifying the hypothesis of Chen and Doty's main theorem.

  13. Spacelike penguin diagram effects in B implies PP decays

    International Nuclear Information System (INIS)

    Du, D.; Yang, M.; Zhang, D.

    1996-01-01

    The spacelike penguin diagram contributions to branching ratios and CP asymmetries in charmless decays of B to two pseudoscalar mesons are studied using the next-to-leading order low energy effective Hamiltonian. Both the gluonic penguin and the electroweak penguin diagrams are considered. We find that the effects are significant. copyright 1995 The American Physical Society

  14. Computer Simulations Imply Forelimb-Dominated Underwater Flight in Plesiosaurs.

    Directory of Open Access Journals (Sweden)

    Shiqiu Liu

    2015-12-01

    Full Text Available Plesiosaurians are an extinct group of highly derived Mesozoic marine reptiles with a global distribution that spans 135 million years from the Early Jurassic to the Late Cretaceous. During their long evolutionary history they maintained a unique body plan with two pairs of large wing-like flippers, but their locomotion has been a topic of debate for almost 200 years. Key areas of controversy have concerned the most efficient biologically possible limb stroke, e.g. whether it consisted of rowing, underwater flight, or modified underwater flight, and how the four limbs moved in relation to each other: did they move in or out of phase? Previous studies have investigated plesiosaur swimming using a variety of methods, including skeletal analysis, human swimmers, and robotics. We adopt a novel approach using a digital, three-dimensional, articulated, free-swimming plesiosaur in a simulated fluid. We generated a large number of simulations under various joint degrees of freedom to investigate how the locomotory repertoire changes under different parameters. Within the biologically possible range of limb motion, the simulated plesiosaur swims primarily with its forelimbs using an unmodified underwater flight stroke, essentially the same as turtles and penguins. In contrast, the hindlimbs provide relatively weak thrust in all simulations. We conclude that plesiosaurs were forelimb-dominated swimmers that used their hind limbs mainly for maneuverability and stability.

  15. Dark matter properties implied by gamma ray interstellar emission models

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong, E-mail: csaba.balazs@monash.edu, E-mail: tong.li@monash.edu [ARC Centre of Excellence for Particle Physics at the Tera-scale, School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2017-02-01

    We infer dark matter properties from gamma ray residuals extracted using eight different interstellar emission scenarios proposed by the Fermi-LAT Collaboration to explain the Galactic Center gamma ray excess. Adopting the most plausible simplified ansatz, we assume that the dark matter particle is a Majorana fermion interacting with standard fermions via a scalar mediator. To trivially respect flavor constraints, we only couple the mediator to third generation fermions. Using this theoretical hypothesis, and the Fermi residuals, we calculate Bayesian evidences, including Fermi-LAT exclusion limits from 15 dwarf spheroidal galaxies as well. Our evidence ratios single out one of the Fermi scenarios as most compatible with the simplified dark matter model. In this scenario the dark matter (mediator) mass is in the 25-200 (1-1000) GeV range and its annihilation is dominated by bottom quark final state. Our conclusion is that the properties of dark matter extracted from gamma ray data are highly sensitive to the modeling of the interstellar emission.

  16. Does Dual Ownership of Waste Imply a Regional Disposal Approach?

    International Nuclear Information System (INIS)

    Mele, I.

    2006-01-01

    The construction of the Nuclear Power Plant Krsko, being located in Slovenia near the Slovenian-Croatian border, was a joint investment by Slovenia and Croatia, two republics of the former Yugoslavia. The plant was completed in 1981 and the commercial operation started early in 1983. The obligations and rights of both investors during the construction and operation were specified in two bilateral contracts signed in 1974 and 1982. These contracts were fairly detailed on construction, operation and exploitation of the nuclear power plant (NPP), but they said very little about future nuclear liabilities. The electricity production was equally shared between the two countries and both parties participated in management of the NPP. In 1991, after Slovenia and Croatia became two independent countries, the agreement on the ownership and exploitation of the NPP Krsko was re-negotiated and a new contract signed in 2003. By the new contract the decommissioning and the disposal of spent fuel (SF) as well as low and intermediate level waste (LILW) is the responsibility of both parties, and the financial resources for covering these liabilities should be equally provided. Regardless of shared ownership of waste, the agreement opts for a single disposal solution for LILW as well as for SF, but the details are left open. More clear elaboration of these responsibilities is given in the programme of the decommissioning and disposal of radioactive waste from the NPP which was jointly prepared by the Slovenian and Croatian waste management organisations in 2004. The programme is clearly opting for only one repository for LILW and one repository for spent fuel, which can be located either in Slovenia or Croatia. Irrespective of the country where such a repository will be sited, dual ownership of waste opens up another dimension of such a solution: will such a repository be regarded as a national facility or as a regional or multinational facility? 
    Both options, national and regional/multinational, may have a strong influence on future agreements on waste disposal, but so far these aspects have not been addressed either in Slovenia or Croatia. The paper brings reflections and discussion on these aspects of waste management in Slovenia and reveals the current situation of the waste disposal project in the country. (authors)

  17. Functional vs. Structural Modularity: do they imply each other?

    Science.gov (United States)

    Toroczkai, Zoltan

    2009-03-01

While many deterministic and stochastic processes have been proposed to produce heterogeneous graphs mimicking real-world networks, only a handful of studies attempt to connect structure and dynamics with the function(s) performed by the network. In this talk I will present an approach built on the premise that structure, dynamics, and their observed heterogeneity, are implementations of various functions and their compositions. After a brief review of real-world networks where this connection can explicitly be made, I will focus on biological networks. Biological networks are known to possess functionally specialized modules, which perform tasks almost independently of each other. While proposals have been made for the evolutionary emergence of modularity, it is far from clear that adaptation on evolutionary timescales is the sole mechanism leading to functional specialization. We show that non-evolutionary learning can also lead to the formation of functionally specialized modules in a system exposed to multiple environmental constraints. A natural example suggesting that this is possible is the cerebral cortex, where there are clearly delineated functional areas in spite of the largely uniform anatomical construction of the cortical tissue. However, as numerous experiments show, when damaged, regions specialized for a certain function can be retrained to perform functions normally attributed to other regions. We use the paradigm of neural networks to represent a multitasking system, and use several non-evolutionary learning algorithms as mechanisms for phenotypic adaptation. We show that for a network learning to perform multiple tasks, the degree of independence between the tasks dictates the degree of functional specialization emerging in the network. To uncover the functional modules, we introduce a method of node knockouts that explicitly rates the contribution of each node to different tasks (differential robustness). Through a concrete example we also demonstrate the potential inability of purely topology-based clustering methods to detect functional modules. The robustness of these results suggests that similar mechanisms might be responsible for the emergence of functional specialization in other multitasking networks as well, including social networks.
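The node-knockout idea can be illustrated with a deliberately tiny, hand-wired network (the weights, tasks, and names below are hypothetical, not from the talk): lesion one hidden unit at a time and compare per-task error against the intact baseline to rate each unit's contribution to each task.

```python
import numpy as np

# Toy two-task network with hand-set identity weights: hidden unit 0
# carries task A (reproduce input 0), hidden unit 1 carries task B.
W1 = np.array([[1.0, 0.0], [0.0, 1.0]])
W2 = np.array([[1.0, 0.0], [0.0, 1.0]])

def outputs(x, knockout=None):
    h = np.maximum(W1 @ x, 0.0)      # ReLU hidden layer
    if knockout is not None:
        h[knockout] = 0.0            # lesion one hidden unit
    return W2 @ h

def task_errors(knockout=None):
    """Mean absolute error per task over a fixed batch of inputs."""
    xs = np.random.default_rng(0).random((100, 2))
    errs = [outputs(x, knockout) - x for x in xs]  # targets: y_A=x0, y_B=x1
    return np.mean(np.abs(errs), axis=0)

base = task_errors()
for unit in (0, 1):
    dA, dB = task_errors(knockout=unit) - base
    print(f"unit {unit}: task-A damage {dA:.2f}, task-B damage {dB:.2f}")
```

Knocking out unit 0 damages only task A, and unit 1 only task B: the knockout scores reveal the functional modules directly, even though a purely topological view of this network shows no modular structure.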

  18. INTRODUCTION Family planning implies the ability of individuals ...

    African Journals Online (AJOL)

about 25% of women who have abortion in Nigeria ... Keywords: Family planning, awareness, pregnant women, Nigeria. Annals of Ibadan ...

  19. Implied...or implode? The Simpsons' carnivalesque Treehouse of Horror

    OpenAIRE

    Jones, Steve

    2010-01-01

Since 1990, The Simpsons’ annual “Treehouse of Horror” episodes have constituted a production sub-context within the series, having their own conventions and historical trajectory. These specials incorporate horror plots and devices, as well as general references to science fiction, into the series’ base in situation comedy. The Halloween specials disrupt the series’ usual family-oriented sitcom structure, dissolving the ideological balances that stabilise that society. By depicting the Family...

  20. Quantum communication complexity advantage implies violation of a Bell inequality

    NARCIS (Netherlands)

    H. Buhrman (Harry); Ł. Czekaj (Łukasz); A. Grudka (Andrzej); M. Horodecki (Michał); P. Horodecki (Paweł); M. Markiewicz (Marcin); F. Speelman (Florian); S. Strelchuk (Sergii)

    2015-01-01

    We obtain a general connection between a quantum advantage in communication complexity and non-locality. We show that given any protocol offering a (sufficiently large) quantum advantage in communication complexity, there exists a way of obtaining measurement statistics which violate

  1. Generic antibiotic industries: Challenges and implied strategies with regulatory perspectives

    Directory of Open Access Journals (Sweden)

    M Venkatesh

    2011-01-01

    Ever since the discovery of antibiotics, the quality of human life greatly improved in the 20th century. The discovery of penicillin transformed the medicine industry and initiated a search for a better antibiotic every time, resulting in several synthetic and semi-synthetic antibiotics. Beginning with the 1937 sulfa drug tragedy, the drug regulations had a parallel growth along with the antibiotics and the antibiotic-based generic Pharma industries. This review article is focused on the scenario depicting current global Pharma industries based on generic antibiotics. Several regulatory aspects involved with these industries have been discussed along with the complexity of the market, issues that could affect their growth, their struggle for quality, and their compliance with the tightened regulations. With the skyrocketing commercialization of antibiotics through generics and the leveraging technologic renaissance, generic industries are involved in providing maximum safer benefits for the welfare of the people, highlighting its need today.

  2. Does implied community size predict likeability of a similar stranger?

    Science.gov (United States)

    Launay, Jacques; Dunbar, Robin I M

    2015-01-01

    Homophily, the tendency for people to cluster with similar others, has primarily been studied in terms of proximal, psychological causes, such as a tendency to have positive associations with people who share traits with us. Here we investigate whether homophily could be correlated with perceived group membership, given that sharing traits with other people might signify membership of a specific community. In order to investigate this, we tested whether the amount of homophily that occurs between strangers is dependent on the number of people they believe share the common trait (i.e. the size of group that the trait identifies). In two experiments, we show that more exclusive (smaller) groups evoke more positive ratings of the likeability of a stranger. When groups appear to be too inclusive (i.e. large) homophily no longer occurs, suggesting that it is not only positive associations with a trait that cause homophily, but a sense of the exclusiveness of a group is also important. These results suggest that group membership based on a variety of traits can encourage cohesion between people from diverse backgrounds, and may be a useful tool in overcoming differences between groups.

  3. Experimental signature of scaling violation implied by field theories

    International Nuclear Information System (INIS)

    Tung, W.

    1975-01-01

    Renormalizable field theories are found to predict a surprisingly specific pattern of scaling violation in deep inelastic scattering. Comparison with experiments is discussed. The feasibility of distinguishing asymptotically free field theories from conventional field theories is evaluated

  4. Modeling of CPDOs - Identifying Optimal and Implied Leverage

    DEFF Research Database (Denmark)

    Dorn, Jochen

    When the subprime crisis started emerging, collateralized products based on Credit Default Swap (CDS) exposures combined with security features seemed to be a more rational alternative to classic asset backed securities. Constant Proportion Collateralized Debt Obligations (CPDOs) are a mixture...... of Collateralized Debt Obligations (CDOs) and CPPIs with inverse mechanism. This new asset targets to meet the investors' demand for credit derivatives with security enhancements, but quantitative approaches for pricing except for simulation algorithms do not exist yet up to the author's knowledge. CPDOs became...... risks and provides a dynamic closed-form pricing formula....

  5. Implied Dynamics Biases the Visual Perception of Velocity

    Science.gov (United States)

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform. PMID:24667578

  6. Implied dynamics biases the visual perception of velocity.

    Directory of Open Access Journals (Sweden)

    Barbara La Scaleia

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  7. Implied dynamics biases the visual perception of velocity.

    Science.gov (United States)

    La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo

    2014-01-01

    We expand the anecdotic report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiment 2 and 3) or rectilinear (Experiment 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold the gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiment 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.

  8. Maslow's Implied Matrix: A Clarification of the Need Hierarchy Theory.

    Science.gov (United States)

    Marsh, Edward

    1978-01-01

    Maslow's need hierarchy theory is restated by means of a matrix arrangement of the constructs within the theory. After consideration of the consequences of this restatement, some significant research is discussed and directions for future research suggested. (Author)

  9. Generic antibiotic industries: Challenges and implied strategies with regulatory perspectives

    Science.gov (United States)

    Venkatesh, M.; Bairavi, V. G.; Sasikumar, K. C.

    2011-01-01

    Ever since the discovery of antibiotics, the quality of human life greatly improved in the 20th century. The discovery of penicillin transformed the medicine industry and initiated a search for a better antibiotic every time, resulting in several synthetic and semi-synthetic antibiotics. Beginning with the 1937 sulfa drug tragedy, the drug regulations had a parallel growth along with the antibiotics and the antibiotic-based generic Pharma industries. This review article is focused on the scenario depicting current global Pharma industries based on generic antibiotics. Several regulatory aspects involved with these industries have been discussed along with the complexity of the market, issues that could affect their growth, their struggle for quality, and their compliance with the tightened regulations. With the skyrocketing commercialization of antibiotics through generics and the leveraging technologic renaissance, generic industries are involved in providing maximum safer benefits for the welfare of the people, highlighting its need today. PMID:21430959

  10. The implied producer investigating an emergent typology in participatory culture

    DEFF Research Database (Denmark)

    Søndergaard, Morten

    2012-01-01

    Biennales, that we are not 'just' dealing with a 'new' genre or style within the art category; on the other hand we are not dealing with a pure commercial culture either (the abstract notion of 'the user' has its limits); what is becoming evident is that the 'implicit' roles of the participatory 'actors...

  11. Implied Stopping Rules for American Basket Options from Markovian Projection

    KAUST Repository

    Bayer, Christian

    2017-05-01

    This work addresses the problem of pricing American basket options in a multivariate setting, which includes among others, the Bachelier and the Black-Scholes models. In high dimensions, nonlinear partial differential equation methods for solving the problem become prohibitively costly due to the curse of dimensionality. Instead, this work proposes to use a stopping rule that depends on the dynamics of a low-dimensional Markovian projection of the given basket of assets. It is shown that the ability to approximate the original value function by a lower-dimensional approximation is a feature of the dynamics of the system and is unaffected by the path-dependent nature of the American basket option. Assuming that we know the density of the forward process and using the Laplace approximation, we first efficiently evaluate the diffusion coefficient corresponding to the low-dimensional Markovian projection of the basket. Then, we approximate the optimal early-exercise boundary of the option by solving a Hamilton-Jacobi-Bellman partial differential equation in the projected, low-dimensional space. The resulting near-optimal early-exercise boundary is used to produce an exercise strategy for the high-dimensional option, thereby providing a lower bound for the price of the American basket option. A corresponding upper bound is also provided. These bounds allow to assess the accuracy of the proposed pricing method. Indeed, our approximate early-exercise strategy provides a straightforward lower bound for the American basket option price. Following a duality argument due to Rogers, we derive a corresponding upper bound solving only the low-dimensional optimal control problem. Numerically, we show the feasibility of the method using baskets with dimensions up to fifty. In these examples, the resulting option price relative errors are only of the order of few percent.

  12. On cosmic censorship: do compact Cauchy horizons imply symmetry?

    International Nuclear Information System (INIS)

    Isenberg, J.; Moncrief, V.

    1983-01-01

    The basic idea of Cosmic Censorship is that, in a physically reasonable spacetime, an observer should not encounter any naked singularities. The authors discuss some new results which provide strong support for one of the statements of Cosmic Censorship: Strong Cosmic Censorship says that the maximal spacetime development of a set of Cauchy data on a spacelike initial surface (evolved via the vacuum Einstein equations, the Einstein-Maxwell equations, or some other 'reasonable' set) will not be extendible across a Cauchy horizon. (Auth.)

  13. Hubble's Law Implies Benford's Law for Distances to Galaxies ...

    Indian Academy of Sciences (India)

    in both time and space, predicts that conformity to Benford's law will improve as more data on distances to galaxies becomes available. Conversely, with the logical derivation of this law presented here, the recent empirical observations may be viewed as independent evidence of the validity of Hubble's law.
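As a quick numeric sanity check (not the paper's derivation), data spread log-uniformly over several decades conforms closely to Benford's first-digit law, P(d) = log10(1 + 1/d); the "galaxy distances" below are an illustrative stand-in sample, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Log-uniform sample spanning six decades: an illustrative stand-in
# for a scale-spanning quantity such as distances to galaxies.
d = 10 ** rng.uniform(0.0, 6.0, 100_000)
first = np.array([int(str(x)[0]) for x in d])          # leading digit of each value
freq = np.array([np.mean(first == k) for k in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))        # Benford's law
print(np.max(np.abs(freq - benford)))  # deviation is sampling noise only
```

With 100,000 samples the empirical digit frequencies match Benford's prediction to within about a percent; for narrow-range data (a single decade) the match degrades, consistent with the paper's point that conformity improves as the data span grows.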

  14. Implied motion language can influence visual spatial memory

    NARCIS (Netherlands)

    Vinson, David; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it

  15. Drunk driving, implied consent, and self-incrimination.

    Science.gov (United States)

    Ogundipe, Kehinde A; Weiss, Kenneth J

    2009-01-01

    The effects of drunk driving are a significant risk to public health and safety. Accordingly, the federal government and the states have enacted laws that permit law enforcement to identify offenders and to apply various levels of sanctions. There is no constitutional requirement that evidence of drunkenness be permitted in defense of criminal behavior. In practice, citizens who undertake to operate motor vehicles under the influence of alcohol are considered reckless per se and have no right to obstruct law enforcement in determining their condition. Indeed, refusal of roadside sobriety tests, including the Breathalyzer, may be considered a separate offense. The issuing of Miranda-type warnings by police officers has been ruled on recently in New Jersey. In a superior court appellate decision, State v. Spell, the court outlined the necessary procedures, concluding that, although motorists have no right to refuse testing, police officers have an obligation to issue sufficient warnings before the motorist decides how to proceed. In the Spell matter, the defendant incriminated himself by refusing the testing, even though he was acquitted on the drunk-driving charge. The authors discuss the role of expert testimony in these matters.

  16. Finite Correlation Length Implies Efficient Preparation of Quantum Thermal States

    Science.gov (United States)

    Brandão, Fernando G. S. L.; Kastoryano, Michael J.

    2018-05-01

    Preparing quantum thermal states on a quantum computer is in general a difficult task. We provide a procedure to prepare a thermal state on a quantum computer with a logarithmic depth circuit of local quantum channels assuming that the thermal state correlations satisfy the following two properties: (i) the correlations between two regions are exponentially decaying in the distance between the regions, and (ii) the thermal state is an approximate Markov state for shielded regions. We require both properties to hold for the thermal state of the Hamiltonian on any induced subgraph of the original lattice. Assumption (ii) is satisfied for all commuting Gibbs states, while assumption (i) is satisfied for every model above a critical temperature. Both assumptions are satisfied in one spatial dimension. Moreover, both assumptions are expected to hold above the thermal phase transition for models without any topological order at finite temperature. As a building block, we show that exponential decay of correlation (for thermal states of Hamiltonians on all induced subgraphs) is sufficient to efficiently estimate the expectation value of a local observable. Our proof uses quantum belief propagation, a recent strengthening of strong sub-additivity, and naturally breaks down for states with topological order.

  17. When “I cut, you choose” method implies intransitivity

    Science.gov (United States)

    Makowski, Marcin; Piotrowski, Edward W.

    2014-12-01

    There is a common belief that humans and many animals follow transitive inference (choosing A over C on the basis of knowing that A is better than B and B is better than C). Transitivity seems to be the essence of rational choice. We present a theoretical model of a repeated game in which the players make a choice between three goods (e.g. food). The rules of the game refer to the simple procedure of fair division among two players, known as the “I cut, you choose” mechanism which has been widely discussed in the literature. In this game one of the players has to make intransitive choices in order to achieve the optimal result (for him/her and his/her co-player). The point is that an intransitive choice can be rational. Previously, an increase in the significance of intransitive strategies was achieved by referring to models of quantum games. We show that relevant intransitive strategies also appear in the classic description of decision algorithms.
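A self-contained illustration that pairwise "better than" need not be transitive is the classic intransitive-dice example (a standard textbook construction, not the paper's "I cut, you choose" game): each die beats the next with probability 5/9, yet the "beats" relation forms a cycle.

```python
from fractions import Fraction
from itertools import product

# Three intransitive dice (standard example, not the paper's model).
A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)

def p_beats(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(a > b for a, b in product(x, y))
    return Fraction(wins, len(x) * len(y))

for first, second in ((A, B), (B, C), (C, A)):
    print(p_beats(first, second))  # 5/9 each: A beats B beats C beats A
```

Whichever die an opponent picks first, the second player can pick the die that beats it; preferring to choose second is rational precisely because the preference order is cyclic, echoing the paper's claim that intransitive choice can be optimal.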

  18. Implied Volatility and Rebalancing Timing: Market cycles and the relationship between implied volatility indices and stock index returns.

    OpenAIRE

    Holst, Niklas; Rønning, Harald

    2017-01-01

    Master's thesis in Finance This paper studies market timing based upon the level of the VIX and compares VIX based portfolio rotations with modern rebalancing practices. The thesis analyses the outcomes of the different rebalancing and trading strategies, and compare them through different performance measures. The study finds that on average, rebalancing strategies based on the level of the VIX does not have significant positive returns compared to the standard dynamic rebalancing scheme ...

  19. DESYNC: Self-Organizing Desynchronization and TDMA on Wireless Sensor Networks

    OpenAIRE

    Degesys, Julius; Rose, Ian; Patel, Ankit; Nagpal, Radhika

    2006-01-01

    Desynchronization is a novel primitive for sensor networks: it implies that nodes perfectly interleave periodic events to occur in a round-robin schedule. This primitive can be used to evenly distribute sampling burden in a group of nodes, schedule sleep cycles, or organize a collision-free TDMA schedule for transmitting wireless messages. Here we present Desync, a biologically-inspired self-maintaining algorithm for desynchronization in a single-hop network. We present (1) theoretical result...
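The core DESYNC idea, each node nudging its firing phase toward the midpoint of its two phase neighbors until the round-robin schedule is evenly spaced, can be sketched as a simplified synchronous-round simulation (this is an illustrative sketch, not the event-driven firefly-style algorithm of the paper; the parameters are made up):

```python
import numpy as np

def desync_round(phases, alpha=0.5):
    """One synchronous round: every node jumps a fraction alpha toward
    the midpoint of its two phase neighbors on the unit circle."""
    p = np.sort(phases)
    prev_ = np.roll(p, 1);  prev_[0] -= 1.0    # wrap-around previous neighbor
    next_ = np.roll(p, -1); next_[-1] += 1.0   # wrap-around next neighbor
    mid = (prev_ + next_) / 2.0
    return np.mod((1 - alpha) * p + alpha * mid, 1.0)

rng = np.random.default_rng(0)
phases = rng.random(5)                  # 5 nodes, random initial phases
for _ in range(300):
    phases = desync_round(phases)

p = np.sort(phases)
gaps = np.diff(np.append(p, p[0] + 1.0))
print(np.round(gaps, 3))  # all gaps converge to 1/5: a collision-free TDMA slot each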

  20. Churn-Resilient Replication Strategy for Peer-to-Peer Distributed Hash-Tables

    Science.gov (United States)

    Legtchenko, Sergey; Monnet, Sébastien; Sens, Pierre; Muller, Gilles

    DHT-based P2P systems provide a fault-tolerant and scalable mean to store data blocks in a fully distributed way. Unfortunately, recent studies have shown that if connection/disconnection frequency is too high, data blocks may be lost. This is true for most current DHT-based system's implementations. To avoid this problem, it is necessary to build really efficient replication and maintenance mechanisms. In this paper, we study the effect of churn on an existing DHT-based P2P system such as DHash or PAST. We then propose solutions to enhance churn tolerance and evaluate them through discrete event simulations.
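The replication problem can be made concrete with a toy consistent-hashing ring in the style of DHash/PAST: each block is stored on the k nodes that succeed its key on the ring, and when churn removes a node the block must be re-replicated onto the next successor. This is an illustrative sketch (node and block names are made up), not the systems' actual maintenance protocol.

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    # 32-bit position on the ring (illustrative; real DHTs use 160-bit IDs)
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")

class Ring:
    def __init__(self, nodes, k=3):
        self.k = k                                  # replication factor
        self.ring = sorted(h(n) for n in nodes)
        self.names = {h(n): n for n in nodes}

    def replicas(self, block: str):
        """The k nodes clockwise from the block's hash hold its copies."""
        i = bisect_right(self.ring, h(block))
        return [self.names[self.ring[(i + j) % len(self.ring)]]
                for j in range(min(self.k, len(self.ring)))]

    def leave(self, node):
        # Churn: a departing node's blocks fall to the next successor,
        # which is why a maintenance protocol must re-replicate them.
        self.ring.remove(h(node)); del self.names[h(node)]

ring = Ring([f"node{i}" for i in range(8)], k=3)
before = ring.replicas("block42")
ring.leave(before[0])                   # the primary replica goes offline
after = ring.replicas("block42")
assert len(set(after)) == 3 and before[0] not in after
```

If connections and disconnections outpace this re-replication step, all k copies of a block can vanish before repair completes, which is the data-loss scenario the paper studies.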

  1. Optimizing Hash-Array Mapped Tries for Fast and Lean Immutable JVM Collections

    NARCIS (Netherlands)

    M.J. Steindorfer (Michael); J.J. Vinju (Jurgen)

    2015-01-01

    The data structures underpinning collection APIs (e.g. lists, sets, maps) in the standard libraries of programming languages are used intensively in many applications. The standard libraries of recent Java Virtual Machine languages, such as Clojure or Scala, contain scalable and
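The central trick in a hash-array mapped trie is the bitmap-plus-popcount node layout: a 32-bit bitmap records which of the 32 possible children exist, and counting the set bits below a child's position gives its slot in a dense array, so absent children cost no memory. The following is a minimal single-node sketch of that indexing (not the paper's full optimized, immutable design):

```python
class Node:
    """One HAMT node: a 32-bit bitmap plus a dense child array."""
    __slots__ = ("bitmap", "children")

    def __init__(self):
        self.bitmap, self.children = 0, []

    def insert(self, frag, value):
        """frag is a 5-bit hash fragment (0..31) selecting a child slot."""
        bit = 1 << frag
        pos = bin(self.bitmap & (bit - 1)).count("1")  # popcount below frag
        if self.bitmap & bit:
            self.children[pos] = value                 # slot already present
        else:
            self.bitmap |= bit
            self.children.insert(pos, value)           # keep array dense

    def get(self, frag):
        bit = 1 << frag
        if not (self.bitmap & bit):
            raise KeyError(frag)
        return self.children[bin(self.bitmap & (bit - 1)).count("1")]

node = Node()
for frag, val in ((17, "a"), (3, "b"), (29, "c")):
    node.insert(frag, val)
assert node.get(3) == "b" and len(node.children) == 3  # no empty slots stored
```

A full trie consumes the hash five bits at a time, descending one such node per level; the dense arrays are what make the structure both fast and lean compared to a 32-way array per node.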

  2. Using pseudo-random number generator for making iterative algorithms of hashing data

    International Nuclear Information System (INIS)

    Ivanov, M.A.; Vasil'ev, N.P.; Kozyrskij, B.L.

    2014-01-01

    The method of stochastic data transformation made for usage in cryptographic methods of information protection has been analyzed. The authors prove the usage of cryptographically strong pseudo-random number generators as a basis for Sponge construction. This means that the analysis of the quality of the known methods and tools for assessing the statistical security of pseudo-random number generators can be used effectively [ru
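The Sponge construction the authors build on alternates an absorb phase (XOR each message block into the "rate" portion of the state, then apply a mixing permutation) with a squeeze phase that reads output blocks. Below is a toy sketch with SHA-256 standing in for the mixing step and simplified padding; the paper's point is that this mixing step can instead be built from a cryptographically strong pseudo-random number generator, which is not reproduced here.

```python
import hashlib

RATE, CAP = 16, 16          # bytes of rate and capacity (toy parameters)

def f(state: bytes) -> bytes:
    # Stand-in for the sponge permutation over the RATE+CAP-byte state.
    return hashlib.sha256(state).digest()      # 32 bytes = RATE + CAP

def sponge_hash(msg: bytes, out_len: int = 32) -> bytes:
    # Pad with 0x01 then zeros to a multiple of RATE (simplified padding).
    msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)
    state = bytes(RATE + CAP)
    for i in range(0, len(msg), RATE):                     # absorb phase
        block = msg[i:i + RATE]
        state = bytes(a ^ b for a, b in zip(state[:RATE], block)) + state[RATE:]
        state = f(state)
    out = b""
    while len(out) < out_len:                              # squeeze phase
        out += state[:RATE]
        state = f(state)
    return out[:out_len]
```

Because the capacity bytes are never output directly, the security of the construction rests on the quality of `f`, which is exactly why the statistical strength of the underlying generator matters.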

  3. A Novel Secure Image Hashing Based on Reversible Watermarking for Forensic Analysis

    OpenAIRE

    Doyoddorj, Munkhbaatar; Rhee, Kyung-Hyune

    2011-01-01

    Part 2: Workshop; International audience; Nowadays, digital images and videos have become increasingly popular over the Internet and bring great social impact to a wide audience. In the meanwhile, technology advancement allows people to easily alter the content of digital multimedia and brings serious concern on the trustworthiness of online multimedia information. In this paper, we propose a new framework for multimedia forensics by using compact side information based on reversible watermar...

  4. Security enhanced multi-factor biometric authentication scheme using bio-hash function.

    Directory of Open Access Journals (Sweden)

    Younsung Choi

    With the rapid development of personal information and wireless communication technology, user authentication schemes have been crucial to ensure that wireless communications are secure. As such, various authentication schemes with multi-factor authentication have been proposed to improve the security of electronic communications. Multi-factor authentication involves the use of passwords, smart cards, and various biometrics to provide users with the utmost privacy and data protection. Cao and Ge analyzed various authentication schemes and found that Younghwa An's scheme was susceptible to a replay attack where an adversary masquerades as a legal server and a user masquerading attack where user anonymity is not provided, allowing an adversary to execute a password change process by intercepting the user's ID during login. Cao and Ge improved upon Younghwa An's scheme, but various security problems remained. This study demonstrates that Cao and Ge's scheme is susceptible to a biometric recognition error, slow wrong password detection, off-line password attack, user impersonation attack, ID guessing attack, a DoS attack, and that their scheme cannot provide session key agreement. Then, to address all weaknesses identified in Cao and Ge's scheme, this study proposes a security enhanced multi-factor biometric authentication scheme and provides a security analysis and formal analysis using Burrows-Abadi-Needham logic. Finally, the efficiency analysis reveals that the proposed scheme can protect against several possible types of attacks with only a slightly high computational cost.
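The bio-hash ingredient in such schemes is commonly a BioHashing-style transform: project the biometric feature vector onto random directions derived from a user-held token and binarize the result, so the code is revocable simply by changing the token. The sketch below is a generic illustration of that idea (the feature vector and seeds are made up; this is not the specific function of the proposed scheme):

```python
import numpy as np

def bio_hash(features, user_seed, n_bits=32):
    """Toy BioHashing: token-seeded random projection + median threshold."""
    rng = np.random.default_rng(user_seed)          # token-bound randomness
    proj = rng.standard_normal((n_bits, len(features))) @ features
    return (proj > np.median(proj)).astype(int)     # binary, revocable code

features = np.array([0.8, -1.2, 0.3, 2.1, -0.5, 1.4, 0.0, -0.7])
code1 = bio_hash(features, user_seed=42)
code2 = bio_hash(features, user_seed=42)   # same token: same code
other = bio_hash(features, user_seed=43)   # revoked token: fresh code
assert (code1 == code2).all() and (code1 != other).any()
```

Because verification compares codes rather than raw biometrics, a compromised code can be revoked by issuing a new token, while a stolen template alone reveals little about the underlying fingerprint features.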

  5. Secured Hash Based Burst Header Authentication Design for Optical Burst Switched Networks

    Science.gov (United States)

    Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.

    2017-12-01

    The optical burst switching (OBS) is a promising technology that could meet the fast-growing network demand, including the bandwidth requirements of bandwidth-intensive applications. OBS proves to be a satisfactory technology to tackle huge bandwidth constraints, but suffers from security vulnerabilities. The objective of the proposed work is to design a faster and more efficient burst header authentication algorithm for core nodes. This work has two key features, viz., header encryption and authentication. Since the burst header is an important component of an optical burst switched network, it has to be encrypted; otherwise it is prone to attack. The proposed MD5&RC4-4S based burst header authentication algorithm runs 20.75 ns faster than the conventional algorithms. The modification suggested in the proposed RC4-4S algorithm gives better security and solves the correlation problems between the publicly known outputs during the key generation phase. The modified MD5 recommended in this work provides a 7.81 % better avalanche effect than the conventional algorithm. The device utilization result also shows the suitability of the proposed algorithm for header authentication in real-time applications.
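For orientation, the two ingredients can be sketched with the textbook primitives, plain RC4 for header encryption and HMAC-MD5 for the authentication tag. The paper's RC4-4S and modified MD5 variants change the key scheduling and avalanche behaviour, which are not reproduced here, and the key and header fields below are invented for illustration.

```python
import hashlib
import hmac

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 stream cipher (encryption and decryption are identical)."""
    S = list(range(256)); j = 0
    for i in range(256):                       # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"core-node-shared-key"                  # illustrative shared secret
header = b"burst-id=7;len=1500;offset=20us"    # illustrative burst header
cipher = rc4(key, header)                      # encrypt the header
tag = hmac.new(key, cipher, hashlib.md5).hexdigest()  # authentication tag
assert rc4(key, cipher) == header              # RC4 decrypts with the same key
```

A core node recomputes the tag over the received ciphertext and accepts the burst header only if it matches, so a forged or tampered header is rejected before the data burst arrives.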

  6. A novel hash based least significant bit (2-3-3) image steganography in spatial domain

    OpenAIRE

    Manjula, G. R.; Danti, Ajit

    2015-01-01

    This paper presents a novel 2-3-3 LSB insertion method. Image steganography takes advantage of the limitations of the human eye. It uses a color image as the cover medium for embedding a secret message. An important quality of a steganographic system is to be less distortive while increasing the size of the secret message. In this paper a method is proposed to embed a color secret image into a color cover image. A 2-3-3 LSB insertion method has been used for the image steganography. Experimental results show ...
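The 2-3-3 split distributes one secret byte across a pixel's three channels: 2 bits in the red LSBs and 3 bits each in green and blue, exploiting the eye's lower sensitivity to small intensity changes. A minimal sketch of the insertion and extraction on a single pixel (the paper embeds a full color image, which is elided here):

```python
def embed_233(pixel, secret_byte):
    """Hide one byte in an RGB pixel: 2 bits in R, 3 in G, 3 in B LSBs."""
    r, g, b = pixel
    return ((r & ~0b11)  | (secret_byte >> 6),
            (g & ~0b111) | ((secret_byte >> 3) & 0b111),
            (b & ~0b111) | (secret_byte & 0b111))

def extract_233(pixel):
    """Reassemble the hidden byte from the channel LSBs."""
    r, g, b = pixel
    return ((r & 0b11) << 6) | ((g & 0b111) << 3) | (b & 0b111)

stego = embed_233((120, 200, 33), ord("A"))
assert extract_233(stego) == ord("A")
# Each channel changes by at most 3 (R) or 7 (G, B) intensity levels.
```

One cover pixel thus carries a full secret byte, three times the capacity of classic 1-bit-per-channel LSB embedding, while keeping per-channel distortion bounded.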

  7. Key rate of quantum key distribution with hashed two-way classical communication

    International Nuclear Information System (INIS)

    Watanabe, Shun; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Kawano, Yasuhito

    2007-01-01

    We propose an information reconciliation protocol that uses two-way classical communication. The key rates of quantum key distribution (QKD) protocols that use our protocol are higher than those using previously known protocols for a wide range of error rates for the Bennett-Brassard 1984 and six-state protocols. We also clarify the relation between the proposed and known QKD protocols, and the relation between the proposed protocol and entanglement distillation protocols

  8. Security enhanced multi-factor biometric authentication scheme using bio-hash function

    Science.gov (United States)

    Lee, Youngsook; Moon, Jongho

    2017-01-01

    With the rapid development of personal information and wireless communication technology, user authentication schemes have been crucial to ensure that wireless communications are secure. As such, various authentication schemes with multi-factor authentication have been proposed to improve the security of electronic communications. Multi-factor authentication involves the use of passwords, smart cards, and various biometrics to provide users with the utmost privacy and data protection. Cao and Ge analyzed various authentication schemes and found that Younghwa An’s scheme was susceptible to a replay attack where an adversary masquerades as a legal server and a user masquerading attack where user anonymity is not provided, allowing an adversary to execute a password change process by intercepting the user’s ID during login. Cao and Ge improved upon Younghwa An’s scheme, but various security problems remained. This study demonstrates that Cao and Ge’s scheme is susceptible to a biometric recognition error, slow wrong password detection, off-line password attack, user impersonation attack, ID guessing attack, a DoS attack, and that their scheme cannot provide session key agreement. Then, to address all weaknesses identified in Cao and Ge’s scheme, this study proposes a security enhanced multi-factor biometric authentication scheme and provides a security analysis and formal analysis using Burrows-Abadi-Needham logic. Finally, the efficiency analysis reveals that the proposed scheme can protect against several possible types of attacks with only a slightly high computational cost. PMID:28459867

  9. A STUDY ON MULTIFACTOR AUTHENTICATION MODEL USING FINGERPRINT HASH CODE, PASSWORD AND OTP

    OpenAIRE

    K. Krishna Prasad; P. S. Aithal

    2018-01-01

    By definition, authentication is using one or multiple mechanisms to show that you are who you claim to be. As soon as the identity of the human or machine is demonstrated, the human or machine is authorized to be granted some services. Modern research reveals that a fingerprint is not as secure as a password consisting of alphanumeric characters, numbers and special characters. Fingerprints are left at crime scenes, on materials or at the door, which is usually a class of late...
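Of the three factors, the OTP component is the most standardized; a sketch of the usual HOTP/TOTP construction (RFC 4226/6238) that such a model would pair with the password and fingerprint hash code factors (the shared secret below is the RFC test key, not anything from the study):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from wall-clock time."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: counter 0 with this key yields "755224".
assert hotp(b"12345678901234567890", 0) == "755224"
```

Because the code is valid for only one time step, an intercepted OTP is useless moments later, which is what makes it a strong complement to the two static factors.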

  10. Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation

    Science.gov (United States)

    2015-03-26

    using MapReduce was described by Moise et al. in their paper, "Indexing and searching 100 million images with map-reduce" [9]. They modified the... D. Moise, D. Shestakov, G. Gudmundsson, and L. Amsaleg, "Indexing and Searching 100M Images with Map-Reduce," in Proceedings of the 3rd ACM con...
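
    Distributed kernelized LSH is beyond a short example, but the core locality-sensitive idea this work builds on, similar vectors hashing to the same bucket, can be sketched with random-hyperplane signatures (an illustrative variant, not the kernelized scheme itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(v: np.ndarray, planes: np.ndarray) -> int:
    """Pack the sign of v against each random hyperplane into bucket bits."""
    bits = (planes @ v) > 0
    return int(sum(int(b) << i for i, b in enumerate(bits)))

dim, n_planes = 64, 16
planes = rng.standard_normal((n_planes, dim))

v = rng.standard_normal(dim)
near = v + 1e-6 * rng.standard_normal(dim)   # nearly identical vector
far = rng.standard_normal(dim)               # unrelated vector

sig_v, sig_near, sig_far = (lsh_signature(x, planes) for x in (v, near, far))
# near lands in the same bucket as v; far almost surely does not
```

    Image search then only compares candidates within the same bucket, which is what makes hashing (rather than exhaustive distance computation) pay off at the 100-million-image scale.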

  11. Practical Pseudo-collisions for Hash Functions ARIRANG-224/384

    DEFF Research Database (Denmark)

    Guo, Jian; Matusiewicz, Krystian; Knudsen, Lars Ramkilde

    2009-01-01

    In this paper we analyse the security of the SHA-3 candidate ARIRANG. We show that bitwise complementation of whole registers turns out to be very useful for constructing high-probability differential characteristics in the function. We use this approach to find near-collisions with Hamming weigh...

  12. Security enhanced multi-factor biometric authentication scheme using bio-hash function.

    Science.gov (United States)

    Choi, Younsung; Lee, Youngsook; Moon, Jongho; Won, Dongho

    2017-01-01

    With the rapid development of personal information and wireless communication technology, user authentication schemes have been crucial to ensure that wireless communications are secure. As such, various authentication schemes with multi-factor authentication have been proposed to improve the security of electronic communications. Multi-factor authentication involves the use of passwords, smart cards, and various biometrics to provide users with the utmost privacy and data protection. Cao and Ge analyzed various authentication schemes and found that Younghwa An's scheme was susceptible to a replay attack where an adversary masquerades as a legal server and a user masquerading attack where user anonymity is not provided, allowing an adversary to execute a password change process by intercepting the user's ID during login. Cao and Ge improved upon Younghwa An's scheme, but various security problems remained. This study demonstrates that Cao and Ge's scheme is susceptible to a biometric recognition error, slow wrong password detection, off-line password attack, user impersonation attack, ID guessing attack, a DoS attack, and that their scheme cannot provide session key agreement. Then, to address all weaknesses identified in Cao and Ge's scheme, this study proposes a security enhanced multi-factor biometric authentication scheme and provides a security analysis and formal analysis using Burrows-Abadi-Needham logic. Finally, the efficiency analysis reveals that the proposed scheme can protect against several possible types of attacks with only a slightly high computational cost.

  13. Effects of whey and molasses as silage additives on potato hash ...

    African Journals Online (AJOL)

    p2492989

    20 mg copper; 20 mg iodine; 189 g calcium; 50 g phosphorus; 6000 000 I.U. ..... of corn silage by pineapple by-products on ruminal degradability in beef cattle. .... between the herbage and the course of fermentation in ensiling green fodder.

  14. Cost analysis of hash collisions : will quantum computers make SHARCS obsolete?

    NARCIS (Netherlands)

    Bernstein, D.J.

    2009-01-01

    Current proposals for special-purpose factorization hardware will become obsolete if large quantum computers are built: the number-field sieve scales much more poorly than Shor's quantum algorithm for factorization. Will all special-purpose cryptanalytic hardware become obsolete in a post-quantum
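
    The collision costs at issue here rest on the birthday bound: a generic attack finds a collision in an n-bit hash after roughly 2^(n/2) evaluations, and quantum collision search can in principle lower the query exponent to about n/3, though this paper questions whether that is cost-effective in hardware. A toy demonstration on a 24-bit truncation of SHA-256, where about 2^12 attempts typically suffice rather than 2^24:

```python
import hashlib
from itertools import count

def truncated_hash(msg: bytes, bits: int = 24) -> int:
    """First `bits` bits of SHA-256, as an integer."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits: int = 24):
    """Hash distinct messages until two share a truncated digest."""
    seen = {}
    for i in count():
        m = str(i).encode()
        h = truncated_hash(m, bits)
        if h in seen:
            return seen[h], m   # two distinct messages, same truncated hash
        seen[h] = m

m1, m2 = find_collision()
assert m1 != m2 and truncated_hash(m1) == truncated_hash(m2)
```

    The memory-hungry dictionary here is itself part of the cost story: cycle-finding methods trade that memory away, which is exactly the kind of hardware trade-off SHARCS-style analyses price out.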

  15. Commentary: The Hash House Harriers and the winding path to materials discovery

    International Nuclear Information System (INIS)

    Canfield, Paul C.

    2015-01-01

    Materials science research can be both very demanding and extremely rewarding. In this Commentary, I give numerous exemplars, drawn from my own research on new electronic and magnetic materials, of the path followed to materials discovery. I also highlight the parallels between my research experiences and the pastime of running. I hope that my thoughts will help guide junior researchers along the often tortuous and exciting path to new materials and that I can teach them to be open minded and persistent about following new lines of discovery. “No-pain, no-gain” applies to many things in life, running and scientific research being just two examples, but I hope in the case of scientific research that I can convince you the gain normally outweighs the pain.

  16. Fast Implementation of Two Hash Algorithms on nVidia CUDA GPU

    OpenAIRE

    Lerchundi Osa, Gorka

    2009-01-01

    Project carried out in collaboration with the Norwegian University of Science and Technology, Department of Telematics. User needs increase as time passes. We started with computers the size of a room, where punched cards served the same function that machine code objects do today, and at present we are at a point where the number of processors within our graphics device unit is not enough for our requirements. A change in the evolution of computing is looming. We are in a t...

  17. A case of butane hash oil (marijuana wax)-induced psychosis.

    Science.gov (United States)

    Keller, Corey J; Chen, Evan C; Brodsky, Kimberly; Yoon, Jong H

    2016-01-01

    Marijuana is one of the most widely used controlled substances in the United States. Despite extensive research on smoked marijuana, little is known regarding the potential psychotropic effects of marijuana "wax," a high-potency form of marijuana that is gaining in popularity. The authors present a case of "Mr. B," a 34-year-old veteran who presented with profound psychosis in the setting of recent initiation of heavy, daily marijuana wax use. He exhibited incoherent speech and odd behaviors and appeared to be in a dream-like state with perseverating thoughts about his combat experience. His condition persisted despite treatment with risperidone 4 mg twice a day (BID), but improved dramatically on day 8 of hospitalization with the return of baseline mental function. Following discharge, Mr. B discontinued all marijuana use and did not exhibit the return of any psychotic symptoms. This study highlights the need for future research regarding the potential medical and psychiatric effects of new, high-potency forms of marijuana. Could cannabis have a dose-dependent impact on psychosis? What other potential psychiatric effects could emerge heretofore unseen in lower potency formulations? Given the recent legalization of marijuana, these questions merit timely exploration.

  18. An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media

    Science.gov (United States)

    2014-06-01

    similarity matching, Fourth Amendment, federal law, search and seizure, warrant search, consent search, border search. 15. NUMBER OF PAGES 107 16. PRICE ...containing a white powdery substance labeled flour [53]. 3.3.17 United States v Heckenkamp 482 F.3d 1142 (9th Circuit 2007) People have a reasonable

  19. Effects of a bacterial inoculant on potato hash silage quality, growth ...

    African Journals Online (AJOL)

    nkosi

    South African Journal of Animal Science 2010, 40 (Issue 5, Supplement 1) ... Growth performance of feedlot weaners cattle fed diet containing different levels of cold press .... iodine 0.025 g, cobalt 0.25 g; magnesium 150 g and selenium 0.3 g.

  20. An update on the side channel cryptanalysis of MACs based on cryptographic hash functions

    DEFF Research Database (Denmark)

    Gauravaram, Praveen; Okeya, Katsuyuki

    2007-01-01

    Okeya has established that HMAC/NMAC implementations based on only Matyas-Meyer-Oseas (MMO) PGV scheme and his two refined PGV schemes are secure against side channel DPA attacks when the block cipher in these constructions is secure against these attacks. The significant result of Okeya's analys...
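
    For reference, the HMAC construction targeted by these side-channel analyses follows directly from its definition. A minimal sketch of standard HMAC-SHA-256 (not the PGV-based compression-function variants discussed in the paper):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC(K, m) = H((K' ^ opad) || H((K' ^ ipad) || m)), 64-byte blocks."""
    block = 64                                   # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()       # long keys are hashed first
    key = key.ljust(block, b"\x00")              # then zero-padded to a block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Agrees with the standard library implementation:
assert hmac_sha256(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.sha256).digest()
```

    The two key-derived values hashed in the inner and outer calls are precisely the secrets a DPA attack on the underlying compression function would target.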

  1. InChIKey collision resistance: an experimental testing

    Directory of Open Access Journals (Sweden)

    Pletnev Igor

    2012-12-01

    Full Text Available InChIKey is a 27-character compacted (hashed) version of InChI which is intended for Internet and database searching/indexing and is based on an SHA-256 hash of the InChI character string. The first block of InChIKey encodes the molecular skeleton while the second block represents various kinds of isomerism (stereo, tautomeric, etc.). InChIKey is designed to be a nearly unique substitute for the parent InChI. However, a single InChIKey may occasionally map to two or more InChI strings (a collision). The appearance of a collision does not itself compromise the signature, as collision-free hashing is impossible; the only viable approach is to set and keep a reasonable level of collision resistance which is sufficient for typical applications. We tested, in computational experiments, how well the real-life InChIKey collision resistance corresponds to the theoretical estimates expected by design. For this purpose, we analyzed the statistical characteristics of InChIKey for datasets of variable size in comparison to the theoretical statistical frequencies. For the relatively short second block, exhaustive direct testing was performed. We computed and compared to theory the numbers of collisions for the stereoisomers of Spongistatin I (using the whole set of 67,108,864 isomers and its subsets). For the longer first block, we generated, using custom-made software, InChIKeys for more than 3 × 10^10 chemical structures. The statistical behavior of this block was tested by comparison of experimental and theoretical frequencies for the various four-letter sequences which may appear in the first block body. From the results of our computational experiments we conclude that the observed characteristics of InChIKey collision resistance are in good agreement with theoretical expectations.
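
    The theoretical estimates referred to here follow from the birthday approximation: hashing N items uniformly into M slots yields about N(N-1)/(2M) expected collisions. A sketch, where treating the 14-letter first block as carrying roughly 65 bits of the SHA-256 output is an assumption for illustration (the exact bit allocation is defined by the InChI software):

```python
from math import comb

def expected_collisions(n_items: int, n_slots: int) -> float:
    """Birthday approximation: E[collisions] ≈ C(n, 2) / M."""
    return comb(n_items, 2) / n_slots

# First block assumed to behave like a ~65-bit uniform hash:
e = expected_collisions(3 * 10**10, 2**65)   # ≈ 12 expected collisions
```

    The experimental test in the paper amounts to checking that observed collision counts and letter-sequence frequencies stay close to such uniform-hash predictions.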

  2. Fast and Lean Immutable Multi-Maps on the JVM based on Heterogeneous Hash-Array Mapped Tries

    NARCIS (Netherlands)

    M.J. Steindorfer (Michael); J.J. Vinju (Jurgen)

    2016-01-01

    An immutable multi-map is a many-to-many thread-friendly map data structure with expected fast insert and lookup operations. This data structure is used for applications processing graphs or many-to-many relations as applied in static analysis of object-oriented systems. When

  3. Fast and Lean Immutable Multi-Maps on the JVM based on Heterogeneous Hash-Array Mapped Tries

    OpenAIRE

    Steindorfer, Michael J.; Vinju, Jurgen J.

    2016-01-01

    An immutable multi-map is a many-to-many thread-friendly map data structure with expected fast insert and lookup operations. This data structure is used for applications processing graphs or many-to-many relations as applied in static analysis of object-oriented systems. When processing such big data sets the memory overhead of the data structure encoding itself is a memory usage bottleneck. Motivated by reuse and type-safety, libraries for Java, Scala and Clojure typically implem...
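
    A hash-array mapped trie consumes a key's hash a few bits at a time; each node keeps a bitmap plus a compressed child array indexed by popcount. A minimal single-level sketch of that indexing (the JVM libraries in this work add multi-level descent, heterogeneous nodes, and much more):

```python
class Node:
    """One HAMT level: a 32-bit bitmap plus a compressed child array."""
    def __init__(self):
        self.bitmap = 0
        self.children = []   # only populated slots, kept in chunk order

def slot(node: Node, chunk: int) -> int:
    """Array position of `chunk` = popcount of the bitmap bits below it."""
    return bin(node.bitmap & ((1 << chunk) - 1)).count("1")

def insert(node: Node, key, value):
    chunk = hash(key) & 0x1F                  # lowest 5 hash bits
    if node.bitmap & (1 << chunk):
        raise NotImplementedError("descent to the next level omitted here")
    node.children.insert(slot(node, chunk), (key, value))
    node.bitmap |= 1 << chunk

def lookup(node: Node, key):
    chunk = hash(key) & 0x1F
    if not node.bitmap & (1 << chunk):
        return None                           # bit clear: key is absent
    k, v = node.children[slot(node, chunk)]
    return v if k == key else None

root = Node()
for k, v in [(1, "a"), (2, "b"), (3, "c")]:
    insert(root, k, v)
```

    The bitmap-plus-popcount trick is what keeps nodes compact: absent children cost nothing, which is the memory property the paper's heterogeneous encodings push further.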

  4. Conception of one through the implied sense of Philosophy, Sociology, Psychoanalysis and Logotherapy

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Freitas

    2015-10-01

    Full Text Available The main goal of this article is to identify the conception of the one in the disciplines of philosophy, sociology, psychoanalysis and logotherapy, from the perspective of the subject's objective choice. In order to study these conceptions of the one and its choice, the views of distinct authors have been considered alongside the specialists from each discipline already mentioned. Their overlay contributes to a better understanding of current interests concerning the process of choice by each person, as well as the basic principles which guide one. As a result, the standpoint that leads one to choose an object is not necessarily identical to the objective importance that the object has in itself: an object of choice such as a vocation may be not only a value in itself, but may also be chosen for deeply personal reasons and interests.

  5. The effect of non-financial risk information on the evaluation of implied cost of capitals

    OpenAIRE

    Norio Kitagawa; Hyonok Kim; Masatoshi Goto

    2011-01-01

    The purpose of this paper is to examine the effect of voluntary disclosure of 'business risk' information (hereafter referred to as 'risk information'), which is a significant determinant of the information environment, on estimating the cost of capital. Recently, some studies indicate that the reliability of the cost of capital estimation differs according to the accounting standards and the information environment of the firm (e.g. Chen et al., 2004; Easton and Monahan, 2005). On the basis...

  6. Genome comparison implies the role of Wsm2 in membrane trafficking and protein degradation

    Directory of Open Access Journals (Sweden)

    Guorong Zhang

    2018-04-01

    Full Text Available Wheat streak mosaic virus (WSMV) causes streak mosaic disease in wheat (Triticum aestivum L.) and has been an important constraint limiting wheat production in many regions around the world. Wsm2 is the only resistance gene discovered in the wheat genome and has been located in a short genomic region of its chromosome 3B. However, the sequence nature and the biological function of Wsm2 remain unknown due to the difficulty of genetic manipulation in wheat. In this study, we tested WSMV infectivity among wheat and its two closely related grass species, rice (Oryza sativa) and Brachypodium distachyon. Based on the phenotypic results and previous genomic studies, we developed a novel bioinformatics pipeline for interpreting a potential biological function of Wsm2 and its ancestor locus in wheat. In the WSMV resistance tests, we found that rice has a WSMV resistance gene while Brachypodium does not, which allowed us to hypothesize the presence of a Wsm2 ortholog in rice. Our OrthoMCL analysis of protein-coding genes on wheat chromosome 3B and its syntenic chromosomes in rice and Brachypodium discovered 4,035 OrthoMCL groups as preliminary candidates of Wsm2 orthologs. Given that Wsm2 was likely duplicated through an intrachromosomal illegitimate recombination and that Wsm2 is dominant, we inferred that this new WSMV-resistance gene acquired an activation domain, lost an inhibition domain, or gained high expression compared to its ancestor locus. Through comparison, we identified that 67, 16, and 10 out of the 4,035 OrthoMCL orthologous groups contain a rice member that is 25% shorter or longer in length, or has 10-fold higher expression, respectively, than those from wheat and Brachypodium. Taken together, we predicted a total of 93 good candidates for a Wsm2 ancestor locus. None of these 93 candidates is tightly linked with Wsm2, indicative of the role of illegitimate recombination in the birth of Wsm2. Further sequence analysis suggests that the protein products of Wsm2 may combat WSMV disease through a molecular mechanism involving protein degradation and/or membrane trafficking. The 93 putative Wsm2 ancestor loci discovered in this study could serve as good candidates for future genetic isolation of the true Wsm2 locus.

  7. Fast computation of vanilla prices in time-changed models and implied volatilities using rational approximations

    NARCIS (Netherlands)

    Pistorius, M.; Stolte, J.

    2012-01-01

    We present a new numerical method to price vanilla options quickly in time-changed Brownian motion models. The method is based on rational function approximations of the Black-Scholes formula. Detailed numerical results are given for a number of widely used models. In particular, we use the
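
    The function being approximated by these rational fits is the Black-Scholes formula; for reference, the exact European call price it targets:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At the money, 1 year, 20% vol, zero rate: price ≈ 7.97 on S = 100
price = bs_call(100.0, 100.0, 1.0, 0.0, 0.2)
```

    A rational approximation replaces the transcendental calls (`log`, `erf`, `exp`) with a ratio of polynomials, which is what makes repeated pricing and implied-volatility inversion fast.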

  8. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    Science.gov (United States)

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
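
    The implication can be checked by simulation: generate data from a one-factor model and verify that the partial correlation given a third indicator stays between zero and the zero-order correlation. The loadings and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
factor = rng.standard_normal(n)                  # latent common factor
loadings = np.array([0.8, 0.7, 0.6])             # assumed factor loadings
X = np.outer(factor, loadings) + 0.5 * rng.standard_normal((n, 3))

R = np.corrcoef(X, rowvar=False)
r12, r13, r23 = R[0, 1], R[0, 2], R[1, 2]

# Partial correlation of X1 and X2 controlling for X3:
r12_3 = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))
# Under a unidimensional model, 0 < r12_3 < r12 (the paper's implication)
```

    The bootstrap test in the paper rejects unidimensionality exactly when an estimated partial correlation escapes this interval, i.e. exceeds the zero-order correlation or flips sign.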

  9. Disintegration of collagen fibrils by Glucono-δ-lactone: An implied lead for disintegration of fibrosis.

    Science.gov (United States)

    Jayamani, Jayaraman; Ravikanth Reddy, R; Madhan, Balaraman; Shanmugam, Ganesh

    2018-02-01

    Excess accumulation of collagen (fibrosis) on the extracellular matrix, in which collagen undergoes self-aggregation leading to fibrillar collagen, is the hallmark of a number of diseases such as keloids, hypertrophic scars, and systemic scleroderma. Direct inhibition or disintegration of collagen fibrils by small molecules offers a therapeutic approach to prevent or treat diseases related to fibrosis. Herein, the anti-fibrotic effect of Glucono-δ-lactone (GdL), a known acidifier, on the fibrillation of collagen and the disintegration of collagen fibrils was investigated. As collagen fibrillation is pH dependent, the pH-modulating property of GdL is attractive for inhibiting the self-association of collagen. Optical density and microscopy data indicate that GdL elicits concentration-dependent fibril inhibition and also disintegrates pre-formed collagen fibrils. Simultaneous pH analysis showed that the modulation (lowering) of pH by GdL is the primary cause of its anti-fibrotic activity. The intact triple-helical structure of collagen upon treatment with GdL suggests that collagen fibril disintegration can be achieved without affecting the native structure of collagen, which is essential for any anti-fibrotic agent. Saturation transfer difference (STD) NMR results reveal that GdL is in proximity to collagen. The present results thus suggest that GdL provides a lead to design novel anti-fibrotic agents for pathologies related to collagen deposition. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Observed forest sensitivity to climate implies large changes in 21st century North American forest growth.

    Science.gov (United States)

    Charney, Noah D; Babst, Flurin; Poulter, Benjamin; Record, Sydne; Trouet, Valerie M; Frank, David; Enquist, Brian J; Evans, Margaret E K

    2016-09-01

    Predicting long-term trends in forest growth requires accurate characterisation of how the relationship between forest productivity and climatic stress varies across climatic regimes. Using a network of over two million tree-ring observations spanning North America and a space-for-time substitution methodology, we forecast climate impacts on future forest growth. We explored differing scenarios of increased water-use efficiency (WUE) due to CO2-fertilisation, which we simulated as increased effective precipitation. In our forecasts: (1) climate change negatively impacted forest growth rates in the interior west and positively impacted forest growth along the western, southeastern and northeastern coasts; (2) shifting climate sensitivities offset positive effects of warming on high-latitude forests, leaving no evidence for continued 'boreal greening'; and (3) it took a 72% WUE enhancement to compensate for continentally averaged growth declines under RCP 8.5. Our results highlight the importance of locally adapted forest management strategies to handle regional differences in growth responses to climate change. © 2016 John Wiley & Sons Ltd/CNRS.

  11. Fossil diatoms imply common cometary origin of space-dust and the Polonnaruwa meteorite

    Science.gov (United States)

    Miyake, N.; Wallis, M. K.; Wickramasinghe, N. C.

    2013-09-01

    IDPs collected in 2001 at 40 km altitude by cryosamplers and studied via scanning electron microscopy and EDX were found to contain siliceous fibres and whiskers, some isolated but often embedded in a mineral matrix. The newly-arrived Polonnaruwa meteorite gives strong evidence for the hypothesis that they are fragments of diatoms agglomerated on solar system icy bodies. Diatom frustules and even whole diatom skeletons are identifiable within the meteorite. Specimens of a siliceous exoskeleton with multiple spines/whiskers, thought to be freshwater diatoms, have also been found. As diatoms are dependent on a source of nitrogenous organics, the siliceous whiskers within IDPs would be an indicator of a photosynthesizing ecosystem, probably on a comet.

  12. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices...... as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  13. Observations of oxidation products above a forest imply biogenic emissions of very reactive compounds

    Directory of Open Access Journals (Sweden)

    R. Holzinger

    2005-01-01

    Full Text Available Vertical gradients of mixing ratios of volatile organic compounds have been measured in a Ponderosa pine forest in Central California (38.90° N, 120.63° W, 1315 m). These measurements reveal large quantities of previously unreported oxidation products of short-lived biogenic precursors. The emission of biogenic precursors must be in the range of 13-66 µmol m^-2 h^-1 to produce the observed oxidation products. That is 6-30 times the emissions of total monoterpenes observed above the forest canopy on a molar basis. These reactive precursors constitute a large fraction of biogenic emissions at this site, and are not included in current emission inventories. When oxidized by ozone they should efficiently produce secondary aerosol and hydroxyl radicals.

  14. The quarter-power scaling model does not imply size-invariant hydraulic resistance in plants

    Science.gov (United States)

    Annikki Makela; Harry T. Valentine

    2006-01-01

    West, Brown, and Enquist (1997, 1999) propose an integrated model of the structure and allometry of plant vascular systems, which has come to be known as the 'WBE model' (Enquist, 2002). The WBE model weaves together area-preserving branching (Leonardo da Vinci), elastic similarity (Greenhill, 1881), the constant ratio of foliage mass to sapwood area (...

  15. 77 FR 2975 - Roosevelt Water Conservation District; Notice of Termination of Exemption by Implied Surrender...

    Science.gov (United States)

    2012-01-20

    .... Project No.: 11572-001. c. Date Initiated: January 9, 2012. d. Exemptee: Roosevelt Water Conservation District. e. Name and Location of Project: The Roosevelt Water Conservation District Conduit Hydropower..., Roosevelt Water Conservation District, 2344 S. Higley Road, Gilbert, AZ 82595-4794, (480) 988-9586. [[Page...

  16. Interstrand cross-linking implies contrasting structural consequences for DNA: insights from molecular dynamics

    Czech Academy of Sciences Publication Activity Database

    Bignon, E.; Dršata, Tomáš; Morell, C.; Lankaš, Filip; Dumont, E.

    2017-01-01

    Roč. 45, č. 4 (2017), s. 2188-2195 ISSN 0305-1048 R&D Projects: GA ČR(CZ) GA14-21893S Institutional support: RVO:61388963 Keywords : abasic sites * duplex DNA * mechanical properties Subject RIV: BO - Biophysics OBOR OECD: Biophysics Impact factor: 10.162, year: 2016 https://academic.oup.com/nar/article-lookup/doi/10.1093/nar/gkw1253

  17. Discovery of rapid whistlers close to Jupiter implying lightning rates similar to those on Earth

    Science.gov (United States)

    Kolmašová, Ivana; Imai, Masafumi; Santolík, Ondřej; Kurth, William S.; Hospodarsky, George B.; Gurnett, Donald A.; Connerney, John E. P.; Bolton, Scott J.

    2018-06-01

    Electrical currents in atmospheric lightning strokes generate impulsive radio waves in a broad range of frequencies, called atmospherics. These waves can be modified by their passage through the plasma environment of a planet into the form of dispersed whistlers [1]. In the Io plasma torus around Jupiter, Voyager 1 detected whistlers as several-seconds-long slowly falling tones at audible frequencies [2]. These measurements were the first evidence of lightning at Jupiter. Subsequently, Jovian lightning was observed by optical cameras on board several spacecraft in the form of localized flashes of light [3-7]. Here, we show measurements by the Waves instrument [8] on board the Juno spacecraft [9-11] that indicate observations of Jovian rapid whistlers: a form of dispersed atmospherics at extremely short timescales of several milliseconds to several tens of milliseconds. On the basis of these measurements, we report over 1,600 lightning detections, the largest set obtained to date. The data were acquired during close approaches to Jupiter between August 2016 and September 2017, at radial distances below 5 Jovian radii. We detected up to four lightning strokes per second, similar to rates in thunderstorms on Earth [12] and six times the peak rates from the Voyager 1 observations [13].

  18. Large arteriolar component of oxygen delivery implies a safe margin of oxygen supply to cerebral tissue.

    Science.gov (United States)

    Sakadžić, Sava; Mandeville, Emiri T; Gagnon, Louis; Musacchia, Joseph J; Yaseen, Mohammad A; Yucel, Meryem A; Lefebvre, Joel; Lesage, Frédéric; Dale, Anders M; Eikermann-Haerter, Katharina; Ayata, Cenk; Srinivasan, Vivek J; Lo, Eng H; Devor, Anna; Boas, David A

    2014-12-08

    What is the organization of cerebral microvascular oxygenation and morphology that allows adequate tissue oxygenation at different activity levels? We address this question in the mouse cerebral cortex using microscopic imaging of intravascular O2 partial pressure and blood flow combined with numerical modelling. Here we show that parenchymal arterioles are responsible for 50% of the extracted O2 at baseline activity, and the majority of the remaining O2 exchange takes place within the first few capillary branches. Most capillaries release little O2 at baseline acting as an O2 reserve that is recruited during increased neuronal activity or decreased blood flow. Our results challenge the common perception that capillaries are the major site of O2 delivery to cerebral tissue. The understanding of oxygenation distribution along arterio-capillary paths may have profound implications for the interpretation of blood-oxygen-level dependent (BOLD) contrast in functional magnetic resonance imaging and for evaluating microvascular O2 delivery capacity to support cerebral tissue in disease.

  19. Small-scale microwave background anisotropies implied by large-scale data

    Science.gov (United States)

    Kashlinsky, A.

    1993-01-01

    In the absence of reheating, microwave background radiation (MBR) anisotropies on arcminute scales depend uniquely on the amplitude and the coherence length of the primordial density fluctuations (PDFs). These can be determined from the recent data on galaxy correlations, xi(r), on linear scales (APM survey). We develop here expressions for the MBR angular correlation function, C(theta), on arcminute scales in terms of the power spectrum of PDFs and demonstrate their accuracy by comparing with detailed calculations of MBR anisotropies. We then show how to evaluate C(theta) directly in terms of the observed xi(r) and show that the APM data give information on the amplitude, C(0), and the coherence angle of MBR anisotropies on small scales.

  20. Shapes of Venusian 'pancake' domes imply episodic emplacement and silicic composition

    Science.gov (United States)

    Fink, Jonathan H.; Bridges, Nathan T.; Grimm, Robert E.

    1993-01-01

    The main evidence available for constraining the composition of the large circular 'pancake' domes on Venus is their gross morphology. Laboratory simulations using polyethylene glycol show that the height to diameter (aspect) ratios of domes of a given total volume depend critically on whether their extrusion was continuous or episodic, with more episodes leading to greater cooling and taller domes. Thus without observations of their emplacement, the compositions of Venusian domes cannot be uniquely constrained by their morphology. However, by considering a population of 51 Venusian domes to represent a sampling of many stages during the growth of domes with comparable histories, and by plotting aspect ratio versus total volume, we find that the shapes of the domes are most consistent with episodic emplacement. On Earth this mode of dome growth is found almost exclusively in lavas of dacite to rhyolite composition, strengthening earlier inferences about the presence of evolved magmas on Venus.